A pair of astrophysicists at the Rochester Institute of Technology has found via simulations that some black holes might be traveling through space at nearly one-tenth the speed of light. In their study, reported in Physical Review Letters, James Healy and Carlos Lousto used supercomputer simulations to determine how fast a black hole might be moving after it forms in a collision between two smaller black holes.
Prior research has shown that it is possible for two black holes to smash into each other, and when they do, they tend to merge. Mergers generate gravitational waves, and when that emission is lopsided it carries away momentum, giving the merged black hole a recoil in the opposite direction, similar to the recoil of a gun. That recoil can send the resulting black hole hurtling through space at incredible speeds.
Prior research has suggested such black holes may reach top speeds of approximately 5,000 km/sec. In this new effort, the researchers took a closer look at black hole speeds to determine just how fast they might travel after merging.
To that end, the researchers built a mathematical simulation. One of the main input parameters was the angle at which the two black holes approach one another prior to merging. Prior research has shown that for all but a direct head-on collision, there is likely to be a period of time when the two black holes circle each other before merging.
The researchers ran their simulation on a supercomputer to calculate the outcomes of mergers between black holes approaching each other from 1,300 different angles, including direct collisions and close flybys.
They found that in the best-case scenario of a grazing collision, it should be possible for a recoil to send the merged black hole zipping through space at approximately 28,500 kilometers per second, a speed at which it would cover the distance between the Earth and the Moon in just 13 seconds.
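As a quick arithmetic check on that figure (using the average Earth–Moon distance of roughly 384,400 km, a value not taken from the study):

$$
\frac{384{,}400\ \text{km}}{28{,}500\ \text{km/s}} \approx 13.5\ \text{s}
$$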
If you’ve never watched it, Kirby Ferguson’s “Everything is a Remix” series (which was recently updated from the original version that came out years ago) is an excellent look at how stupid our copyright laws are, and how they have really warped our view of creativity. As the series makes clear, creativity is all about remixing: taking inspiration and bits and pieces from other parts of culture and remixing them into something entirely new. All creativity involves this in some manner or another. There is no truly unique creativity.
And yet, copyright law assumes the opposite is true. It assumes that most creativity is entirely unique, and when remix and inspiration get too close, the powerful hand of the law has to slap people down.
[…]
It would have been nice if society had taken this issue seriously back then, recognized that “everything is a remix,” and understood that encouraging remixing and reusing the works of others to create something new and transformative was not just a good thing, but one that should be supported. If so, we might not be in the utter shitshow that is the debate over generative art from AI these days, in which many creators are rushing to copyright to save them, even though that’s not what copyright was designed to do, nor is it a particularly useful tool in that context.
[…]
The moral panic is largely an epistemological crisis: we lack a socially accepted status that makes the remix legible as art in its own right. Instead of properly appreciating the art of the DJ, the remix, or meme culture, we have shoehorned all of the associated cultural output onto a model of artistic credibility based on 1800s sheet-music publishing. The fit was never really good, but no one really cared because the scenes were small and underground, and their rule-breaking was largely out of sight.
[…]
AI art tools are simply resurfacing an old problem we left unresolved from the 1980s through the early 2000s. Now it’s time for us to blow the dust off those old books and apply what was learned then to the situation now on our hands.
We should not forget that the modern electronic dance music industry has already developed models that promote new artists via remixes of their work by more established artists. These real-world examples, combined with the theoretical frameworks above, should help us explore a refreshed model of artistic credibility, in which value is assigned both to the original artists and to the authors of the remixes.
[…]
Art, especially its popular forms, has always been largely about transformation: taking what exists and creating something that works in a particular context. In forms of art that place less emphasis on the distinctiveness of the original, transformation itself becomes the focus of the art form.
[…]
There are a lot of questions about how that would actually work in practice, but I do think this is a useful framework for thinking about these questions, challenging some existing assumptions, and rethinking the system into one that actually helps creators and enables more art to be created, rather than trying to twist a system originally developed to provide monopolies to gatekeepers into one that benefits the public who want to experience art and the creators who wish to make it.
Over the years we’ve covered a lot of attempts by relatively clueless governments and politicians to enact think-of-the-children internet censorship or surveillance legislation, but there’s a law from France in the works which we think has the potential to be one of the most sinister we’ve seen yet.
It’s likely that if they push this law through it will cause significant consternation over the rest of the European continent. We’d expect those European countries with less liberty-focused governments to enthusiastically jump on the bandwagon, and we’d also expect the European hacker community to respond with a plethora of ways for their French cousins to evade the snooping eyes of Paris. We have little confidence in the wisdom of the EU parliament in Brussels when it comes to ill-thought-out laws though, so we hope this doesn’t portend a future dark day for all Europeans. We find it very sad to see in any case, because France on the whole isn’t that kind of place.
Copyright issues have dogged AI since chatbot tech gained mass appeal, whether it’s accusations of entire novels being scraped to train ChatGPT or allegations that Microsoft and GitHub’s Copilot is pilfering code.
But one thing is for sure after a ruling [PDF] by the United States District Court for the District of Columbia – AI-created works cannot be copyrighted.
You’d think this was a simple case, but it has been rumbling on for years at the hands of one Stephen Thaler, founder of Missouri neural network biz Imagination Engines, who tried to copyright artwork generated by what he calls the Creativity Machine, a computer system he owns. The piece, A Recent Entrance to Paradise, was reproduced on page 4 of the complaint [PDF].
The US Copyright Office refused the application because copyright laws are designed to protect human works. “The office will not register works ‘produced by a machine or mere mechanical process’ that operates ‘without any creative input or intervention from a human author’ because, under the statute, ‘a work must be created by a human being’,” the review board told Thaler’s lawyer after his second attempt was rejected last year.
This was not a satisfactory response for Thaler, who then sued the US Copyright Office and its director, Shira Perlmutter. “The agency actions here were arbitrary, capricious, an abuse of discretion and not in accordance with the law, unsupported by substantial evidence, and in excess of Defendants’ statutory authority,” the lawsuit claimed.
But handing down her ruling on Friday, Judge Beryl Howell wouldn’t budge, pointing out that “human authorship is a bedrock requirement of copyright” and “United States copyright law protects only works of human creation.”
“Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them,” she wrote.
Though she acknowledged the need for copyright to “adapt with the times,” she shut down Thaler’s pleas by arguing that copyright protection can only be sought for something that has “an originator with the capacity for intellectual, creative, or artistic labor. Must that originator be a human being to claim copyright protection? The answer is yes.”
Unsurprisingly, Thaler’s legal team took an opposing view. “We strongly disagree with the district court’s decision,” University of Surrey Professor Ryan Abbott told The Register.
“In our view, the law is clear that the American public is the primary beneficiary of copyright law, and the public benefits when the generation and dissemination of new works are promoted, regardless of how those works are made. We do plan to appeal.”
This is just one legal case Thaler is involved in. Earlier this year, the US Supreme Court also refused to hear arguments that AI algorithms should be recognized by law as inventors on patent filings, once again brought by Thaler.
He sued the US Patent and Trademark Office (USPTO) in 2020 because patent applications he had filed on behalf of another of his AI systems, DABUS, were rejected. The USPTO refused to accept them as it could only consider inventions from “natural persons.”
That lawsuit was quashed, then taken to the US Court of Appeals, where it lost again. Thaler’s team finally turned to the Supreme Court, which wouldn’t give it the time of day.
When The Register asked Thaler to comment on the US Copyright Office defeat, he told us: “What can I say? There’s a storm coming.”
Obtaining useful work from random fluctuations in a system at thermal equilibrium has long been considered impossible. In fact, in the 1960s eminent American physicist Richard Feynman effectively shut down further inquiry after he argued in a series of lectures that Brownian motion, or the thermal motion of atoms, cannot perform useful work.
Now, a new study published in Physical Review E titled “Charging capacitors from thermal fluctuations using diodes” has proven that Feynman missed something important.
Three of the paper’s five authors are from the University of Arkansas Department of Physics. According to first author Paul Thibado, their study rigorously proves that the thermal fluctuations of freestanding graphene, when the graphene is connected to a circuit with diodes having nonlinear resistance and storage capacitors, do produce useful work by charging the storage capacitors.
The authors found that when the storage capacitors have an initial charge of zero, the circuit draws power from the thermal environment to charge them.
The team then showed that the system satisfies both the first and second laws of thermodynamics throughout the charging process. They also found that larger storage capacitors yield more stored charge and that a smaller graphene capacitance provides both a higher initial rate of charging and a longer time to discharge. These characteristics are important because they allow time to disconnect the storage capacitors from the energy harvesting circuit before the net charge is lost.
This latest publication builds on two of the group’s previous studies. The first was published in Physical Review Letters in 2016. In that study, Thibado and his co-authors identified the unique vibrational properties of graphene and its potential for energy harvesting.
The second was published in a 2020 Physical Review E article in which they discuss a circuit using graphene that can supply clean, limitless power for small devices or sensors.
This latest study progresses even further by establishing mathematically the design of a circuit capable of gathering energy from the heat of the earth and storing it in capacitors for later use.
“Theoretically, this was what we set out to prove,” Thibado explained. “There are well-known sources of energy, such as kinetic, solar, ambient radiation, acoustic, and thermal gradients. Now there is also nonlinear thermal power. Usually, people imagine that thermal power requires a temperature gradient. That is, of course, an important source of practical power, but what we found is a new source of power that has never existed before. And this new power does not require two different temperatures because it exists at a single temperature.”
In addition to Thibado, co-authors include Pradeep Kumar, John Neu, Surendra Singh, and Luis Bonilla. Kumar and Singh are also physics professors with the University of Arkansas, Neu with the University of California, Berkeley, and Bonilla with Universidad Carlos III de Madrid.
Figure: Representation of Nonlinear Thermal Current. Credit: Ben Goodwin
A decade of inquiry
The study represents the solution to a problem Thibado has been studying for well over a decade, ever since he and Kumar first tracked the dynamic movement of ripples in freestanding graphene at the atomic level. First isolated in 2004, graphene is a one-atom-thick sheet of graphite. The duo observed that freestanding graphene has a rippled structure, with each ripple flipping up and down in response to the ambient temperature.
“The thinner something is, the more flexible it is,” Thibado said. “And at only one atom thick, there is nothing more flexible. It’s like a trampoline, constantly moving up and down. If you want to stop it from moving, you have to cool it down to 20 Kelvin.”
His current efforts in the development of this technology are focused on building a device he calls a Graphene Energy Harvester (or GEH). GEH uses a negatively charged sheet of graphene suspended between two metal electrodes.
When the graphene flips up, it induces a positive charge in the top electrode. When it flips down, it positively charges the bottom electrode, creating an alternating current. Diodes wired in opposition allow the current to flow both ways, along separate paths through the circuit, producing a pulsing DC output that performs work on a load resistor.
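As a loose illustration of that rectification bookkeeping, here is a deliberately simplified toy in Python. It is not the authors’ graphene circuit and it sidesteps the equilibrium-thermodynamics subtleties (diode noise, nonlinear resistance) that the paper treats rigorously; the Ornstein-Uhlenbeck source and all parameter values are invented for the example.

```python
import numpy as np

# Toy model: an ideal one-way diode steering a fluctuating voltage source
# into a storage capacitor. Illustrative only; NOT the authors' circuit
# or their thermodynamic analysis. All parameter values are made up.
rng = np.random.default_rng(0)

dt = 1e-6        # time step (s)
n_steps = 200_000
tau = 1e-4       # correlation time of the fluctuating source (s)
v_rms = 0.1      # RMS amplitude of the source (V)
R = 1e3          # resistance of the conducting diode path (ohm)
C_store = 1e-6   # storage capacitance (F)

noise = rng.standard_normal(n_steps)
v_src = 0.0      # Ornstein-Uhlenbeck model of the fluctuating source
q = 0.0          # charge accumulated on the storage capacitor

for n in range(n_steps):
    # Ornstein-Uhlenbeck update for the fluctuating source voltage
    v_src += (-v_src / tau) * dt + v_rms * np.sqrt(2 * dt / tau) * noise[n]
    v_cap = q / C_store
    if v_src > v_cap:              # ideal diode: conducts one way only
        q += (v_src - v_cap) / R * dt

print(f"final stored voltage: {q / C_store * 1e3:.1f} mV")
```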
Commercial applications
NTS Innovations, a company specializing in nanotechnology, owns the exclusive license to develop GEH into commercial products. Because GEH circuits are so small, mere nanometers in size, they are ideal for mass duplication on silicon chips. When multiple GEH circuits are embedded on a chip in arrays, more power can be produced. They can also operate in many environments, making them particularly attractive for wireless sensors in locations where changing batteries is inconvenient or expensive, such as an underground pipe system or interior aircraft cable ducts.
[…]
“I think people were afraid of the topic a bit because of Feynman. So, everybody just said, ‘I’m not touching that.’ But the question just kept demanding our attention. Honestly, its solution was only found through the perseverance and diverse approaches of our unique team.”
More information: P. M. Thibado et al, Charging capacitors from thermal fluctuations using diodes, Physical Review E (2023). DOI: 10.1103/PhysRevE.108.024130
[…] Knowing the wave function of such a quantum system is a challenging task; this is also known as quantum state tomography, or quantum tomography for short. With the standard approaches (based on so-called projective measurements), a full tomography requires a large number of measurements, and that number rapidly increases with the system’s complexity (dimensionality).
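To get a rough sense of that scaling (a standard parameter-counting argument, not a figure from the paper): a general quantum state (density matrix) on a $D$-dimensional Hilbert space is fixed by $D^2 - 1$ real parameters, and for two photons each carrying $d$ spatial modes the joint dimension is $D = d^2$, so

$$
N_{\text{params}} = D^2 - 1 = d^4 - 1,
$$

each of which must be constrained by projective measurements. With, say, $d = 100$ modes per photon that is already on the order of $10^8$ parameters, which is why the measurement effort balloons with dimensionality.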
Previous experiments conducted with this approach by the research group showed that characterizing or measuring the high-dimensional quantum state of two entangled photons can take hours or even days. Moreover, the result’s quality is highly sensitive to noise and depends on the complexity of the experimental setup.
The projective measurement approach to quantum tomography can be thought of as looking at the shadows that a high-dimensional object casts on different walls from independent directions. All a researcher can see is the shadows, and from them they can infer the shape (state) of the full object. For instance, in a CT scan (computed tomography), a 3D object is reconstructed in this way from a set of 2D images.
In classical optics, however, there is another way to reconstruct a 3D object. It is called digital holography, and it is based on recording a single image, called an interferogram, obtained by interfering the light scattered by the object with a reference beam.
The team, led by Ebrahim Karimi, Canada Research Chair in Structured Quantum Waves, co-director of the uOttawa Nexus for Quantum Technologies (NexQT) research institute, and associate professor in the Faculty of Science, extended this concept to the case of two photons.
Reconstructing a biphoton state requires superimposing it with a presumably well-known quantum state and then analyzing the spatial distribution of the positions where two photons arrive simultaneously. Imaging the simultaneous arrival of two photons yields what is known as a coincidence image. Each photon may have come from the reference source or from the unknown source, and quantum mechanics makes it impossible to tell which.
This results in an interference pattern that can be used to reconstruct the unknown wave function. This experiment was made possible by an advanced camera that records events with nanosecond resolution on each pixel.
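The classical off-axis digital-holography idea behind this can be sketched with a simple one-dimensional toy in Python. This illustrates holographic phase retrieval in general, not the biphoton coincidence-imaging experiment; the test field, carrier frequency, and filter width are all made up for the example.

```python
import numpy as np

# Toy 1D off-axis holography: an "unknown" complex field is interfered with
# a known tilted plane-wave reference, only the intensity is recorded, and
# the field (amplitude and phase) is recovered by Fourier-filtering the
# cross term of the interferogram.
N = 1024
dx = 2.0 / N
x = (np.arange(N) - N // 2) * dx          # periodic grid on [-1, 1)

# Hypothetical unknown field: Gaussian envelope with a quadratic phase
psi = np.exp(-x**2 / 0.1) * np.exp(1j * 8 * np.pi * x**2)

# Known reference: plane wave with carrier frequency f0 (cycles per unit length)
f0 = 100.0
ref = np.exp(2j * np.pi * f0 * x)

# Recorded interferogram: intensity only, no phase information
I = np.abs(psi + ref) ** 2                # = |psi|^2 + 1 + psi*conj(ref) + conj(psi)*ref

# Keep only the sideband near -f0, where the psi*conj(ref) cross term lives
F = np.fft.fftshift(np.fft.fft(I))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
sideband = np.where(np.abs(freqs + f0) < 30.0, F, 0.0)

# Back-transform and remove the carrier to recover amplitude and phase of psi
psi_rec = np.fft.ifft(np.fft.ifftshift(sideband)) * ref

err = np.linalg.norm(psi_rec - psi) / np.linalg.norm(psi)
print(f"relative reconstruction error: {err:.2e}")
```

The point of the sketch is that a single intensity image, together with knowledge of the reference, is enough to recover both the amplitude and the phase of the unknown field.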
Dr. Alessio D’Errico, a postdoctoral fellow at the University of Ottawa and one of the co-authors of the paper, highlighted the immense advantages of this innovative approach, “This method is exponentially faster than previous techniques, requiring only minutes or seconds instead of days. Importantly, the detection time is not influenced by the system’s complexity—a solution to the long-standing scalability challenge in projective tomography.”
The impact of this research goes beyond just the academic community. It has the potential to accelerate quantum technology advancements, such as improving quantum state characterization, quantum communication, and developing new quantum imaging techniques.
The study “Interferometric imaging of amplitude and phase of spatial biphoton states” was published in Nature Photonics.
More information: Danilo Zia et al, Interferometric imaging of amplitude and phase of spatial biphoton states, Nature Photonics (2023). DOI: 10.1038/s41566-023-01272-3