More and more, as the video game industry matures, we find ourselves talking about game preservation and the disappearing culture of some older games as the original publishers abandon them. The public is often left with no legitimate way to purchase these old games, and copyright law conspires with that situation to prevent the public from clawing back its half of the copyright bargain. The end result is studios and publishers that have enjoyed the fruits of copyright law for a period of time, only for that cultural output to be withheld from the public later on. By any plain reading of American copyright law, that outcome shouldn’t be acceptable.
When it comes to one classic PlayStation 1 title, one enterprising individual has very much refused to accept this outcome. A fan of the first-party Sony title WipeOut, originally released on the PS1, has ported the game so that it can be played in a web browser. And, just to drive the point home, he has essentially dared Sony to do something about it.
“Either let it be, or shut this thing down and get a real remaster going,” he told Sony in a recent blog post (via VGC). Despite the release of the PlayStation Classic, 2017’s Wipeout Omega Collection, and PS Plus adding old PS1 games to PS5 like Twisted Metal, there’s no way to play the original WipeOut on modern consoles and experience the futuristic racer’s incredible soundtrack and neo-Tokyo aesthetic in all their glory. So fans have taken it upon themselves to make the Psygnosis-developed hit accessible on PC.
As Dominic Szablewski explains in his post and in a series of videos documenting this labor of love, getting it all to work took a great deal of unraveling of the source code. The whole thing was a mess, primarily because every iteration of the game simply layered new code on top of the last, meaning there was a lot of onion-peeling to be done before any of it would run.
But work it does!
After a lot of detective work and elbow grease, Szablewski managed to resurrect a modified playable version of the game with an uncapped framerate that looks crisp and sounds great. He still recommends two other existing PC ports over his own, WipeOut Phantom Edition and an unnamed project by a user named XProger. However, those don’t come with the original source code, the legality of which he admits is “questionable at best.”
But again, what is the public supposed to do here? The original game simply can’t be bought legitimately and hasn’t been available for some time. Violating copyright law certainly isn’t the right answer, but neither is allowing a publisher to let cultural output go to rot simply because it doesn’t want to do anything about it.
“Sony has demonstrated a lack of interest in the original WipeOut in the past, so my money is on their continuing absence,” Szablewski wrote. “If anyone at Sony is reading this, please consider that you have (in my opinion) two equally good options: either let it be, or shut this thing down and get a real remaster going. I’d love to help!”
Sadly, I’m fairly certain I know how this story will end.
The Mozilla Foundation has started a petition to stop the French government from forcing browsers like Mozilla’s Firefox to censor websites. “It would set a dangerous precedent, providing a playbook for other governments to also turn browsers like Firefox into censorship tools,” says the organization. “The government introduced the bill to parliament shortly before the summer break and is hoping to pass this as quickly and smoothly as possible; the bill has even been put on an accelerated procedure, with a vote to take place this fall.” You can add your name to their petition here.
The bill in question is France’s SREN Bill, which sets a precarious standard for digital freedoms by empowering the government to compile a list of websites to be blocked at the browser level. The Mozilla Foundation warns that this approach “is uncharted territory” and could give oppressive regimes an operational model for undermining the effectiveness of censorship circumvention tools.
“Rather than mandate browser based blocking, we think the legislation should focus on improving the existing mechanisms already utilized by browsers — services such as Safe Browsing and Smart Screen,” says Mozilla. “The law should instead focus on establishing clear yet reasonable timelines under which major phishing protection systems should handle legitimate website inclusion requests from authorized government agencies. All such requests for inclusion should be based on a robust set of public criteria limited to phishing/scam websites, subject to independent review from experts, and contain judicial appellate mechanisms in case an inclusion request is rejected by a provider.”
On Friday, the Internet Archive put up a blog post noting that its digital book lending program was likely to change as it continues to fight the book publishers’ efforts to kill the Internet Archive. As you’ll recall, all the big book publishers teamed up to sue the Internet Archive over its Open Library project, which was created based on a detailed approach, backed by librarians and copyright lawyers, to recreate an online digital library that matches a physical library. Unfortunately, back in March, the judge decided (just days after oral arguments) that everything about the Open Library infringes on copyrights. There were many, many problems with this ruling, and the Archive is appealing.
However, in the meantime, the judge in the district court needed to sort out the details of the injunction in terms of what activities the Archive would change during the appeal. The Internet Archive and the publishers negotiated over the terms of such an injunction and asked the court to weigh in on whether or not it also covers books for which there are no ebooks available at all. The Archive said it should only cover books where the publishers make an ebook available, while the publishers said it should cover all books, because of course they did. Given Judge Koeltl’s original ruling, I expected him to side with the publishers, and effectively shut down the Open Library. However, this morning he surprised me and sided with the Internet Archive, saying only books that are already available in electronic form need to be removed. That’s still a lot, but at least it means people can still access those other works electronically. The judge rightly noted that the injunction should be narrowly targeted towards the issues at play in the case, and thus it made sense to only block works available as ebooks.
But, also on Friday, the RIAA decided to step in and to try to kick the Internet Archive while it’s down. For years now, the Archive has offered up its Great 78 Project, in which the Archive, in coordination with some other library/archival projects (including the Archive of Contemporary Music and George Blood LP), has been digitizing whatever 78rpm records they could find.
78rpm records were some of the earliest musical recordings, and were produced from 1898 through the 1950s when they were replaced by 33 1/3rpm and 45rpm vinyl records. I remember that when I was growing up my grandparents had a record player that could still play 78s, and there were a few of those old 78s in a cabinet. Most of the 78s were not on vinyl, but shellac, and were fairly brittle, meaning that many old 78s are gone forever. As such there is tremendous value in preserving and protecting old 78s, which is also why many libraries have collections of them. It’s also why those various archival libraries decided to digitize and preserve them. Without such an effort, many of those 78s would disappear.
If you’ve ever gone through the Great78 project, you know quite well that it is, in no way, a substitute for music streaming services like Spotify or Apple Music. You get a static page in which you (1) see a photograph of the original 78 label, (2) get some information on that recording, and (3) are able to listen to and download just that song. Here’s a random example I pulled:
Also, when you listen to it, you can clearly hear that this was digitized straight off of the 78 itself, including all the crackle and hissing of the record. It is nothing like the carefully remastered versions you hear on music streaming services.
Indeed, I’ve used the Great78 Project to discover old songs I’d never heard before, leading me to search out those artists on Spotify to add to my playlists, meaning that for me, personally, the Great78 Project has almost certainly resulted in the big record labels making more money, as it added more artists for me to listen to through licensed systems.
It’s no secret that the recording industry had it out for the Great78 Project. Three years ago, we wrote about how Senator Thom Tillis (who has spent his tenure in the Senate pushing for whatever the legacy copyright industries want) seemed absolutely apoplectic when the Internet Archive bought a famous old record store in order to get access to the 78s to digitize, and Tillis thought that this attempt to preserve culture was shameful.
The lawsuit, joined by all of the big RIAA record labels, was filed by one of the RIAA’s favorite lawyers for destroying anything good that expands access to music: Matt Oppenheim. Matt was at the RIAA and helped destroy both Napster and Grokster. He was also the lawyer who helped create some terrible precedents holding ISPs liable for subscribers who download music, enabling even greater copyright trolling. Basically, if you’ve seen anything cool and innovative in the world of music over the last two decades, Oppenheim has been there to kill it.
And now he’s trying to kill the world’s greatest library.
Much of the actual lawsuit revolves around the Music Modernization Act, which was passed in 2018 and had some good parts in it, in particular in moving some pre-1972 sound recordings into the public domain. As you might also recall, prior to February of 1972, sound recordings did not get federal copyright protection (though they might get some form of state copyright). Indeed, in most of the first half of the 20th century, many copyright experts believed that federal copyright could not apply to sound recordings and that it could only apply to the composition. After February of 1972, sound recordings were granted federal copyright, but that left pre-1972 works in a weird state, in which they were often protected by an amalgamation of obsolete state laws, meaning that some works might not reach the public domain for well over a century. This was leading to real concerns that some of our earliest recordings would disappear forever.
The Music Modernization Act sought to deal with some of that, creating a process by which pre-1972 sound recordings would be shifted under federal copyright, and a clear process began to move some of the oldest ones into the public domain. It also created a process for dealing with old orphaned works, where the copyright holder could not be found. The Internet Archive celebrated all of this, and noted that it would be useful for some of its archival efforts.
The lawsuit accuses the Archive (and Brewster Kahle directly) of then ignoring the limitations and procedures in the Music Modernization Act to just continue digitizing and releasing all of the 78s it could find, including those by some well known artists whose works are available on streaming platforms and elsewhere. It also whines that the Archive often posts links to newly digitized Great78 records on ex-Twitter.
When the Music Modernization Act’s enactment made clear that unauthorized copying, streaming, and distributing pre-1972 sound recordings is infringing, Internet Archive made no changes to its activities. Internet Archive did not obtain authorization to use the recordings on the Great 78 Project website. It did not remove any recordings from public access. It did not slow the pace at which it made new recordings publicly available. It did not change its policies regarding which recordings it would make publicly available.
Internet Archive has not filed any notices of non-commercial use with the Copyright Office. Accordingly, the safe harbor set forth in the Music Modernization Act is not applicable to Internet Archive’s activities.
Internet Archive knew full well that the Music Modernization Act had made its activities illegal under Federal law. When the Music Modernization Act went into effect, Internet Archive posted about it on its blog. Jeff Kaplan, The Music Modernization Act is now law which means some pre-1972 music goes public, INTERNET ARCHIVE (Oct. 15, 2018), https://blog.archive.org/2018/10/15/the-music-modernization-act-is-now-law-which-means-some-music-goes-public/. The blog post stated that “the MMA means that libraries can make some of these older recordings freely available to the public as long as we do a reasonable search to determine that they are not commercially available.” Id. (emphasis added). The blog post further noted that the MMA “expands an obscure provision of the library exception to US Copyright Law, Section 108(h), to apply to all pre-72 recordings. Unfortunately 108(h) is notoriously hard to implement.” Id. (emphasis added). Brewster Kahle tweeted a link to the blog post. Brewster Kahle (@brewster_kahle), TWITTER (Oct. 15, 2018 11:26 AM), https://twitter.com/brewster_kahle/status/1051856787312271361.
Kahle delivered a presentation at the Association for Recorded Sound Collection’s 2019 annual conference titled, “Music Modernization Act 2018. How it did not go wrong, and even went pretty right.” In the presentation, Kahle stated that, “We Get pre-1972 out-of-print to be ‘Library Public Domain’!”. The presentation shows that Kahle, and, by extension, Internet Archive and the Foundation, understood how the Music Modernization Act had changed federal law and was aware the Music Modernization Act had made it unlawful under federal law to reproduce, distribute, and publicly perform pre-1972 sound recordings.
Despite knowing that the Music Modernization Act made its conduct infringing under federal law, Internet Archive ignored the new law and plowed forward as if the Music Modernization Act had never been enacted.
There’s a lot in the complaint that you can read. It attacks Brewster Kahle personally, falsely claiming that Kahle has “advocated against the copyright laws for years,” rather than the more accurate statement that Kahle has advocated against problematic copyright laws that lock down, hide, and destroy culture. The lawsuit even tries to use Kahle’s important, though unfortunately unsuccessful, Kahle v. Gonzales case against him. That case argued (compellingly, though not to the 9th Circuit’s satisfaction) that when Congress changed copyright law from an opt-in system (in which you had to register a work to get a copyright) to one in which everything is automatically covered by copyright, it changed the very nature of copyright law and took it beyond the limits required under the Constitution. That was not an “anti-copyright” lawsuit. It was an “anti-massive expansion of copyright in a manner that harms culture” lawsuit.
It is entirely possible (perhaps even likely) that the RIAA will win this lawsuit. As Oppenheim knows well, the courts are often quite smitten with the idea that the giant record labels and publishers and movie studios “own” culture and can limit how the public experiences it.
But all this really does is demonstrate exactly how broken modern copyright law is. There is no sensible or rational world in which an effort to preserve culture and make it available to people should be deemed a violation of the law, especially when that culture consists mostly of works that the record labels themselves ignored for decades, allowing them to decay and disappear in many instances. To come back now, decades later, and try to kill off library preservation and archival efforts is just an insult to the way culture works.
It’s doubly stupid given that the RIAA, and Oppenheim in particular, spent years trying to block music from ever being available on the internet. Only now that the very internet they fought has developed streaming systems that re-invigorated the labels’ bank accounts does the RIAA get to pretend that of course it cares about music from the first half of the 20th century — music that it was happy to let decay and die off until just recently.
Whether or not the case is legally sound is one thing; chances are the labels may well win. But on a moral level, everything about this is despicable. The Great78 Project isn’t taking a dime away from artists or the labels. No one is listening to those recordings as a replacement for licensed services. Again, if anything, it’s helping to rejuvenate interest in those old recordings for free.
And if this lawsuit succeeds, it could very well put the nail in the coffin of the Internet Archive, which is already in trouble due to the publishers’ lawsuit.
Over the last few years, the RIAA had sort of taken a step back from being the internet’s villain, but its instincts to kill off and spit on culture never went away.
These copyright goons really hate the idea of preserving culture. Can you imagine doing something once and then getting paid for it every time someone sees your work?! Crazy!
A Wednesday statement from South Korea’s Fair Trade Commission brought news that in late July it wrote to Google to formally notify it of the ₩42.1 billion ($31.5 million) fine announced, and reported by The Register, in April 2023.
The Commission has also commenced monitoring activities to ensure that Google complies with requirements to allow competition with its Play store.
South Korea probed the operation of Play after a rival local Android app-mart named OneStore debuted in 2016.
OneStore had decent prospects of success because it merged app stores operated by South Korea’s top three telcos. Naver, an online portal similar in many ways to Google, also rolled its app store into OneStore.
Soon afterwards, Google told developers they were free to sell their wares in OneStore – but doing so would see them removed from the Play store.
Google also offered South Korean developers export assistance if they signed exclusivity deals in their home country.
Faced with the choice of being cut off from the larger markets Google controlled, developers’ enthusiasm for dabbling in OneStore dwindled. Some popular games never made it into OneStore, so even though its founders had tens of millions of customers between them, the venture struggled.
Which is why Korea’s Fair Trade Commission intervened with an investigation, the fines mentioned above, and a requirement that Google revisit agreements with local developers.
Google has also been required to establish an internal monitoring system to ensure it complies with the Commission’s orders.
Commission chair Ki-Jeong Han used strong language in today’s announcement, describing his agency’s actions as “putting the brakes” on Google’s efforts to achieve global app store dominance.
“Monopolization of the app market may adversely affect the entire mobile ecosystem,” the Commissioner’s statement reads, adding “The recovery of competition in this market is very important.”
It’s also likely beneficial to South Korean companies. OneStore has tried to expand overseas, and Samsung – the world’s top smartphone vendor by unit volume – also stands to gain. It operates its own Galaxy Store that, despite its presence on hundreds of millions of handsets, enjoys trivial market share.
HP has failed to shunt aside class-action legal claims that it disables the scanners on its multifunction printers when their ink runs low. Though not for lack of trying.
On Aug. 10, a federal judge ruled that HP Inc. must face a class-action lawsuit claiming that the company designs its “all-in-one” inkjet printers to disable scanning and faxing functions whenever a single printer ink cartridge runs low. The company had sought — for the second time — to dismiss the lawsuit on technical legal grounds.
“It is well-documented that ink is not required in order to scan or to fax a document, and it is certainly possible to manufacture an all-in-one printer that scans or faxes when the device is out of ink,” the plaintiffs wrote in their complaint. “Indeed, HP designs its all-in-one printer products so they will not work without ink. Yet HP does not disclose this fact to consumers.”
The lawsuit charges that HP deliberately withholds this information from consumers to boost profits from the sale of expensive ink cartridges.
Color printers require four ink cartridges — one black and a set of three cartridges in cyan, magenta and yellow for producing colors. Some will also refuse to print if one of the color cartridges is low, even in black-and-white mode.
[…]
Worse, a significant amount of ink is never actually used to print documents because it’s consumed by printer maintenance cycles. In 2018, Consumer Reports tested hundreds of all-in-one inkjet printers and found that, when used intermittently, many models delivered less than half of their ink to printed documents. A few managed no more than 20% to 30%.
A few months ago, an engineer in a data center in Norway encountered perplexing errors that caused a Windows server to suddenly reset its system clock to 55 days in the future. The engineer relied on the server to maintain a routing table that tracked cell phone numbers in real time as they moved from one carrier to another. A jump of eight weeks had dire consequences because it caused numbers that had yet to be transferred to be listed as having already been moved, and numbers that had already been transferred to be reported as pending.
[…]
The culprit was a little-known feature in Windows known as Secure Time Seeding. Microsoft introduced the time-keeping feature in 2016 as a way to ensure that system clocks were accurate. Windows systems with clocks set to the wrong time can cause disastrous errors when they can’t properly parse timestamps in digital certificates or they execute jobs too early, too late, or out of the prescribed order. Secure Time Seeding, Microsoft said, was a hedge against failures in the battery-powered onboard devices designed to keep accurate time even when the machine is powered down.
[…]
Sometime last year, a separate engineer named Ken began seeing similar time drifts. They were limited to two or three servers and occurred every few months. Sometimes the clock times jumped by a matter of weeks; other times, the times changed to as late as the year 2159.
“It has exponentially grown to be more and more servers that are affected by this,” Ken wrote in an email. “In total, we have around 20 servers (VMs) that have experienced this, out of 5,000. So it’s not a huge amount, but it is considerable, especially considering the damage this does. It usually happens to database servers. When a database server jumps in time, it wreaks havoc, and the backup won’t run, either, as long as the server has such a huge offset in time. For our customers, this is crucial.”
Simen and Ken, who both asked to be identified only by their first names because they weren’t authorized by their employers to speak on the record, soon found that engineers and administrators had been reporting the same time resets since 2016.
[…]
“At this point, we are not completely sure why secure time seeding is doing this,” Ken wrote in an email. “Being so seemingly random, it’s difficult to [understand]. Microsoft hasn’t really been helpful in trying to track this, either. I’ve sent over logs and information, but they haven’t really followed this up. They seem more interested in closing the case.”
The logs Ken sent looked like the ones shown in the two screenshots below. They captured the system events that occurred immediately before and after the STS changed the times. The selected line in the first image shows the bounds of what STS calculates as the correct time based on data from SSL handshakes and the heuristics used to corroborate it.
Screenshot of a system event log as STS causes a system clock to jump to a date four months later than the current time. Ken
Screenshot of a system event log when STS resets the system date to a few weeks later than the current date. Ken
The “Projected Secure Time” entry immediately above the selected line shows that Windows estimates the current date to be October 20, 2023, more than four months later than the time shown in the system clock. STS then changes the system clock to match the incorrectly projected secure time, as shown in the “Target system time.”
The second image shows a similar scenario in which STS changes the date from June 10, 2023, to July 5, 2023.
[…]
As the creator and lead developer of the Metasploit exploit framework, a penetration tester, and a chief security officer, HD Moore has a deep background in security. He speculated that it might be possible for malicious actors to exploit STS to breach Windows systems that don’t have STS turned off. One possible exploit would work with an attack technique known as Server Side Request Forgery.
Microsoft’s repeated refusal to engage with customers experiencing these problems means that for the foreseeable future, Windows will by default continue to automatically reset system clocks based on values that remote third parties include in SSL handshakes. Further, it means that it will be incumbent on individual admins to manually turn off STS when it causes problems.
That, in turn, is likely to keep fueling criticism that the feature as it has existed for the past seven years does more harm than good.
STS “is more like malware than an actual feature,” Simen wrote. “I’m amazed that the developers didn’t see it, that QA didn’t see it, and that they even wrote about it publicly without anyone raising a red flag. And that nobody at Microsoft has acted when being made aware of it.”
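For admins who decide to turn the feature off, Microsoft’s Windows Time service documentation points to a registry value, UtilizeSslTimeData, under the W32Time configuration key. The minimal Python sketch below (standard library only, run from an elevated prompt on Windows) sets that value to 0 and asks the service to reload its configuration; treat it as illustrative and verify the key path against the documentation for your Windows build before relying on it.

```python
# Sketch: disable Windows Secure Time Seeding (STS) by setting UtilizeSslTimeData
# to 0, per Microsoft's Windows Time service documentation. Windows-only;
# requires Administrator rights. Verify the key path on your own system first.
import subprocess
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\W32Time\Config"
VALUE_NAME = "UtilizeSslTimeData"

def disable_secure_time_seeding() -> None:
    # Open the W32Time config key with write access and set the flag to 0.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 0)

    # Tell the Windows Time service to re-read its configuration.
    subprocess.run(["w32tm", "/config", "/update"], check=True)

if __name__ == "__main__":
    disable_secure_time_seeding()
    print("Secure Time Seeding disabled; W32Time has been told to reload its config.")
```

Setting the same value back to 1 and re-running the w32tm command should re-enable the feature if you later change your mind.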
Third-party merchants on Amazon who ship their own packages will see an additional fee for each product sold starting on Oct. 1st. Sellers could previously choose to ship their products without paying Amazon an extra cut, but under the new fee, members of Amazon’s Seller Fulfilled Prime program will be required to pay the company 2% on each product sold.
The new surcharge is in addition to other payments Amazon receives from merchants starting with the selling plan which costs $0.99 for each product sold or $39.99 per month for an unlimited number of sales. The company also charges a referral fee for each item sold, with most ranging between 8% and 15% depending on the product category.
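To see how these charges stack up, here is a back-of-the-envelope sketch in Python. The $39.99 monthly plan fee, the 8 to 15% referral range, and the new 2% surcharge come from the paragraphs above; the item price, monthly volume, and the specific referral rate used are made-up assumptions for illustration.

```python
# Back-of-the-envelope sketch of the fees described above for a hypothetical
# Seller Fulfilled Prime merchant on the monthly selling plan. Item price,
# sales volume, and the 15% referral rate are illustrative assumptions.

ITEM_PRICE = 50.00          # hypothetical selling price of one product
MONTHLY_SALES = 500         # hypothetical units sold per month
PLAN_FEE = 39.99            # monthly selling plan (unlimited sales)
REFERRAL_RATE = 0.15        # most categories fall between 8% and 15%
SFP_SURCHARGE = 0.02        # new 2% fee on each Seller Fulfilled Prime sale

gross = ITEM_PRICE * MONTHLY_SALES
referral_fees = gross * REFERRAL_RATE
sfp_fees = gross * SFP_SURCHARGE
total_fees = PLAN_FEE + referral_fees + sfp_fees

print(f"Gross sales:         ${gross:,.2f}")
print(f"Referral fees (15%): ${referral_fees:,.2f}")
print(f"New 2% SFP fee:      ${sfp_fees:,.2f}")
print(f"Plan fee:            ${PLAN_FEE:,.2f}")
print(f"Total to Amazon:     ${total_fees:,.2f} "
      f"({total_fees / gross:.1%} of gross, before shipping and other charges)")
```

On those assumed numbers, Amazon’s cut lands around 17% of gross before the seller pays for shipping, inventory, or any of the other fees mentioned below.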
Since the program launched in 2015, merchants could independently ship their products without paying a fee to Amazon but the new shipping charge may add pressure to switch to the company’s in-house service. As it stands, sellers can already incur other additional charges including fees for stocking inventory, rental book service, high-volume listings, and a refund administration fee, although Amazon does not list the costs on its website.
This is a problem: Amazon is using its position to create a logistics monopoly and put other logistics firms out of business. Amazon should stick to being a marketplace, and this should be enforced by government.
Scientists have trained a computer to analyze the brain activity of someone listening to music and, based only on those neuronal patterns, recreate the song. The research, published on Tuesday, produced a recognizable, if muffled, version of Pink Floyd’s 1979 song “Another Brick in the Wall (Part 1).” […] To collect the data for the study, the researchers recorded from the brains of 29 epilepsy patients at Albany Medical Center in New York State from 2009 to 2015. As part of their epilepsy treatment, the patients had a net of nail-like electrodes implanted in their brains. This created a rare opportunity for the neuroscientists to record their brain activity while they listened to music. The team chose the Pink Floyd song partly because older patients liked it. “If they said, ‘I can’t listen to this garbage,’” then the data would have been terrible, Dr. Schalk said. Plus, the song features 41 seconds of lyrics and two-and-a-half minutes of moody instrumentals, a combination that was useful for teasing out how the brain processes words versus melody.
Robert Knight, a neuroscientist at the University of California, Berkeley, and the leader of the team, asked one of his postdoctoral fellows, Ludovic Bellier, to try to use the data set to reconstruct the music “because he was in a band,” Dr. Knight said. The lab had already done similar work reconstructing words. By analyzing data from every patient, Dr. Bellier identified what parts of the brain lit up during the song and what frequencies these areas were reacting to. Much like how the resolution of an image depends on its number of pixels, the quality of an audio recording depends on the number of frequencies it can represent. To legibly reconstruct “Another Brick in the Wall,” the researchers used 128 frequency bands. That meant training 128 computer models, which collectively brought the song into focus. The researchers then ran the output from four individual brains through the model. The resulting recreations were all recognizably the Pink Floyd song but had noticeable differences. Patient electrode placement probably explains most of the variance, the researchers said, but personal characteristics, like whether a person was a musician, also matter.
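The study’s actual pipeline isn’t reproduced here, but the core idea in that paragraph, one regression model per spectral band mapping neural features to that band’s energy over time, can be sketched in a few lines. Everything in the snippet (the ridge regression, the array shapes, the random stand-in data) is an illustrative assumption rather than the researchers’ code.

```python
# Conceptual sketch of "one model per frequency band": each of the 128 bands
# of the song's spectrogram gets its own regression from neural features.
# Shapes, the ridge regressor, and the random data are stand-in assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

N_TIMEPOINTS = 5000   # time bins across the song
N_ELECTRODES = 300    # neural features (e.g., per-electrode high-frequency power)
N_BANDS = 128         # spectrogram frequency bands to reconstruct

neural = rng.standard_normal((N_TIMEPOINTS, N_ELECTRODES))   # stand-in recordings
spectrogram = rng.standard_normal((N_TIMEPOINTS, N_BANDS))   # stand-in target

# Train one model per band; stacking their predictions rebuilds a spectrogram,
# which could then be turned back into audio with a vocoder.
models = [Ridge(alpha=1.0).fit(neural, spectrogram[:, band])
          for band in range(N_BANDS)]

reconstructed = np.column_stack([m.predict(neural) for m in models])
print(reconstructed.shape)  # (5000, 128): a predicted spectrogram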
The data captured fine-grained patterns from individual clusters of brain cells. But the approach was also limited: Scientists could see brain activity only where doctors had placed electrodes to search for seizures. That’s part of why the recreated songs sound like they are being played underwater. […] The researchers also found a spot in the brain’s temporal lobe that reacted when volunteers heard the 16th notes of the song’s guitar groove. They proposed that this particular area might be involved in our perception of rhythm. The findings offer a first step toward creating more expressive devices to assist people who can’t speak. Over the past few years, scientists have made major breakthroughs in extracting words from the electrical signals produced by the brains of people with muscle paralysis when they attempt to speak.
On Tuesday, Snapchat’s My AI in-app chatbot posted its own Story to the app that appeared to be a photo of a wall and ceiling. It then stopped responding to users’ messages, which some Snapchat users found disconcerting. TechCrunch reports: Though the incident made for some great tweets (er, posts), we regret to inform you that My AI did not develop self-awareness and a desire to express itself through Snapchat Stories. Instead, the situation arose because of a technical outage, just as the bot explained. Snap confirmed the issue, which was quickly addressed last night, was just a glitch. (And My AI wasn’t snapping photos of your room, by the way). “My AI experienced a temporary outage that’s now resolved,” a spokesperson told TechCrunch.
However, the incident does raise the question as to whether or not Snap was considering adding new functionality to My AI that would allow the AI chatbot to post to Stories. Currently, the AI bot sends text messages and can even Snap you back with images — weird as they may be. But does it do Stories? Not yet, apparently. “At this time, My AI does not have Stories feature,” a Snap spokesperson told us, leaving us to wonder if that may be something Snap has in the works.
Tesla has added a new Standard Range trim for both its aging Model S and Model X luxury cars this week, effectively slashing the barrier to entry for the automaker’s flagship sedan and SUV by a staggering $10,000 each. The Model S SR now comes in at $78,490, and the Model X SR at $88,490—both before the automaker’s mandatory $1,390 destination and $250 order fees.
As the name suggests, the $10,000 trade-off is how far the vehicle can travel on a charge. Model S gets an 85-mile reduction to 320 miles (down from 405 miles) and Model X shaves off 79 miles from its range, resulting in 269 miles to a charge (down from 348 miles). There’s just one catch that might rankle new SR owners: all Model S and X vehicles reportedly use the same gross capacity battery pack regardless of trim. In other words, the Standard Range variants have been software locked at a lower usable capacity to justify the price difference.
Software locking a battery pack at a lower usable capacity is an old trick Tesla pulled from its sleeve that was previously used to limit early Model S cars to 60 kWh, down from 75 kWh. With these new configurations, the EV maker has also slowed the zero to 60 MPH sprint from 3.1 to 3.7 seconds in the Model S and from 3.8 to 4.4 seconds in the Model X.
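For a rough sense of how much of the shared pack is walled off, here is a back-of-the-envelope sketch in Python using only the ranges quoted above and the simplifying assumption that range scales linearly with usable capacity (it doesn’t exactly, so treat the percentages as ballpark figures).

```python
# Rough estimate of how much of the shared battery pack the Standard Range
# software lock leaves usable, assuming range scales roughly linearly with
# usable capacity. Ranges are the figures quoted above; the linear scaling
# is a simplifying assumption.
RANGES = {
    "Model S": {"standard": 320, "full": 405},
    "Model X": {"standard": 269, "full": 348},
}

for model, r in RANGES.items():
    usable_fraction = r["standard"] / r["full"]
    print(f"{model}: ~{usable_fraction:.0%} of the pack usable, "
          f"~{1 - usable_fraction:.0%} locked out in software")
```

On those numbers, roughly a fifth of each car’s battery would be sitting behind the software lock.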
[…]
Whether Tesla will let owners “unlock” the remainder of the car’s battery as an over-the-air purchase later on is currently unclear. Tesla previously allowed owners of early Model S 60D vehicles to pay $4,500 to access an additional 15 kWh of usable battery (it later reduced the price to $2,000), whereas Model X owners have paid as much as $9,000 for the same privilege in the past.
BMW and Mercedes are also putting features you already paid for behind paywalls, even though you own the car’s hardware. It’s something these companies really shouldn’t be allowed to get away with.
The U.S. Air Force says it has picked aviation startup JetZero to design and build a full-size demonstrator aircraft with a blended wing body, or BWB, configuration. The goal is for the aircraft, which has already received the informal moniker XBW-1, to be flying by 2027.
Secretary of the Air Force Frank Kendall made the announcement about JetZero‘s selection at an event today hosted by the Air & Space Forces Association. The service hopes this initiative will offer a pathway to future aerial refueling tankers and cargo aircraft that are significantly more fuel efficient than existing types with more traditional planforms. Blended wing designs can also offer heavier lifting capability and large amounts of internal volume, among other advantages. In this way, the effort could help inform requirements for the Next-Generation Air Refueling System (NGAS) and Next-Generation Airlift (NGAL) programs, which the Air Force is still in the process of refining.
“Blended wing body aircraft have the potential to significantly reduce fuel demand and increase global reach,” Secretary Kendall said in a statement in a separate press release. “Moving forces and cargo quickly, efficiently, and over long distance[s] is a critical capability to enable national security strategy.”
A rendering that JetZero previously released showing its BWB concept. JetZero
The service’s Office of Energy, Installations, and Environment is leading this initiative in cooperation with the Department of Defense’s Defense Innovation Unit (DIU). DIU is tasked with “accelerating the adoption of leading commercial technology throughout the military,” according to its website. Secretary Kendall said that NASA has also made important contributions to the effort.
“As outlined in the fiscal year 2023 National Defense Authorization Act, the Department of Defense plans to invest $235 million over the next four years to fast-track the development of this transformational dual-use technology, with additional private investment expected,” according to the Air Force’s press release. Additional funding will come from other streams, as well.
The Air Force and DIU have been considering bids for more than a year and by last month had reportedly narrowed the field down to just two competitors. JetZero is the only company to have previously publicly confirmed it was proposing a design, which it calls the Z-5, for the new BWB initiative. The company has partnered with Northrop Grumman on this project. Scaled Composites, a wholly-owned Northrop Grumman subsidiary that is well known for its bleeding-edge aerospace design and rapid prototyping capabilities, will specifically be supporting this work.
A rendering of JetZero’s BWB concept configured as a tanker, with F-35A Joint Strike Fighters flying in formation and receiving fuel. JetZero
A formal request for information issued last year outlined the main goals of the BWB project as centering on a design that would be at least 30 percent more aerodynamically efficient than a Boeing 767 or an Airbus A330. These two commercial airliners are notably the basis for the Boeing KC-46A Pegasus tanker (which has a secondary cargo-carrying capability), dozens of which are in Air Force service now, and the Airbus A330 Multi-Role Tanker Transport (MRTT).
A US Air Force KC-46A Pegasus tanker. USAF
The hope is that the BWB design, combined with unspecified advanced engine technology, could lead to substantially increased fuel efficiency. This, in turn, could allow future Air Force tankers and cargo aircraft based on the core design concept to fly further while carrying similar or even potentially greater payloads than are possible with the service’s current fleets.
“Several military transport configurations are possible with the BWB,” the Air Force’s press release notes. “Together, these aircraft types account for approximately 60% of the Air Force’s total annual jet fuel consumption.”
“We see benefits in both air refueling at range where you can get much more productivity—much more fuel delivered—as well as cargo,” the Deputy Assistant Secretary of the Air Force for Operational Energy said during a presentation at the Global Air and Space Chiefs Conference in London in July.
[…]
A rendering of a past BWB design concept from Boeing. Boeing
[…]
Looking at the latest rendering, one thing that has immediately stood out to us is the potential signature management benefits of the design. Beyond having no vertical tail and the general blended body planform, which can already offer radar cross-section advantages, the top-mounted engines positioned at the rear of the fuselage are shielded from most aspects below. This could have major beneficial impacts on the aircraft’s infrared signature, as well as how it appears on radar under many circumstances.
A close-up of the rear end of the latest rendering of JetZero’s blended wing body design concept. USAF
JetZero has previously highlighted how the engine configuration directs sound waves upward, which the company says will reduce its noise signature while in flight, at least as perceived below. This has been touted as beneficial for commercial applications, where noise pollution could be a major issue, but could be useful for versions configured for military roles, as well. A quieter military transport aircraft, for instance, would be advantageous for covert or clandestine missions.
A screen capture from a part of JetZero’s website discussing the noise signature benefits of its blended wing body design. JetZero
The latest rendering for JetZero’s concept also shows passenger windows and doors along the side of the forward fuselage, highlighting its potential use for transporting personnel, as well as cargo. The company is already pitching the core design as a potential high-efficiency mid-market commercial airliner with a 230 to 250-passenger capacity and significant range in addition to military roles.
A close up of the front end of JetZero’s blended wing body design concept from the latest rendering showing the passenger windows and doors along the side. USAF
[…]
A blended wing body concept from the late 1980s credited to McDonnell Douglas engineer Robert Liebeck. Liebeck is among those now working for JetZero. NASA via AviationWeek
“You’re looking at something with roughly a 50% greater efficiency here, right? So,… first order you’re talking about doubling the ranges or possibly doubling the payloads,” Tom Jones, Northrop Grumman Vice President and president of the company’s aeronautics sector, who was also present at today’s event, added. “Additionally, the folded wing type of design gives you a smaller spot factor so you can fit… more aircraft at potentially a remote location. And the aircraft is also capable of some degree of short takeoff [and] landing type things…”
A screen capture from a JetZero promotional video showing projected fuel savings for its blended wing body design, depending on configuration, compared to aircraft with more traditional designs. JetZero capture
“Having a lifting body is a great way to get off the ground quicker,” JetZero’s O’Leary also noted with regard to shorter takeoff and landing capabilities.
These performance improvements could have a number of significant operational benefits for the Air Force when it comes to future tanker and cargo aircraft.
Being able to operate from “shorter runways, [across] longer distances, [with] better efficiency to carry the same payload and get it to places” are all of interest to the Air Force, Maj. Gen. Albert Miller, the Director of Strategy, Plans, Requirements, and Programs at Air Mobility Command, explained.
[…]
Maj. Gen. Miller also stressed that the BWB demonstrator would not necessarily directly meet the Air Force’s demands for future tankers or airlifters. He did add that the design would definitely help inform those requirements and could still be a solution to the operational issues he had highlighted in regard to a future major conflict in the Pacific region.
[…]
A rendering of JetZero’s blended wing body design concept configured as a tanker refueling a notional future stealthy combat jet. Stealthy drones are also seen flying in formation with the crewed aircraft. JetZero
“Why now? Because there’s no time to wait,” Dr. Ravi Chaudhary, Assistant Secretary of the Air Force for Energy, Installations, and Environment, who also happens to be a retired Air Force officer who flew C-17A Globemaster III cargo planes, said at today’s event. “And all of you have recognized that we’ve entered a new era of great power competition in which the PRC [People’s Republic of China] has come to be known as our pacing challenge.”
[…]
“We’re in a race for technological superiority with what we call a pacing challenge, a formidable opponent [China], and that requires us to find new ways, new methods, and new processes to get the kind of advantage that we’ve become used to and need to preserve,” Secretary Kendall had said in his opening remarks. “And that competitive advantage can be found in the ability to develop and field superior technology to meet our warfighter requirements and to do so faster than our adversaries. Today, that spirit of innovation continues with the Blended Wing Body Program and the demonstration project.”
Kendall added that the potential benefits for the commercial aviation sector offered valuable opportunities for further partnerships.
A rendering of a JetZero blended wing body airliner at a civilian airport. JetZero
[…]
As the project now gets truly underway, more information about the BWB initiative from the government and industry sides will likely emerge. From what we have seen and heard already, the program could have significant impacts on future military and commercial aviation developments.
The mysterious attacks began on July 11. “Strange beings,” locals said, were visiting an isolated Indigenous community in rural Peru at night, harassing its inhabitants and attempting to kidnap a 15-year-old girl. […] News of the alleged extraterrestrial attackers quickly spread online as believers, skeptics, and internet sleuths around the world analyzed grainy videos posted by members of the Ikitu community. The reported sightings came on the heels of U.S. congressional hearings about unidentified aerial phenomena that ignited a global conversation about the possibility of extraterrestrial life visiting Earth.
Members of the Peruvian Navy and Police traveled to the isolated community, which is located 10 hours by boat from the Maynas provincial capital of Iquitos, to investigate the strange disturbances in early August. Last week, authorities announced that they believed the perpetrators were members of illegal gold mining gangs from Colombia and Brazil using advanced flying technology to terrorize the community, according to RPP Noticias. Carlos Castro Quintanilla, the lead investigator in the case, said that 80 percent of illegal gold dredging in the region is located in the Nanay river basin, where the Ikitu community is located.
One of the key pieces of the investigation was the attempted kidnapping of a 15-year-old girl on July 29. Cristian Caleb Pacaya, a local teacher who witnessed the attack, said that the assailants “were using state of the art technology, like thrusters that allow people to fly.” He said that after looking the devices up on Google, he believed they were “jetpacks.” Authorities have not made any arrests related to the attacks, nor named the alleged assailants or their organization directly. However, the prosecutor’s office claimed that it had destroyed 110 dredging operations and 10 illegal mining camps in the area already in 2023.
[…] a new study published on Monday in the journal Nature Medicine. The gene therapy was tested on macaque monkeys over 12 months, revealing promising results.
[…]
At the beginning of the study, the monkeys were gradually given alcohol until an addiction was established. Then, they began self-regulating their own intake at an amount equating to roughly nine drinks per day for a human. The researchers separated the macaques into a control group and a separate group that received the gene therapy.
According to the study, the monkeys’ daily alcohol consumption increased over the first six months before an eight-week abstinence period was initiated. The gene therapy was delivered through two small holes drilled in the macaques’ skulls, through which researchers injected a gene that produces glial-derived neurotrophic factor, or GDNF, a protein that stimulates dopamine production. Then the monkeys were given the option to drink water or alcohol for four weeks.
What researchers found astounded them. Just one round of gene therapy resulted in the test group reducing their drinking by 50% compared to the control group which didn’t receive therapy. Subsequent test periods used a four-week window of drinking and a four-week window of abstinence. With each round of therapy, researchers found the test group voluntarily consumed less alcohol after the abstinence period, and by the end of the 12-month study, that amount dropped by more than 90%.
[…]
Researchers also found that the therapy could influence other behaviors, such as weight loss and water intake. The macaques in the test group drank less water than the control group and lost about 18% of their body weight.
Apple’s “Batterygate” legal saga is finally swinging shut – in the US, at least – with a final appeal being voluntarily dismissed, clearing the way for payouts to class members.
The US lawsuit, which combined 66 separate class actions into one big legal proceeding in California, was decided in 2020, with the outcome requiring Apple to pay out between $310 million and $500 million to claimants.
Some US claimants were unhappy with the outcome of the case, and appealed to the Ninth Circuit Court of Appeals. That appeal was finally dropped last week, allowing for payments to those who filed a claim before October 6, 2020, to begin. With around 3 million claims received, claimants will be due around $65 each.
“The settlement is the result of years of investigation and hotly contested litigation. We are extremely proud that this deal has been approved, and following the Ninth Circuit’s order, we can finally provide immediate cash payments to impacted Apple customers,” said Mark Molumphy, an attorney for plaintiffs in the case.
Apple didn’t respond to our questions.
A settlement nearly a decade in the making
For those who’ve chosen to forget about the whole Batterygate fiasco, it all started in 2016 when evidence began pointing to Apple throttling CPUs in older iPhones to prevent accelerated battery drain caused by newer software and loss of battery capacity in aging devices.
Devices affected by Apple’s CPU throttling include iPhone 6 and 7 series handsets as well as the first-generation iPhone SE.
Apple admitted as much in late 2017, and just a day later lawsuits began pouring in around the US from angry iDevice owners looking for recompense. Complaints continued into 2020 from users of older iPhones updated to iOS 14.2, who said their devices started overheating and the battery would drain in mere minutes.
The US case, as mentioned above, was decided in favor of the plaintiffs in 2020, though late last year the settlement was overturned by the Ninth Circuit, which said the lower court judge had applied the wrong legal standard in making his decision. The settlement was reinstated after a second examination earlier this year.
The reasons for the objections and their withdrawal aren’t immediately clear. Lawyers for Sarah Feldman and Hondo Jan, who filed the objections to the settlement, didn’t immediately respond to questions from The Register.
Apple also won’t be completely off the hook for its iPhone throttling – it’s also facing a similar complaint in the UK, where a case was filed last year that Apple asked to have tossed in May. That attempt failed, and hearings in the case are scheduled for late August and early September.
The UK case, brought by consumer advocate Justin Gutmann, is seeking to recover £1.6 billion ($2 billion) from Apple if, like the US case, the courts end up deciding against Cook and co.
Virgin Galactic’s VSS Unity, the reusable rocket-powered space plane carrying the company’s first crew of tourists to space, successfully launched and landed on Thursday.
The mission, known as Galactic 02, took off shortly after 11am ET from Spaceport America in New Mexico.
Aboard the spacecraft were six individuals in total: the space plane’s commander and former Nasa astronaut CJ Sturckow, the pilot Kelly Latimer, and Beth Moses, Virgin Galactic’s chief astronaut instructor, who trained the crew before the flight.
The spacecraft also carried three private passengers, including the health and wellness coach Keisha Schahaff and her 18-year-old daughter, Anastasia Mayers, both of whom are Antiguan.
According to Space.com, Schahaff won her seat aboard the Galactic 02 as part of a fundraising competition by Space for Humanity, a non-profit organization seeking to democratize space travel. Mayers is studying philosophy and physics at Aberdeen University in Scotland. Together, Schahaff and Mayers are the first mother-daughter duo to venture to space together.
Were you hoping Canon might be held accountable for its all-in-one printers that mysteriously can’t scan when they’re low on ink, forcing you to buy more? Tough: the lawsuit we told you about last year quietly ended in a private settlement rather than becoming a big class-action.
I just checked, and a judge already dismissed David Leacraft’s lawsuit in November, without Canon ever being forced to show what happens when you try to scan without a full ink cartridge. (Numerous Canon customer support reps wrote that it simply doesn’t work.)
Here’s the good news: HP, an even larger and more shameless manufacturer of printers, is still possibly facing down a class-action suit for the same practice.
As Reuters reports, a judge has refused to dismiss a lawsuit by Gary Freund and Wayne McMath that alleges many HP printers won’t scan or fax documents when their ink cartridges report that they’ve run low.
[…]
Interestingly, neither Canon nor HP spent any time trying to argue their printers do scan when they’re low on ink in the lawsuit responses I’ve read. Perhaps they can’t deny it? Epson, meanwhile, has an entire FAQ dedicated to reassuring customers that it hasn’t pulled that trick since 2008. (Don’t worry, Epson has other forms of printer enshittification.)
Tech news website CNET has deleted thousands of old articles over the past few months in a bid to improve its performance in Google Search results, Gizmodo has learned.
Archived copies of CNET’s author pages show the company deleted small batches of articles prior to the second half of July, but then the pace increased. Thousands of articles disappeared in recent weeks. A CNET representative confirmed that the company was culling stories but declined to share exactly how many it has taken down.
[…]
“In an ideal world, we would leave all of our content on our site in perpetuity,” said Taylor Canada, CNET’s senior director of marketing and communications. “Unfortunately, we are penalized by the modern internet for leaving all previously published content live on our site.”
[…]
CNET shared an internal memo about the practice. Removing, redirecting, or refreshing irrelevant or unhelpful URLs “sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results,” the document reads.
According to the memo about the “content pruning,” the company considers a number of factors before it “deprecates” an article, including SEO, the age and length of the story, traffic to the article, and how frequently Google crawls the page. The company says it weighs historical significance and other editorial factors before an article is taken down. When an article is slated for deletion, CNET says it maintains its own copy, and sends the story to the Internet Archive’s Wayback Machine.
[…]
Google does not recommend deleting articles just because they’re considered “older,” said Danny Sullivan, the company’s Public Liaison for Google Search. In fact, the practice is something Google has advised against for years. After Gizmodo’s request for comment, Sullivan posted a series of tweets on the subject.
“Are you deleting content from your site because you somehow believe Google doesn’t like ‘old’ content? That’s not a thing! Our guidance doesn’t encourage this,” Sullivan tweeted.
[…]
However, SEO experts told Gizmodo content pruning can be a useful strategy in some cases, but it’s an “advanced” practice that requires high levels of expertise,[…]
Ideally outdated pages should be updated or redirected to a more relevant URL, and deleting content without a redirect should be a last resort. With fewer irrelevant pages on your site, the idea is that Google’s algorithms will be able to index and better focus on the articles or pages a publisher does want to promote.
Google may have an incentive to withhold details about its Search algorithm, both because it would rather be able to make its own decisions about how to rank websites, and because content pruning is a delicate process that can cause problems for publishers—and for Google—if it’s mishandled.
[…]
Whether or not deleting articles is an effective business strategy, it causes other problems that have nothing to do with search engines. For a publisher like CNET — one of the oldest tech news sites on the internet — removing articles means losing parts of the public record that could have unforeseen historical significance in the future.
AMD processor users, you have another data-leaking vulnerability to deal with: like Zenbleed, this latest hole can be exploited to steal sensitive data from a running vulnerable machine.
The flaw (CVE-2023-20569), dubbed Inception in reference to the Christopher Nolan flick about manipulating a person’s dreams to achieve a desired outcome in the real world, was disclosed by ETH Zurich academics this week.
And yes, it’s another speculative-execution-based side-channel that malware or a rogue logged-in user can abuse to obtain passwords, secrets, and other data that should be off limits.
Inception utilizes a previously disclosed vulnerability alongside a novel kind of transient execution attack, which the researchers refer to as training in transient execution (TTE), to leak information from an operating system kernel at a rate of 39 bytes per second on vulnerable hardware. In this case, vulnerable systems encompass pretty much AMD’s entire CPU lineup going back to 2017, including its latest Zen 4 Epyc and Ryzen processors.
Despite the potentially massive blast radius, AMD is downplaying the threat while simultaneously rolling out microcode updates for newer Zen chips to mitigate the risk. “AMD believes this vulnerability is only potentially exploitable locally, such as via downloaded malware,” the biz said in a public disclosure, which ranks Inception “medium” in severity.
Intel processors weren’t found to be vulnerable to Inception, but that doesn’t mean they’re entirely in the clear. Chipzilla is grappling with its own separate side-channel attack disclosed this week called Downfall.
How Inception works
As we understand it, successful exploitation of Inception takes advantage of the fact that in order for modern CPUs to achieve the performance they do, processor cores have to cut corners.
Rather than executing instructions strictly in order, the CPU core attempts to predict which ones will be needed and runs those out of sequence if it can, a technique called speculative execution. If the core guesses incorrectly, it discards or unwinds the computations it shouldn’t have done. That allows the core to continue getting work done without having to wait around for earlier operations to complete. Executing these instructions speculatively is also known as transient execution, and when this happens, a transient window is opened.
Normally, this process delivers substantial performance advantages, and refining it is one of several ways CPU designers eke out instruction-per-clock gains generation after generation. However, as we’ve seen with previous side-channel attacks, like Meltdown and Spectre, speculative execution can be abused to make the core start leaking information it otherwise shouldn’t to observers on the same box.
Inception is a fresh twist on this attack vector, and involves two steps. The first takes advantage of a previously disclosed vulnerability called Phantom execution (CVE-2022-23825) which allows an unprivileged user to trigger a misprediction — basically making the core guess the path of execution incorrectly — to create a transient execution window on demand.
This window serves as a beachhead for a TTE attack. Instead of leaking information from the initial window, the TTE injects new mispredictions, which trigger more future transient windows. This, the researchers explain, causes an overflow in the return stack buffer with an attacker-controlled target.
“The result of this insight is Inception, an attack that leaks arbitrary data from an unprivileged process on all AMD Zen CPUs,” they wrote.
In a video published alongside the disclosure, and included below, the Swiss team demonstrate this attack by leaking the root account hash from /etc/shadow on a Zen 4-based Ryzen 7700X CPU with all Spectre mitigations enabled.
You can find a more thorough explanation of Inception, including the researchers’ methodology, in a paper here [PDF]. It was written by Daniël Trujillo, Johannes Wikner, and Kaveh Razavi of ETH Zurich. They’ve also shared proof-of-concept exploit code here.
Researchers are now waking up to another factor, one that could be filed under the category of unintended consequences: disappearing clouds known as ship tracks. Regulations imposed in 2020 by the United Nations’ International Maritime Organization (IMO) have cut ships’ sulfur pollution by more than 80% and improved air quality worldwide. The reduction has also lessened the effect of sulfate particles in seeding and brightening the distinctive low-lying, reflective clouds that follow in the wake of ships and help cool the planet. The 2020 IMO rule “is a big natural experiment,” says Duncan Watson-Parris, an atmospheric physicist at the Scripps Institution of Oceanography. “We’re changing the clouds.”
By dramatically reducing the number of ship tracks, the regulations have caused the planet to warm faster, several new studies have found. That trend is magnified in the Atlantic, where maritime traffic is particularly dense. In the shipping corridors, the increased light represents a 50% boost to the warming effect of human carbon emissions. It’s as if the world suddenly lost the cooling effect from a fairly large volcanic eruption each year, says Michael Diamond, an atmospheric scientist at Florida State University.
The natural experiment created by the IMO rules is providing a rare opportunity for climate scientists to study a geoengineering scheme in action—although it is one that is working in the wrong direction. Indeed, one such strategy to slow global warming, called marine cloud brightening, would see ships inject salt particles back into the air, to make clouds more reflective. In Diamond’s view, the dramatic decline in ship tracks is clear evidence that humanity could cool off the planet significantly by brightening the clouds. “It suggests pretty strongly that if you wanted to do it on purpose, you could,” he says.
The influence of pollution on clouds remains one of the largest sources of uncertainty in how quickly the world will warm up, says Franziska Glassmeier, an atmospheric scientist at the Delft University of Technology. Progress on understanding these complex interactions has been slow. “Clouds are so variable,” Glassmeier says.
Some of the basic science is fairly well understood. Sulfate or salt particles seed clouds by creating nuclei for vapor to condense into droplets. The seeds also brighten existing clouds by creating smaller, more numerous droplets. The changes don’t stop there, says Robert Wood, an atmospheric scientist at the University of Washington. He notes that smaller droplets are less likely to merge with others, potentially suppressing rainfall. That would increase the size of clouds and add to their brightening effect. But modeling also suggests that bigger clouds are more likely to mix with dry air, which would reduce their reflectivity.
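As a rough back-of-the-envelope gloss on that brightening effect (my addition, not something the article states): for a fixed amount of cloud water, the classic Twomey result has cloud optical depth scaling with droplet number roughly as a cube root, and the corresponding albedo susceptibility is

\[
\tau \propto N_d^{1/3}, \qquad \frac{dA}{d\ln N_d} \approx \frac{A\,(1-A)}{3},
\]

so a cloud with albedo 0.5 brightens by roughly 0.06 for every doubling of droplet concentration, and dims by about the same amount when the droplet seeds are taken away, which is the direction the IMO rule pushed things.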
I do understand why so many people, especially creative folks, are worried about AI and how it’s used. The future is quite unknown, and things are changing very rapidly, at a pace that can feel out of control. However, when concern and worry about new technologies and how they may impact things morphs into mob-inspiring fear, dumb things happen. I would much rather that when we look at new things, we take a more realistic approach to them, and look at ways we can keep the good parts of what they provide, while looking for ways to mitigate the downsides.
Hopefully without everyone going crazy in the meantime. Unfortunately, that’s not really the world we live in.
Last year, when everyone was focused on generative AI for images, we had Rob Sheridan on the podcast to talk about why it was important for creative people to figure out how to embrace the technology rather than fear it. The opening story of the recent NY Times profile of me was all about me in a group chat, trying to suggest to some very creative Hollywood folks how to embrace AI rather than simply raging against it. And I’ve already called out how folks rushing to copyright, thinking that will somehow “save” them from AI, are barking up the wrong tree.
But, in the meantime, the fear over AI is leading to some crazy and sometimes unfortunate outcomes. Benji Smith, who created what appears to be an absolutely amazing tool for writers, Shaxpir, also created what looked like an absolutely fascinating tool called Prosecraft, which had scanned and analyzed a whole bunch of books and would let you call up really useful data on them.
He created it years ago, based on an idea he had years earlier, trying to understand the length of various books (which he initially kept in a spreadsheet). As Smith himself describes in a blog post:
I heard a story on NPR about how Kurt Vonnegut invented an idea about the “shapes of stories” by counting happy and sad words. The University of Vermont “Computational Story Lab” published research papers about how this technique could show the major plot points and the “emotional story arc” of the Harry Potter novels (as well as many many other books).
So I tried it myself and found that I could plot a graph of the emotional ups and downs of any story. I added those new “sentiment analysis” tools to the prosecraft website too.
When I ran out of books on my own shelves, I looked to the internet for more text that I could analyze, and I used web crawlers to find more books. I wanted to be mindful of the diversity of different stories, so I tried to find books by authors of every race and gender, from every different cultural and political background, writing in every different genre and exploring all different kinds of themes. Fiction and nonfiction and philosophy and science and religion and culture and politics.
Somewhere out there on the internet, I thought to myself, there was a new author writing a horror or romance or fantasy novel, struggling for guidance about how long to write their stories, how to write more vivid prose, and how much “passive voice” was too much or too little.
I wanted to give those budding storytellers a suite of “lexicographic” tools that they could use, to compare their own writing with the writing of authors they admire. I’ve been working in the field of computational linguistics and machine learning for 20+ years, and I was always frustrated that the fancy tools were only accessible to big businesses and government spy agencies. I wanted to bring that magic to everyone.
Frankly, all of that sounds amazing. And amazingly useful. Even more amazing is that he built it, and it worked. It would produce useful analysis of books, such as this example from Alice’s Adventures in Wonderland:
And, it could also do further analysis like the following:
This is all quite interesting. It’s also the kind of thing that data scientists do on all kinds of work for useful purposes.
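To make the idea concrete, here is a minimal sketch of the kind of sliding-window sentiment analysis Smith describes: count “happy” versus “sad” words in a moving window across a book and plot the running balance. The tiny word lists, window size, and file name below are placeholders for illustration, not Prosecraft’s actual lexicon or code.

```python
# Toy "shape of the story" analysis: slide a window across a book, score each
# window by (happy words - sad words), and plot the resulting emotional arc.
# The word lists here are placeholders, not Prosecraft's actual lexicon.
import re
import matplotlib.pyplot as plt

HAPPY = {"happy", "joy", "love", "laugh", "delight", "smile", "hope"}
SAD = {"sad", "cry", "fear", "death", "pain", "alone", "grief"}

def emotional_arc(text, window=2000, step=500):
    words = re.findall(r"[a-z']+", text.lower())
    scores = []
    for start in range(0, max(1, len(words) - window), step):
        chunk = words[start:start + window]
        score = sum(w in HAPPY for w in chunk) - sum(w in SAD for w in chunk)
        scores.append(score / window)  # net sentiment per word in this window
    return scores

if __name__ == "__main__":
    with open("alice_in_wonderland.txt", encoding="utf-8") as f:  # any plain-text book
        arc = emotional_arc(f.read())
    plt.plot(arc)
    plt.xlabel("position in book (window index)")
    plt.ylabel("net sentiment per word")
    plt.title("Emotional arc (toy lexicon)")
    plt.show()
```

Swap in a real sentiment lexicon or a trained model and the same loop produces the “emotional story arc” curves Smith mentions; none of it requires reproducing the underlying books.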
Smith built Prosecraft into Shaxpir, again, making it a more useful tool. But, on Monday, some authors on the internet found out about it and lost their shit, leading Smith to shut the whole project down.
There seems to be a lot of misunderstanding about all of this. Smith notes that he had researched the copyright issues and was sure he wasn’t violating anything, and he’s right. We’ve gone over this many times before. Scanning books is pretty clearly fair use. What you do with that later could violate copyright law, but I don’t see anything that Prosecraft did that comes anywhere even remotely close to violating copyright law.
But… some authors got pretty upset about all of it.
I’m still perplexed at what the complaint is here? You don’t need to “consent” for someone to analyze your book. You don’t need to “consent” to someone putting up statistics about their analysis of your book.
But, Zach’s tweet went viral with a bunch of folks ready to blow up anything that smacks of tech bro AI, and lots of authors started yelling at Smith.
The Gizmodo article has a ridiculously wrong “fair use” analysis, saying “Fair Use does not, by any stretch of the imagination, allow you to use an author’s entire copyrighted work without permission as a part of a data training program that feeds into your own ‘AI algorithm.’” Except… it almost certainly does? Again, we’ve gone through this with the Google Book scanning case, and the courts said that you can absolutely do that because it’s transformative.
It seems that what really tripped people up here was the “AI” part of it, and the fear that this was just another VC-funded “tech bro” exercise of building something to get rich by using the works of creatives. Except… none of that is accurate. As Smith explained in his blog post:
For what it’s worth, the prosecraft website has never generated any income. The Shaxpir desktop app is a labor of love, and during most of its lifetime, I’ve worked other jobs to pay the bills while trying to get the company off the ground and solve the technical challenges of scaling a startup with limited resources. We’ve never taken any VC money, and the whole company is a two-person operation just working our hardest to serve our small community of authors.
He also recognizes that the concerns about it being some “AI” thing are probably what upset people, but plenty of authors have found the tool super useful, and even added their own books:
I launched the prosecraft website in the summer of 2017, and I started showing it off to authors at writers conferences. The response was universally positive, and I incorporated the prosecraft analytic tools into the Shaxpir desktop application so that authors could privately run these analytics on their own works-in-progress (without ever sharing those analyses publicly, or even privately with us in our cloud).
I’ve spent thousands of hours working on this project, cleaning up and annotating text, organizing and tweaking things. A small handful of authors have even reached out to me, asking to have their books added to the website. I was grateful for their enthusiasm.
But in the meantime, “AI” became a thing.
And the arrival of AI on the scene has been tainted by early use-cases that allow anyone to create zero-effort impersonations of artists, cutting those creators out of their own creative process.
That’s not something I ever wanted to participate in.
Smith took the project down entirely because of that. He doesn’t want to get lumped in with other projects, and even though his project is almost certainly legal, he recognized that this was becoming an issue:
Today the community of authors has spoken out, and I’m listening. I care about you, and I hear your objections.
Your feelings are legitimate, and I hope you’ll accept my sincerest apologies. I care about stories. I care about publishing. I care about authors. I never meant to hurt anyone. I only hoped to make something that would be fun and useful and beautiful, for people like me out there struggling to tell their own stories.
I find all of this really unfortunate. Smith built something really cool, really amazing, that does not, in any way, infringe on anyone’s rights. I get the kneejerk reaction from some authors, who feared that this was some obnoxious project, but couldn’t they have taken 10 minutes to look at the details of what it was they were killing?
I know we live in an outrage era, where the immediate reaction is to turn the outrage meter up to 11. I’m certainly guilty of that at times myself. But this whole incident is just sad. It was an overreaction from the start, destroying what had been a clear labor of love and a useful project, through misleading and misguided attacks from authors.
And here we go again. We’ve been talking about how copyright has gotten in the way of cultural preservation generally for a while, and more specifically lately when it comes to the video game industry. The way this problem manifests itself is quite simple: video game publishers support the games they release for some period of time and then they stop. When they stop, depending on the type of game, it can make that game unavailable for legitimate purchase or use, either because the game is disappeared from retail and online stores, or because the servers needed to make it operational are taken offline. Meanwhile, copyright law prevents individuals and, in some cases, institutions from preserving those games and making them available to the public, the way a library or museum would.
When you make these preservation arguments, one of the common retorts you get from the gaming industry and its apologists is that publishers already preserve these games for eventual re-release down the road, which is why they need to maintain their copyright protection on that content. We’ve pointed out failures to do so by the industry in the past, but the story of Hasbro wanting to re-release several older Transformers video games and being unable to is about as perfect an example as I can find.
Released in June 2010, Transformers: War for Cybertron was a well-received third-person shooter that got an equally great sequel in 2012, Fall of Cybertron. (And then in 2014 we got Rise of the Dark Spark, which wasn’t very good and was tied into the live-action films.) What made the first two games so memorable and beloved was that they told their own stories about the origins of popular characters like Megatron and Optimus Prime while featuring kick-ass combat that included the ability to transform into different vehicles. Sadly, in 2018, all of these Activision-published Transformers games (and several it commissioned from other developers) were yanked from digital stores, making them hard to acquire and play in 2023. It seems that Hasbro now wants that to change, suggesting the games could make a perfect fit for Xbox Game Pass, once Activision, uh…finds them.
You read that right: finds them. What does that mean? Well, when Hasbro came calling to Activision looking to see if this was a possibility, it devolved into Activision doing a theatrical production parody called Dude, Where’s My Hard Drive? It seems that these games may or may not exist on some piece of hardware, but Activision literally cannot find it. Or maybe not, as you’ll read below. There seems to be some confusion about what Activision can and cannot find.
And, yes, the mantra in the comments that pirate sites are essentially solving for this problem certainly applies here as well. So much so, in fact, that it sure sounds like Hasbro went that route to get what it needed for the toy design portion of this.
Interestingly, Activision’s lack of organization seems to have caused some headaches for Hasbro’s toy designers who are working on the Gamer Edition figures. The toy company explained that it had to load up the games on their original platforms and play through them to find specific details they wanted to recreate for the toys.
“For War for Cybertron we had to rip it ourselves, because [Activision] could not find it—they kept sending concept art instead, which we didn’t want,” explained Hasbro. “So we booted up an old computer and ripped them all out from there. Which was a learning experience and a long weekend, because we just wanted to get it right, so that’s why we did it like that.”
What’s strange is that despite the above, Activision responded to initial reports of all this indicating that the headlines were false and it does have… code. Or something.
Hasbro itself then followed up to apologize for the confusion, saying it made an error in stating the games were “lost”. But what’s strange about all that, in addition to the workaround Hasbro had to employ because it lacked access to the actual games, is how long it took Activision to respond to all of this.
Activision has yet to confirm if it actually knows where the source code for the games is specifically located. I also would love to know why Activision waited so long to comment (the initial interview was posted on July 28) and why Hasbro claimed to not have access to key assets when developing its toys based on the games.
It’s also strange that Hasbro, which says it wants to put these games on Game Pass, hasn’t done so for years now. If the games aren’t lost, give ‘em to Hasbro, then?
Indeed. If this was all a misunderstanding, so be it. But if it was purely a misunderstanding, the rest of the circumstances surrounding this story don’t make a great deal of sense. At the very least, the possibility that these games could simply have been lost to the world is concerning, and yet another data point for an industry that simply needs to do better when it comes to preservation efforts.
A new study reports conclusive evidence for the breakdown of standard gravity in the low acceleration limit from a verifiable analysis of the orbital motions of long-period, widely separated, binary stars, usually referred to as wide binaries in astronomy and astrophysics.
The study, carried out by Kyu-Hyun Chae, professor of physics and astronomy at Sejong University in Seoul, used up to 26,500 wide binaries within 650 light years (LY) observed by the European Space Agency’s Gaia space telescope. It was published in the 1 August 2023 issue of the Astrophysical Journal.
As a key improvement over other studies, Chae’s work focused on calculating the gravitational accelerations experienced by binary stars as a function of their separation (or, equivalently, their orbital period), using a Monte Carlo deprojection of the observed sky-projected motions into three-dimensional space.
Chae explains, “From the start it seemed clear to me that gravity could be most directly and efficiently tested by calculating accelerations because gravitational field itself is an acceleration. My recent research experiences with galactic rotation curves led me to this idea. Galactic disks and wide binaries share some similarity in their orbits, though wide binaries follow highly elongated orbits while hydrogen gas particles in a galactic disk follow nearly circular orbits.”
Also, unlike other studies, Chae calibrated the occurrence rate of hidden nested inner binaries at a benchmark acceleration.
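To give a flavor of what the Monte Carlo deprojection step involves, here is a toy sketch: assume each binary’s 3-D separation vector is randomly oriented relative to the line of sight, sample that orientation many times, and turn an observed sky-projected separation into a distribution of Newtonian accelerations. This is only an illustration of the general idea under that simple assumption, not Chae’s actual pipeline, which also deprojects velocities and accounts for eccentricities and hidden companions.

```python
# Toy Monte Carlo deprojection: estimate the distribution of Newtonian
# accelerations g_N = G*M/r^2 for a wide binary from its *projected* (sky-plane)
# separation, assuming the 3-D separation vector is randomly oriented.
# Illustration only; not Chae's actual analysis pipeline.
import numpy as np

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11    # astronomical unit, m
MSUN = 1.989e30  # solar mass, kg

def newtonian_acceleration_samples(s_proj_au, m_total_msun, n=100_000, seed=None):
    rng = np.random.default_rng(seed)
    cos_theta = rng.uniform(-1.0, 1.0, size=n)     # isotropic orientation
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    sin_theta = np.clip(sin_theta, 1e-3, None)     # tame the nearly line-of-sight tail
    r = (s_proj_au * AU) / sin_theta               # deprojected 3-D separation
    return G * (m_total_msun * MSUN) / r**2        # Newtonian acceleration, m/s^2

if __name__ == "__main__":
    # e.g. a two-solar-mass pair seen 10,000 AU apart on the sky
    g = newtonian_acceleration_samples(10_000, 2.0, seed=0)
    print(f"median g_N ~ {np.median(g):.2e} m/s^2")  # ~1e-10 m/s^2, i.e. ~0.1 nm/s^2
```

For a pair like that, the median lands around 10^-10 m/s^2, roughly 0.1 nanometers per second squared, which is exactly the regime where the reported deviations show up.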
The study finds that when two stars orbit each other with accelerations lower than about one nanometer per second squared, their motions start to deviate from the predictions of Newton’s universal law of gravitation and Einstein’s general relativity.
For accelerations lower than about 0.1 nanometers per second squared, the observed acceleration is about 30 to 40% higher than the Newton-Einstein prediction. The significance is very high, meeting the conventional 5 sigma criterion for a scientific discovery. In a sample of 20,000 wide binaries within a distance limit of 650 LY, two independent acceleration bins each show deviations of over 5 sigma significance in the same direction.
Because observed accelerations stronger than about 10 nanometers per second squared agree well with the Newton-Einstein prediction from the same analysis, the observed boost of accelerations at lower accelerations is a mystery. What is intriguing is that this breakdown of the Newton-Einstein theory at accelerations weaker than about one nanometer per second squared was suggested 40 years ago by theoretical physicist Mordehai Milgrom at the Weizmann Institute in Israel, in a theoretical framework called modified Newtonian dynamics (MOND), or Milgromian dynamics in current usage.
Moreover, the boost factor of about 1.4 is correctly predicted by a MOND-type Lagrangian theory of gravity called AQUAL, proposed by Milgrom and the late physicist Jacob Bekenstein. What is remarkable is that the correct boost factor requires the external field effect from the Milky Way galaxy that is a unique prediction of MOND-type modified gravity. Thus, what the wide binary data show are not only the breakdown of Newtonian dynamics but also the manifestation of the external field effect of modified gravity.
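For reference, and as my own gloss rather than part of the press release, Milgrom’s proposal keeps Newtonian gravity at high accelerations and modifies it below a characteristic scale \(a_0\):

\[
g \simeq g_N \;\; (g_N \gg a_0), \qquad g \simeq \sqrt{g_N\,a_0} \;\; (g_N \ll a_0), \qquad a_0 \approx 1.2\times10^{-10}\ \mathrm{m\,s^{-2}} \approx 0.12\ \mathrm{nm\,s^{-2}}.
\]

In an isolated system the implied boost \(g/g_N = \sqrt{a_0/g_N}\) would keep growing at ever lower accelerations; in AQUAL the Milky Way’s external gravitational field caps it, which is where the quoted factor of about 1.4 comes from.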
On the results, Chae says, “It seems impossible that a conspiracy or unknown systematic can cause these acceleration-dependent breakdown of the standard gravity in agreement with AQUAL. I have examined all possible systematics as described in the rather long paper. The results are genuine. I foresee that the results will be confirmed and refined with better and larger data in the future. I have also released all my codes for the sake of transparency and to serve any interested researchers.”
Unlike galactic rotation curves in which the observed boosted accelerations can, in principle, be attributed to dark matter in the Newton-Einstein standard gravity, wide binary dynamics cannot be affected by it even if it existed. The standard gravity simply breaks down in the weak acceleration limit in accordance with the MOND framework.
Implications of wide binary dynamics are profound in astrophysics, theoretical physics, and cosmology. Anomalies in Mercury’s orbit observed in the nineteenth century eventually led to Einstein’s general relativity.
Now anomalies in wide binaries require a new theory extending general relativity to the low acceleration MOND limit. Despite all the successes of Newton’s gravity, general relativity is needed for relativistic gravitational phenomena such as black holes and gravitational waves. Likewise, despite all the successes of general relativity, a new theory is needed for MOND phenomena in the weak acceleration limit. The weak-acceleration catastrophe of gravity may have some similarity to the ultraviolet catastrophe of classical electrodynamics that led to quantum physics.
Wide binary anomalies are a disaster for standard gravity and cosmology, which rely on dark matter and dark energy concepts. Because gravity follows MOND, a large amount of dark matter in galaxies (and even in the universe) is no longer needed. This is also a big surprise to Chae who, like most scientists, “believed in” dark matter until a few years ago.
A new revolution in physics seems now under way. Milgrom says, “Chae’s finding is a result of a very involved analysis of cutting-edge data, which, as far as I can judge, he has performed very meticulously and carefully. But for such a far-reaching finding—and it is indeed very far reaching—we require confirmation by independent analyses, preferably with better future data.”
“If this anomaly is confirmed as a breakdown of Newtonian dynamics, and especially if it indeed agrees with the most straightforward predictions of MOND, it will have enormous implications for astrophysics, cosmology, and for fundamental physics at large.”
Xavier Hernandez, professor at UNAM in Mexico who first suggested wide binary tests of gravity a decade ago, says, “It is exciting that the departure from Newtonian gravity that my group has claimed for some time has now been independently confirmed, and impressive that this departure has for the first time been correctly identified as accurately corresponding to a detailed MOND model. The unprecedented accuracy of the Gaia satellite, the large and meticulously selected sample Chae uses and his detailed analysis, make his results sufficiently robust to qualify as a discovery.”
Pavel Kroupa, professor at Bonn University and at Charles University in Prague, has come to the same conclusions concerning the law of gravitation. He says, “With this test on wide binaries as well as our tests on open star clusters nearby the sun, the data now compellingly imply that gravitation is Milgromian rather than Newtonian. The implications for all of astrophysics are immense.”
More information: Kyu-Hyun Chae, Breakdown of the Newton–Einstein Standard Gravity at Low Acceleration in Internal Dynamics of Wide Binary Stars, The Astrophysical Journal (2023). DOI: 10.3847/1538-4357/ace101
China has released draft regulations to govern the country’s facial recognition technology that include prohibitions on its use to analyze race or ethnicity.
According to the Cyberspace Administration of China (CAC), the purpose is to “regulate the application of face recognition technology, protect the rights and interests of personal information and other personal and property rights, and maintain social order and public safety” as outlined by a smattering of data security, personal information, and network laws.
The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.
The rules also state that facial recognition tech must be used only when there is a specific purpose and sufficient necessity, strict protection measures are taken, and only when non-biometric measures won’t do.
The rules require consent to be obtained before processing face information, except in cases where it’s not required – which The Reg assumes means individuals such as prisoners, and instances of national security. Parental or guardian consent is needed for those under the age of 14.
Building managers can’t require its use to enter and exit property – they must provide alternative means of verifying identity for those who want them.
Nor can the tech be leaned on for “major personal interests” such as social assistance and real estate disposal. In those cases, manual verification of personal identity must be used, with facial recognition serving only as an auxiliary means of verification.
And collecting images for internal management should only be done in a reasonably sized area.
In businesses like hotels, banks, airports, art galleries, and more, the tech should not be used to verify personal identity. If the individual chooses to link their identity to the image, they should be informed either verbally or in writing and provide consent.
Collecting images is also not allowed in private spaces like hotel rooms, public bathrooms, and changing rooms.
Furthermore, those using facial surveillance techniques must display reminder signs, and personal images along with identification information must also be kept confidential, and only anonymized data may be saved.
Under the draft regs, those that store face information of more than 10,000 people must register with a local branch of the CAC within 30 working days.
Most interesting, however, is Article 11, which, when translated from Chinese via automated tools, reads:
No organization or individual shall use face recognition technology to analyze personal race, ethnicity, religion, sensitive personal information such as beliefs, health status, social class, etc.
The CAC does not say if the Chinese Communist Party counts as an “organization.”
Human rights groups have credibly asserted that Uyghurs are routinely surveilled using facial recognition technology, in addition to being incarcerated, required to perform forced labor, re-educated to abandon their beliefs and cultural practices, and may even be subjected to sterilization campaigns.
Just last month, physical security monitoring org IPVM reported it came into possession of a contract between China-based Hikvision and Hainan Province’s Chengmai County for $6 million worth of cameras that could detect whether a person was ethnically Uyghur using minority recognition technology.
Hikvision denied the report and said it last provided such functionality in 2018.
Beyond facilitating identification of Uyghurs, it’s clear the cat is out of the bag when it comes to facial recognition technology in China, used by government and businesses alike. Local police use it to track down criminals and its use feeds into China’s social credit system.
“‘Sky Net,’ a facial recognition system that can scan China’s population of about 1.4 billion people in a second, is being used in 16 Chinese cities and provinces to help police crackdown on criminals and improve security,” said state-sponsored media in 2018.
Regardless, the CAC said those who violate the rules, once passed, will face criminal and civil liability.
[…] Reuters reports that scientists with the Lawrence Livermore National Laboratory’s National Ignition Facility in California repeated a fusion ignition reaction. The lab’s first breakthrough was announced by the U.S. Department of Energy in December. While the previous experiment produced net energy gain, a spokesperson from the lab told the outlet that this second experiment, conducted on July 30, produced an even higher energy yield. While the laboratory called the experiment a success, results from the test are still being analyzed.
[…]
While fusion reactions are a staple in physics, scientists previously had to grapple with the fact that such reactions required more energy in than they produced, making the net energy gain in both reactions a noteworthy result. The Department of Energy revealed in its December announcement that the fusion test conducted by the laboratory at that time required 2 megajoules of energy while it produced 3 megajoules of energy. The previous fusion experiment conducted at the National Ignition Facility used 192 lasers focused on a peppercorn-sized target. Those lasers create temperatures as high as 100 million degrees Fahrenheit and pressures of over 100 billion Earth atmospheres in order to induce a fusion reaction in the target.
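In terms of the usual figure of merit, the target gain, those round numbers work out to (my arithmetic, using the figures above):

\[
Q_{\mathrm{target}} = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}} \approx \frac{3\ \mathrm{MJ}}{2\ \mathrm{MJ}} = 1.5 > 1.
\]

Note that this counts only the laser energy delivered to the target, not the far larger electrical energy needed to drive the laser system, so “net energy gain” here is a scientific milestone rather than a practical power source.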
The researchers point out that people don’t expect sound-based exploits. The paper reads, “For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”
The technique uses the same kind of attention network that makes models like ChatGPT so powerful. It seems to work well, as the paper claims a 97% peak accuracy over both a telephone and Zoom. In addition, where the model was wrong, it tended to be close, identifying an adjacent keystroke instead of the correct one. This would be easy to correct for in software, or even in your brain, as infrequent as it is. If you see the sentence “Paris im the s[ring,” you can probably figure out what was really typed.
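For a sense of what such a pipeline can look like, here is a minimal sketch in PyTorch: isolated keystroke clips are turned into log-mel spectrograms and fed to a small self-attention encoder that predicts which key was pressed. The architecture, hyperparameters, sample rate, and 36-key label set are my illustrative assumptions, not the paper’s exact model or training setup.

```python
# Minimal sketch: classify isolated keystroke audio clips from their
# log-mel spectrograms with a small self-attention (transformer) encoder.
# Shapes, hyperparameters, and the 36-key label set are illustrative only.
import torch
import torch.nn as nn
import torchaudio

KEYS = list("abcdefghijklmnopqrstuvwxyz0123456789")  # 36 example classes

class KeystrokeClassifier(nn.Module):
    def __init__(self, n_mels=64, d_model=128, n_heads=4, n_layers=4,
                 n_classes=len(KEYS)):
        super().__init__()
        self.spec = torchaudio.transforms.MelSpectrogram(
            sample_rate=44_100, n_fft=1024, hop_length=256, n_mels=n_mels)
        self.proj = nn.Linear(n_mels, d_model)  # one token per spectrogram frame
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, waveform):          # waveform: (batch, samples)
        x = self.spec(waveform)           # (batch, n_mels, frames)
        x = x.clamp(min=1e-6).log()       # log-mel features
        x = self.proj(x.transpose(1, 2))  # (batch, frames, d_model)
        x = self.encoder(x)               # self-attention over frames
        return self.head(x.mean(dim=1))   # pool frames -> key logits

if __name__ == "__main__":
    model = KeystrokeClassifier()
    clips = torch.randn(8, 14_700)        # eight ~0.33 s keystroke clips
    print(model(clips).shape)             # torch.Size([8, 36])
```

In practice you would train this on labeled recordings of individual keys, captured over the same channels the attack targets (a phone microphone or a Zoom call), and then apply the adjacent-key error correction the article describes on top of the raw predictions.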