MIT Boffins Build Battery Alternative Out of Cement, Carbon Black, and Water

Long-time Slashdot reader KindMind shares a report from The Register: Researchers at MIT claim to have found a novel way to store energy using nothing but cement, a bit of water, and powdered carbon black — a finely divided, highly conductive form of the element. The materials can be cleverly combined to create supercapacitors, which could in turn be used to build power-storing house foundations, roadways that wirelessly charge vehicles, and foundations for wind turbines and other renewable energy systems — all while holding a surprising amount of energy, the team claims. According to a paper published in the Proceedings of the National Academy of Sciences, 45 cubic meters of the carbon-black-doped cement could store 10 kilowatt-hours of energy — roughly the amount an average household uses in a day. A block of cement that size would measure about 3.5 meters per side and, depending on the size of the house, could theoretically store all the energy an off-grid home running on renewables would need. […]
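
The quoted figures are easy to sanity-check. A quick sketch (the 45 m³ and 10 kWh numbers are from the paper; the rest is plain arithmetic):

```python
# Sanity-check the storage figures quoted from the PNAS paper.
volume_m3 = 45.0       # carbon-black-doped cement block
capacity_kwh = 10.0    # claimed storage, roughly a day of household use

side_m = volume_m3 ** (1 / 3)       # side length of a cubic block
density = capacity_kwh / volume_m3  # volumetric energy density

print(f"cube side ≈ {side_m:.2f} m")           # ≈ 3.56 m, matching the ~3.5 m claim
print(f"density ≈ {density:.3f} kWh per m^3")  # ≈ 0.222 kWh/m^3
```

That energy density is orders of magnitude below a lithium-ion pack's, which is why the idea only makes sense for concrete that was going to be poured anyway.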

Just three percent of the mixture has to be carbon black for the hardened cement to act as a supercapacitor, but the researchers found that a 10 percent carbon black mixture appears to be ideal. Beyond that ratio, the cement becomes less stable — not something you want in a building or foundation. The team notes that non-structural use could allow higher concentrations of carbon black, and thus higher energy storage capacity. The team has only built a tiny one-volt test platform using its carbon black mix, but has plans to scale up to supercapacitors the same size as a 12-volt automobile battery — and eventually to the 45 cubic meter block. Along with being used for energy storage, the mix could also be used to provide heat — by applying electricity to the conductive carbon network encased in the cement, MIT noted.
As Science magazine puts it, “Tesla’s Powerwall, a boxy, wall-mounted, lithium-ion battery, can power your home for half a day or so. But what if your home was the battery?”

Source: MIT Boffins Build Battery Alternative Out of Cement, Carbon Black, and Water – Slashdot

Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites?

Mozilla’s Open Policy & Advocacy blog has news about a worrying proposal from the French government:

In a well-intentioned yet dangerous move to fight online fraud, France is on the verge of forcing browsers to create a dystopian technical capability. Article 6 (para II and III) of the SREN Bill would force browser providers to create the means to mandatorily block websites present on a government provided list.

The post explains why this is an extremely dangerous approach:

A world in which browsers can be forced to incorporate a list of banned websites at the software level that simply do not open, either in a region or globally, is a worrying prospect that raises serious concerns around freedom of expression. If it successfully passes into law, the precedent this would set would make it much harder for browsers to reject such requests from other governments.

If a capability to block any site on a government blacklist were required by law to be built in to all browsers, then repressive governments would be given an enormously powerful tool. There would be no way around that censorship, short of hacking the browser code. That might be an option for open source coders, but it certainly won’t be for the vast majority of ordinary users. As the Mozilla post points out:

Such a move will overturn decades of established content moderation norms and provide a playbook for authoritarian governments that will easily negate the existence of censorship circumvention tools.

It is even worse than that. If such a capability to block any site were built in to browsers, it’s not just authoritarian governments that would be rubbing their hands with glee: the copyright industry would doubtless push for allegedly infringing sites to be included on the block list too. We know this because the industry has already done exactly that in the past, as discussed in the book Walled Culture (free digital versions available).

Not many people now remember, but in 2004, BT (British Telecom) caused something of a storm when it created CleanFeed:

British Telecom has taken the unprecedented step of blocking all illegal child pornography websites in a crackdown on abuse online. The decision by Britain’s largest high-speed internet provider will lead to the first mass censorship of the web attempted in a Western democracy.

Here’s how it worked:

Subscribers to British Telecom’s internet services such as BTYahoo and BTInternet who attempt to access illegal sites will receive an error message as if the page was unavailable. BT will register the number of attempts but will not be able to record details of those accessing the sites.

The key justification for what the Guardian called “the first mass censorship of the web attempted in a Western democracy” was that it only blocked illegal child sexual abuse material Web sites. It was therefore an extreme situation requiring an exceptional solution. But seven years later, the copyright industry were able to convince a High Court judge to ignore that justification, and to take advantage of CleanFeed to block a site, Newzbin 2, that had nothing to do with child sexual abuse material, and therefore did not require exceptional solutions:

Justice Arnold ruled that BT must use its blocking technology CleanFeed – which is currently used to prevent access to websites featuring child sexual abuse – to block Newzbin 2.

Exactly the logic used by copyright companies to subvert CleanFeed could be used to co-opt the censorship capabilities of browsers with built-in Web blocking lists. As with CleanFeed, the copyright industry would doubtless argue that since the technology already exists, why not apply it to tackling copyright infringement too?

That very real threat is another reason to fight this pernicious, misguided French proposal. Because if it is implemented, it will be very hard to stop it becoming yet another technology that the copyright world demands should be bent to its own selfish purposes.

Source: Will Browsers Be Required By Law To Stop You From Visiting Infringing Sites? | Techdirt

Very scary indeed

Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright

Jieun Kiaer, an Oxford professor of Korean linguistics, recently published an academic book called Emoji Speak: Communications and Behaviours on Social Media. As you can tell from the name, it’s a book about emoji, and about how people communicate with them:

Exploring why and how emojis are born, and the different ways in which people use them, this book highlights the diversity of emoji speak. Presenting the results of empirical investigations with participants of British, Belgian, Chinese, French, Japanese, Jordanian, Korean, Singaporean, and Spanish backgrounds, it raises important questions around the complexity of emoji use.

Though emojis have become ubiquitous, their interpretation can be more challenging. What is humorous in one region, for example, might be considered inappropriate or insulting in another. Whilst emoji use can speed up our communication, we might also question whether they convey our emotions sufficiently. Moreover, far from belonging to the youth, people of all ages now use emoji speak, prompting Kiaer to consider the future of our communication in an increasingly digital world.

Sounds interesting enough, but as law professor Eric Goldman highlights with an image from the book, Kiaer was apparently unable to actually show examples of many of the emoji she was discussing due to copyright fears. While companies like Twitter and Google have offered up their own emoji sets under open licenses, not all of them have, and some of the specifics about how different companies render the same emoji differently were apparently key to the book.

So, for those, Kiaer actually hired an artist, Loli Kim, to draw similar emoji!


The page reads as follows (with paragraph breaks added for readability):

Notes on Images of Emojis

Social media spaces are almost entirely copyright free. They do not follow the same rules as the offline world. For example, on Twitter you can retweet any tweet and add your own opinion. On Instagram, you can share any post and add stickers or text. On TikTok, you can even ‘duet’ a video to add your own video next to a pre-existing one. As much as each platform has its own rules and regulations, people are able to use and change existing material as they wish. Thinking about copyright brings to light barriers that exist between the online and offline worlds. You can use any emoji in your texts, tweets, posts and videos, but if you want to use them in the offline world, you may encounter a plethora of copyright issues.

In writing this book, I have learnt that online and offline exist upon two very different foundations. I originally planned to have plenty of images of emojis, stickers, and other multi-modal resources featured throughout this book, but I have been unable to for copyright reasons. In this moment, I realized how difficult it is to move emojis from the online world into the offline world.

Even though I am writing this book about emojis and their significance in our lives, I cannot use images of them in even an academic book. Were I writing a tweet or Instagram post, however, I would likely have no problem. Throughout this book, I stress that emoji speak in online spaces is a grassroots movement in which there are no linguistic authorities and corporations have little power to influence which emojis we use. Comparatively, in offline spaces, big corporations take ownership of our emoji speak, much like linguistic authorities dictate how we should write and speak properly.

This sounds like something out of a science fiction story, but it is an important fact of which to be aware. While the boundaries between our online and offline words may be blurring, barriers do still exist between them. For this reason, I have had to use an artist’s interpretation of the images that I originally had in mind for this book. Links to the original images have been provided as endnotes, in case readers would like to see them.

Just… incredible. Now, my first reaction to this is that using the emoji and stickers and whatnot in the book seems like a very clear fair use situation. But… that requires a publisher willing to take up the fight (and an insurance company behind the publisher willing to finance that fight). And, that often doesn’t happen. Publishers are notoriously averse to supporting fair use, because they don’t want to get sued.

But, really, this just ends up highlighting (once again) the absolute ridiculousness of copyright in the modern world. No one in their right mind would think that a book about emoji is somehow harming the market for whatever emoji or stickers the professor wished to include. Yet, due to the nature of copyright, here we are. With an academic book about emoji that can’t even include the emoji being spoken about.

Source: Academic Book About Emojis Can’t Include The Emojis It Talks About Because Of Copyright | Techdirt

AI-assisted mammogram cancer screening could cut radiologist workloads in half

A newly published study in the Lancet Oncology journal has found that the use of AI in mammogram cancer screening can safely cut radiologist workloads nearly in half without increasing false-positive results. In effect, the study found that the AI’s recommendations were on par with those of two radiologists working together.

“AI-supported mammography screening resulted in a similar cancer detection rate compared with standard double reading, with a substantially lower screen-reading workload, indicating that the use of AI in mammography screening is safe,” the study found.

The study was performed by a research team out of Lund University in Sweden and followed 80,033 Swedish women (average age of 54) for just over a year across 2021-2022. Of the 39,996 patients randomly assigned AI-empowered breast cancer screenings, 244 tests returned screen-detected cancers — a rate of about 6.1 per 1,000. Of the other 40,024 patients who received conventional double-read screenings, 203 tests returned screen-detected cancers, or about 5.1 per 1,000.

Of those extra 41 cancers detected on the AI side, 19 turned out to be invasive. Both the AI-empowered and conventional screenings ran a 1.5 percent false-positive rate. Most impressively, radiologists on the AI side had to look at 36,886 fewer screen readings than their counterparts, a 44 percent reduction in their workload.
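
A back-of-the-envelope check of those headline numbers (the raw counts are from the study as reported; the per-1,000 rates follow from them):

```python
# Recompute the trial's cancer detection rates from the raw counts.
ai_patients, ai_cancers = 39_996, 244     # AI-supported arm
std_patients, std_cancers = 40_024, 203   # conventional double-read arm

ai_rate = ai_cancers / ai_patients * 1000    # cancers per 1,000 screened
std_rate = std_cancers / std_patients * 1000

print(f"AI arm:       {ai_rate:.1f} per 1,000 screened")   # 6.1
print(f"standard arm: {std_rate:.1f} per 1,000 screened")  # 5.1
print(f"extra cancers found: {ai_cancers - std_cancers}")  # 41
```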

[…]

Source: AI-assisted cancer screening could cut radiologist workloads in half | Engadget

Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Azure Security

An anonymous reader quotes a report from Ars Technica: Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is “grossly irresponsible” and mired in a “culture of toxic obfuscation.” The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were “negligent cybersecurity practices” that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure’s role in the mass breach.

On Wednesday, Yoran took to LinkedIn to castigate Microsoft for failing to fix what Tenable said on Monday was a “critical” issue that gives hackers unauthorized access to data and apps managed by Azure AD, a Microsoft cloud offering for managing user authentication inside large organizations. Monday’s disclosure said that Tenable notified Microsoft of the problem in March and that Microsoft reported 16 weeks later that it had been fixed. Tenable researchers told Microsoft that the fix was incomplete. Microsoft set the date for providing a complete fix to September 28.

“To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank,” Yoran wrote. “They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft.” He continued: “Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers’ networks and services? Of course not. They took more than 90 days to implement a partial fix — and only for new applications loaded in the service.” In response, Microsoft officials wrote: “We appreciate the collaboration with the security community to responsibly disclose product issues. We follow an extensive process involving a thorough investigation, update development for all versions of affected products, and compatibility testing among other operating systems and applications. Ultimately, developing a security update is a delicate balance between timeliness and quality, while ensuring maximized customer protection with minimized customer disruption.” Microsoft went on to say that the initial fix in June “mitigated the issue for the majority of customers” and “no customer action is required.”

In a separate email, Yoran responded: “It now appears that it’s either fixed, or we are blocked from testing. We don’t know the fix, or mitigation, so hard to say if it’s truly fixed, or Microsoft put a control in place like a firewall rule or ACL to block us. When we find vulns in other products, vendors usually inform us of the fix so we can validate it effectively. With Microsoft Azure that doesn’t happen, so it’s a black box, which is also part of the problem. The ‘just trust us’ lacks credibility when you have the current track record.”

Source: Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Security – Slashdot

A great example of why a) closed-source software is a really bad idea, b) responsible disclosure is a good idea, and c) cloud is often a bad idea

IBM and NASA open source satellite-image-labeling AI model

IBM and NASA have put together and released Prithvi: an open source foundation AI model that may help scientists and other folks analyze satellite imagery.

The vision transformer model, released under an Apache 2.0 license, is relatively small at 100 million parameters, and was trained on a year’s worth of images collected by the US space boffins’ Harmonized Landsat and Sentinel-2 (HLS) program. As well as the main model, three variants of Prithvi are available, fine-tuned for identifying flooding; wildfire burn scars; and crops and other land use.

Essentially, it works like this: you feed one of the models an overhead satellite photo, and it labels areas in the snap it understands. For example, the variant fine-tuned for crops can point out where there’s probably water, forests, corn fields, cotton fields, developed land, wetlands, and so on.
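
Under the hood, a vision transformer such as Prithvi first slices the scene into fixed-size patches and labels each one. A generic numpy sketch of that patchification step (the six-band, 224-pixel image and 16-pixel patches are illustrative assumptions, not necessarily Prithvi's exact configuration):

```python
import numpy as np

# Generic ViT-style patchification: split a multi-band satellite image
# into flat patches, one token per patch.
def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
    bands, h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    x = image.reshape(bands, h // patch, patch, w // patch, patch)
    x = x.transpose(1, 3, 0, 2, 4)               # (rows, cols, bands, p, p)
    return x.reshape(-1, bands * patch * patch)  # (num_patches, patch_dim)

img = np.zeros((6, 224, 224))  # six spectral bands, 224x224 pixels
tokens = patchify(img)
print(tokens.shape)            # (196, 1536): 14x14 patches of 6*16*16 values
```

Each of those 196 tokens then gets a land-cover label (water, forest, corn, and so on), which is reassembled into the overlay you see in the demo.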

This collection, we imagine, would be useful for, say, automating the study of changes to land over time – such as tracking erosion from flooding, or how drought and wildfires have hit a region. Big Blue and NASA aren’t the first to do this with machine learning: there are plenty of previous efforts we could cite.

A demo of the crop-classifying Prithvi model is available online: provide your own satellite imagery or use one of the examples at the bottom of the page, then click Submit to run the model live.

“We believe that foundation models have the potential to change the way observational data is analyzed and help us to better understand our planet,” Kevin Murphy, chief science data officer at NASA, said in a statement. “And by open sourcing such models and making them available to the world, we hope to multiply their impact.”

Developers can download the models from Hugging Face.

There are other online demos of Prithvi, including one for the variant fine-tuned for bodies of water, one for detecting wildfire scars, and one that shows off the model’s ability to reconstruct partially photographed areas.

[…]

Source: IBM and NASA open source satellite-image-labeling AI model • The Register

Couple admit laundering $4B of stolen Bitfinex Bitcoins

Ilya Lichtenstein and Heather Morgan on Thursday pleaded guilty to money-laundering charges related to the 2016 theft of some 120,000 Bitcoins from Hong Kong-based Bitfinex.

The Feds arrested Lichtenstein, 35, and Morgan, 33, in February 2022 after the US government traced about 95,000 of the stolen BTC – worth about $3.6 billion at the time and $2.8 billion today – to digital wallets controlled by the married couple.
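
The per-coin prices implied by those valuations (the dollar figures are from the article; the division is mine):

```python
# Implied Bitcoin prices behind the seizure valuations.
btc_seized = 95_000
value_at_seizure = 3.6e9  # February 2022
value_at_plea = 2.8e9     # at the time of the guilty plea

print(f"implied price then: ${value_at_seizure / btc_seized:,.0f}")  # $37,895
print(f"implied price now:  ${value_at_plea / btc_seized:,.0f}")     # $29,474
```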

The Justice Department at the time described the seizure as the largest ever and has since recovered an additional $475 million.

[…]

Lichtenstein admitted in court that he gained access to Bitfinex’s network using unidentified tools and techniques. According to prosecutors, once inside, he initiated more than 2,000 fraudulent transactions that sent 119,754 bitcoin from Bitfinex into a cryptocurrency wallet he controlled.

Thereafter, the Justice Department said, he tried to cover his tracks by deleting access credentials and log files, and then involved Morgan to help launder the stolen funds by transferring them through a maze of financial accounts. At one point Lichtenstein used some of the funds to buy gold coins, which were then buried by Morgan.

An affidavit [PDF] from IRS investigator Christopher Janczewski, which documents the basis of the US government’s case, traces the flow of stolen funds through multiple accounts associated with the defendants.

[…]


Source: Couple admit laundering $4B of stolen Bitfinex Bitcoins • The Register

Special License For Supercars Will Be Required In South Australia by 2024

The state of South Australia, home to 1.8 million people, is treading a well-worn path with new laws regulating the use of “ultra high-powered vehicles” on the road.

The issue stems from a fatal crash in 2019 when 15-year-old Sophia Naismith was tragically struck and killed by an out-of-control Lamborghini Huracan driven by Alexander Campbell. After Campbell avoided jail with a suspended sentence last year, community backlash created a political case for change. As covered by Drive.com.au, the government has now implemented a raft of new road laws in response.

The laws designate a new class of “ultra high-powered vehicles” (UHPV). This covers any vehicle with a power-to-weight ratio of 276 kW per metric tonne (about 370 hp/tonne) or more and a gross mass under 4.5 tonnes (9,920 pounds). Roughly 200 models are currently expected to fall into this classification, with buses and motorbikes exempt from the rules. The classification notably includes the Lamborghini Huracan, which boasts a power-to-weight ratio of 292 kW/tonne (about 392 hp/tonne). For reference, another sports car, the base Chevrolet Corvette, comes in at 242 kW/tonne and is not subject to these rules. The 670-horsepower Z06 version of that car is, though.
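
The classification boils down to a simple threshold test. A sketch (the 276 kW/tonne and 4.5 tonne limits are from the article; the bus and motorbike exemption is ignored here, and treating the threshold as inclusive is an assumption):

```python
# South Australia's UHPV test, as described in the new road laws.
def is_uhpv(power_kw: float, mass_tonnes: float, gross_mass_tonnes: float) -> bool:
    """True if the vehicle falls into the 'ultra high-powered vehicle' class."""
    return power_kw / mass_tonnes >= 276 and gross_mass_tonnes < 4.5

# Ratios quoted above: Huracan ~292 kW/tonne, base Corvette ~242 kW/tonne.
print(is_uhpv(power_kw=292, mass_tonnes=1.0, gross_mass_tonnes=2.0))  # True
print(is_uhpv(power_kw=242, mass_tonnes=1.0, gross_mass_tonnes=2.0))  # False
```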

After December 1st, 2024, those wishing to drive a UHPV must hold a special ‘U Class’ license. Obtaining this license requires passing an online course currently being developed by the South Australian government. Furthermore, drivers must have held a regular car or heavy-vehicle license for at least three years to be eligible for a U license. There will be no retroactive exemptions: all current drivers wishing to drive UHPV-class cars will be required to take the course.

Another major change, as reported by MSN, makes it a criminal offense to disable traction control and other driver aids in an “ultra high-powered vehicle.” Specifically, the rule applies to “anti-lock braking, automated emergency braking, electronic stability control or traction control” systems, but not lane-keeping assists and parking sensors.

Drivers breaking this rule will be subject to penalties of up to $5,000 AUD. However, reasonable defenses include switching off driver aids in conditions where justified, such as if the vehicle is stuck. Similarly, a further defense exists if the driver did not disable the system themselves and was unaware of the situation. They will have to prove that, of course.

Meanwhile, if a driver crashes while in “sports mode” or with traction control disabled, and that incident causes death or serious harm, the driver will be charged with an “aggravated offense”, which carries new, harsher penalties. For example, prior to the change, the charge of “aggravated driving without due care causing death” carried a maximum 12-month jail term and a six-month driving disqualification. That has now been raised to seven years in jail and three years of disqualification. This relates directly to Campbell’s crash, which allegedly occurred in part because the Huracan’s sports mode was engaged.

[…]

Source: Special License For Supercars Will Be Required In South Australia by 2024

Whilst I agree with the idea of needing a supercar license, making it an offense to turn off driving aids is a bit sketchy for me…

Tesla Hackers Find ‘Unpatchable’ Jailbreak to Unlock Paid Features for Free

A security researcher and three PhD students from Germany have reportedly found a way to exploit Tesla’s current AMD-based cars to develop what could be the world’s first persistent “Tesla Jailbreak.”

The team published a briefing ahead of their presentation at next week’s Black Hat 2023, where they will present a working version of an attack against Tesla’s latest AMD-based media control unit (MCU). According to the researchers, the jailbreak uses an already-known hardware exploit against a component in the MCU, which ultimately enables access to critical systems that control in-car purchases — and perhaps even tricking the car into thinking these purchases are already paid for.

[…]

Tesla has started using this well-established platform to enable in-car purchases, not only for additional connectivity features but even for analog features like faster acceleration or rear heated seats. As a result, hacking the embedded car computer could allow users to unlock these features without paying.

Separately, the attack will allow researchers to extract a vehicle-specific cryptography key that is used to authenticate and authorize a vehicle within Tesla’s service network.

According to the researchers, the attack is unpatchable on current cars, meaning that no matter what software updates are pushed out by Tesla, attackers—or perhaps even DIY hackers in the future—can run arbitrary code on Tesla vehicles as long as they have physical access to the car. Specifically, the attack is unpatchable because it’s not an attack directly on a Tesla-made component, but rather against the embedded AMD Secure Processor (ASP) which lives inside of the MCU.

[…]

Tesla is a prime offender in something many car owners hate: shipping vehicles with hardware installed but locked behind software. For example, the RWD Model 3 has footwell lights installed from the factory, but they are software-disabled. Tesla also previously locked the heated steering wheel and heated rear seats behind a software paywall, but began activating them on new cars at no extra cost in 2021. There’s also the $2,000 “Acceleration Boost” upgrade for certain cars that drops half a second off the zero-to-60 time.

[…]

Source: Tesla Hackers Find ‘Unpatchable’ Jailbreak to Unlock Paid Features for Free