Dutch phones can be easily tracked online: ‘Extreme security risk’


BNR received more than 80 gigabytes of location data from data traders: the coordinates of millions of telephones, often registered dozens of times a day.

The gigantic mountain of data also includes movements of people with functions in which safety plays an important role. A senior army officer could be followed as he drove from his home in the Randstad to various military locations in the country. A destination he often visited was the Frederikazerne, headquarters of the Military Intelligence and Security Service (MIVD). The soldier confirmed the authenticity of the data to BNR by telephone.

[…]

The data also reveals the home address of someone who often visits the Penitentiary in Vught, where terrorists and serious criminals are imprisoned. A spokesperson for the Judicial Institutions Agency (DJI) confirmed that the person, who according to the Land Registry lives at this address, had actually brought a mobile phone onto the premises with permission and stated that the matter was being investigated.

These are just examples; the list of potential targets is long: up to 1,200 phones in the dataset visited the office in Zoetermeer where the National Police, National Public Prosecutor’s Office and Europol are located. Up to 70 telephones were registered at the King’s residential palace, Huis ten Bosch. At Volkel Air Base, a storage point for nuclear weapons, up to 370 telephones were counted. The National Police’s management says it is aware of the problem and is ‘looking internally to see what measures are appropriate to combat this’.

‘National security implications’

BNR had two experts inspect the dataset. “This is an extreme security risk, with possible implications for national security,” says Ralph Moonen, technical director of Secura. “It’s really shocking that this can happen like this,” says Sjoerd van der Meulen, cybersecurity specialist at DataExpert.

The technology used to track mobile phones is designed for use by advertisers, but is suitable for other purposes, says Paul Pols, former technical advisor to the Assessment Committee for the Use of Powers, which supervises the intelligence services. According to Pols, it is known that the MIVD and AIVD also purchase access to this type of data on the data market under the heading ‘open sources’. “What is striking about this case is that you can easily access large amounts of data from Dutch citizens,” said the cybersecurity expert.

For sale via an online marketplace in Berlin

That access was achieved through an online marketplace based in Berlin. On this platform, Datarade.ai, hundreds of companies offer personal data for sale. In addition to location data, medical information and credit scores are also available.

Following a tip from a data subject, BNR responded to an advertisement offering location data of Dutch users. A sales employee of the platform then contacted two medium-sized providers: Datastream Group from Florida in the US and Factori.ai from Singapore – both companies have fewer than 50 employees, according to their LinkedIn pages.

Datastream and Factori offer similar services: a subscription to the location data of mobile phones in the Netherlands is available for prices starting from $2,000 per month. Those who pay more can receive fresh data every 24 hours via the cloud, possibly even from all over the world.

[…]

Upon request, BNR was therefore sent a full month of historical data from Dutch telephones. This data was anonymized – it did not contain telephone numbers. Individual phones can nevertheless be recognized by a unique number combination, the ‘mobile advertising ID’ that Apple and Google use to show individual users relevant advertisements within the limits of European privacy legislation.

Possibly four million Dutch victims of tracking

The precise origin of the data traded online is unclear. According to the providers, it comes from apps that have received users’ permission to use location data – fitness or navigation apps, for example, that sell the data on. This is how the data ultimately ends up at Factori and Datastream. By combining data from multiple sources, gigantic files are created.

[…]

It is not difficult to recognize the owners of individual phones in the data. By linking sleeping places to data from public registers, such as the Land Registry, and workplaces to LinkedIn profiles, BNR was able to identify, in addition to the army officer, a project manager from Alphen aan den Rijn and an amateur football referee. The discovery that he had been digitally stalked for at least a month led to shocked reactions: ‘Bizarre’, and: ‘I immediately turned off sharing location data on my phone’.
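The deanonymization technique BNR describes can be sketched in a few lines: given anonymized pings keyed by advertising ID, the grid cell where a device is most often seen at night is a strong candidate for its owner’s home address, which can then be matched against public registers. The record format, field names, and coordinates below are illustrative, not those of the actual dataset.

```python
from collections import Counter
from datetime import datetime

# Hypothetical ping records: (advertising_id, lat, lon, ISO timestamp).
# Real broker data uses comparable fields; these values are made up.
pings = [
    ("ad-123", 52.0907, 5.1214, "2024-01-03T02:14:00"),
    ("ad-123", 52.0907, 5.1214, "2024-01-04T03:40:00"),
    ("ad-123", 52.0800, 5.1300, "2024-01-04T13:05:00"),
    ("ad-123", 52.0907, 5.1214, "2024-01-05T01:55:00"),
]

def likely_home(pings, ad_id, night=(22, 6), grid=0.001):
    """Guess a device's 'sleeping place': the ~100 m grid cell where
    it is observed most often during night-time hours."""
    cells = Counter()
    for pid, lat, lon, ts in pings:
        if pid != ad_id:
            continue
        hour = datetime.fromisoformat(ts).hour
        if hour >= night[0] or hour < night[1]:  # between 22:00 and 06:00
            cells[(round(lat / grid) * grid, round(lon / grid) * grid)] += 1
    return cells.most_common(1)[0][0] if cells else None

print(likely_home(pings, "ad-123"))
```

With only four pings the daytime location is discarded and the three night-time pings converge on one cell, which is the whole point: even ‘anonymized’ data paired with a stable advertising ID betrays where its owner sleeps.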

Trade is prohibited, but the government does not act

Datarade, the Berlin data marketplace, informed BNR in an email that traders on their platform are ‘fully liable’ for the data they offer. Illegal practices can be reported using an online form. The spokesperson for the German company leaves open the question of whether measures are being taken against the sale of location data.

[…]

Source (Google Translate): Dutch phones can be secretly tracked online: ‘Extreme security risk’ | BNR News Radio

Source (Dutch original): Nederlandse telefoons online stiekem te volgen: ‘Extreem veiligheidsrisico’

Swarovski’s smart binoculars identify the birds, butterflies, and mammals you’re looking at, and can mark something to share with whoever you give the binocs to next

Swarovski has turned up at CES 2024 in Las Vegas with its first ever pair of smart binoculars that will identify the bird you’re looking at. All you have to do is point the gear at a bird and make sure the view is in focus, and then press down an action button. Within a few seconds, the system will overlay a bird’s name over your view, using data pulled from the Merlin Bird ID database. That has over 9,000 species tagged, and will even let you know the degree of certainty it has if the bird in question is in an unexpected location. And if this was the only feature these binoculars had, it’d be enough to justify the purchase, but that’s only the beginning of what these things can do.

Between the eyepieces, there’s a function wheel similar to one you would find on a camera that lets you cycle between various features. That includes a Wildlife ID version which hooks into its built-in Mammal, Dragonfly and Butterfly ID databases. Plus, there’s a camera which lets you send pictures and video to a paired smartphone, which would similarly be plenty to justify the expense. But the system is also designed to be expandable, with the focus wheel including space for any future custom databases you might need. For instance, one idea could be to build a database for stars, or airplane types for aviation fans to spot the make and model of what’s flying overhead.

Then there’s the discovery sharing feature, which enables you to share something you’ve found with whoever you’re outdoors with. All you need to do is tag whatever you’ve found, and then hand the AX Visio over to them, where a series of flashing arrows will guide them to where you were looking. Even in the busy halls of CES, one of the company’s representatives was able to pinpoint a far-off fire exit sign before handing me the binoculars and asking me to find it. All you need to do is follow the arrows straight to what you’re meant to be looking at with a system that’s as elegant as it is useful. There’s even a built-in compass that’ll let you identify which direction you’re gazing toward to help you navigate.

You might notice from the pictures that there are three lenses, with the central one holding the 13-megapixel sensor shooting HD-quality (1,920 x 1,080) pictures and video. There’s 8GB storage, which should hold up to an hour of video or 1,700 photos before needing to be cleared off. Beyond the smarts, the binoculars magnify up to 10x with 88 percent light transmission, thanks to the company’s high-end lenses. Swarovski says its glassware offers almost flat, distortion-free images with plenty of contrast and color fidelity.

Now, here’s the thing, my father-in-law is a serious ornithologist who is respected, at least among his peer group. His ability to spot the genus and species of a bird in flight is extraordinary and I’m often left bewildered at the depth of his knowledge. I don’t think I’d have the ability, patience or time to even get within a hundred miles of his capability. But, with a device like this, it might mean that I can at least vaguely keep up with him when we’re out on the trails.

The AX Visio is, however, not messing around with price, and Swarovski is charging €4,600 (around $5,000) for you to get this into your hands. While bird fans often have to be patient, this should start arriving at people’s homes at some point in February.

Source: Swarovski’s smart binoculars identify the birds you’re looking at

Ancient cities discovered in the Amazon are the largest yet found

Aerial surveys have revealed the largest pre-colonial cities in the Amazon yet discovered, linked by an extensive network of roads.

“The settlements are much bigger than others in the Amazon,” says Stéphen Rostain at the French National Center for Scientific Research in Paris. “They are comparable with Maya sites.”

What’s more, at between 3000 and 1500 years old, these cities are also older than other pre-Columbian ones discovered in the Amazon. Why the people who built them disappeared isn’t clear.

It is often assumed that the Amazon rainforest was largely untouched by humans before the Italian explorer Christopher Columbus reached the Americas in the 15th century. In fact, the first Europeans reported seeing many farms and towns in the region.

These reports, long dismissed, have in recent decades been backed up by discoveries of ancient earthworks and extensive dark soils created by farmers. One estimate puts the pre-Columbian population of the Amazon as high as 8 million.

[…]

In 2015, Rostain’s team did an aerial survey with lidar, a laser scanning technique that can create a detailed 3D map of the surface beneath most vegetation, revealing features not normally visible to us. The findings, which have only now been published, show that the settlements were far more extensive than anyone realised.

The survey revealed more than 6000 raised earthen platforms within an area of 300 square kilometres. These are where wooden buildings once stood – excavations have revealed post holes and fireplaces on these structures.

[…]

The survey also revealed a network of straight roads created by digging out soil and piling it on the sides. The longest extends for at least 25 kilometres, but might continue beyond the area that was surveyed.

[…]

“This is the largest complex with large settlements so far found in Amazonia,” says Charles Clement at the National Institute of Amazonian Research in Manaus, Brazil.

What’s more, it was found in a region of the Amazon that other researchers had concluded was sparsely inhabited during pre-Columbian times, says Clement.

 

Journal reference:

Science DOI: 10.1126/science.adi6317

Source: Ancient cities discovered in the Amazon are the largest yet found | New Scientist

eBay Sent Critics a Bloody Pig Mask and More. Now It’s Paying a $3 Million Fine

eBay agreed to pay out a $3 million fine—the maximum criminal penalty—over a twisted scandal that saw top executives and other employees stalking a couple in Massachusetts who published a newsletter that criticized the company. The harassment campaign included online threats, sending employees to surveil the couple’s home, and mailing them disturbing objects—including live spiders and cockroaches, a bloody pig mask, and a book on recovering from the death of a spouse.

The Justice Department charged eBay with obstruction of justice, witness tampering, stalking through interstate travel, and stalking through online communication. eBay’s former security director James Baugh and former director of global resiliency David Harville are both serving jail time for their roles in the scheme.

[…]

The criminal activity seems to have started at the top of the company. In 2019, Ina Steiner published an article on the couple’s newsletter EcommerceBytes discussing a lawsuit eBay brought against Amazon. Half an hour later, eBay’s then-CEO Devin Wenig sent another executive a message saying: “If you are ever going to take her down…now is the time,” according to court documents. The message was forwarded to Baugh, who responded that Steiner was a “biased troll who needs to get BURNED DOWN.”

Wenig, who resigned later that year, denied any knowledge of the criminal activity and wasn’t charged with a crime. The Steiners are currently suing Wenig for his role in the campaign to “intimidate, threaten to kill, torture, terrorize, stalk and silence them.”

[…]

A total of seven eBay employees and contractors have been convicted for their involvement in stalking and harassing the Steiners, according to the Department of Justice. In addition to Baugh and Harville, the list includes Stephanie Popp and Philip Cooke, who were both sentenced to jail time in 2022. Stephanie Stockwell and Veronica Zea were each sentenced to one year of home confinement that same year. Brian Gilbert pleaded guilty and is currently awaiting sentencing.

Source: eBay Sent Critics a Bloody Pig Mask. Now It’s Paying a $3 Million Fine

Drivers would prefer to buy a low-tech car than one that shares their data

According to a survey of 2,000 Americans conducted by Kaspersky in November and published this week, 72 percent of drivers are uncomfortable with automakers sharing their data with advertisers, insurance companies, subscription services, and other third-party outfits. Specifically, 37.3 percent of those polled are “very uncomfortable” with this data sharing, and 34.5 percent are “somewhat uncomfortable.”

However, only 28 percent of the total respondents say they have any idea what kind of data their car is collecting. Spoiler alert: It’s potentially all the data. An earlier Mozilla Foundation investigation, which assessed the privacy policies and practices of 25 automakers, gave every single one a failing grade.

In Moz’s September Privacy Not Included report, the org warned that car manufacturers aren’t only potentially collecting and selling things like location history, driving habits and in-car browser histories. Some connected cars may also track drivers’ sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, if that information becomes available.

Back to the Kaspersky survey: 87 percent said automakers should be required to delete their data upon request. Depending on where you live, and thus the privacy law you’re under, the manufacturers may be obligated to do so.

Oddly, while motorists are worried about their cars sharing their data with third parties, they don’t seem that concerned about their vehicles snooping on them in the first place.

Less than half (41.8 percent) of respondents said they are worried that their vehicle’s sensors, infotainment system, cameras, microphones, and other connected apps and services might be collecting their personal data. And 80 percent of respondents pair their phone with their car anyway, allowing data and details of activities to be exchanged between apps and the vehicle and potentially its manufacturer.

This echoes another survey published this week that found many drivers are willing to trade their personal data and privacy for driver personalization — things like seat, mirror, and entertainment preferences (43 percent) — and better insurance rates (67 percent).

The study also surveyed 2,000 American drivers to come up with these numbers and found that while most drivers (68 percent) don’t mind automakers collecting their personal data, only five percent believe this surveillance should be unrestricted, and 63 percent said it should be on an opt-in basis.

Perhaps it’s time for vehicle makers to take note.

Source: Surveyed drivers prefer low-tech cars over data-sharing ones • The Register

Also, we want buttons back too please.

Apple knew AirDrop users could be identified and tracked as early as 2019. Still not fixed.

Security researchers warned Apple as early as 2019 about vulnerabilities in its AirDrop wireless sharing function that Chinese authorities claim they recently used to track down users of the feature, the researchers told CNN, in a case that experts say has sweeping implications for global privacy.

The Chinese government’s actions targeting a tool that Apple customers around the world use to share photos and documents — and Apple’s apparent inaction to address the flaws — revive longstanding concerns by US lawmakers and privacy advocates about Apple’s relationship with China and about authoritarian regimes’ ability to twist US tech products to their own ends.

[…]

A Chinese tech firm, Beijing-based Wangshendongjian Technology, was able to compromise AirDrop to identify users on the Beijing subway accused of sharing “inappropriate information,” judicial authorities in Beijing said this week.

[…]

A group of Germany-based researchers at the Technical University of Darmstadt, who first discovered the flaws in 2019, told CNN Thursday they had confirmation Apple received their original report at the time but that the company appears not to have acted on the findings. The same group published a proposed fix for the issue in 2021, but Apple appears not to have implemented it, the researchers said.

[…]

Chinese authorities claim they exploited the vulnerabilities by collecting some of the basic identifying information that must be transferred between two Apple devices when they use AirDrop — data including device names, email addresses and phone numbers.

Ordinarily, this information is scrambled for privacy reasons. But, according to a separate 2021 analysis of the Darmstadt research by the UK-based cybersecurity firm Sophos, Apple appeared not to have taken the extra precaution of adding bogus data to the mix to further randomize the results — a process known as “salting.”
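The underlying weakness is easy to illustrate: identifiers like phone numbers come from a small, predictable space, so an unsalted hash of one can be reversed by simple enumeration, and precomputed tables can be reused against every device. The sketch below is a generic demonstration of that principle, not Apple’s actual AirDrop protocol; the phone number and the +316 Dutch-mobile pattern are used purely as an example.

```python
import hashlib

def sha256_hex(s: str) -> str:
    """Unsalted SHA-256 of a string, hex-encoded."""
    return hashlib.sha256(s.encode()).hexdigest()

# An unsalted hash of a phone number, as an eavesdropper might capture it.
# The number itself is made up for this demonstration.
target = sha256_hex("+31600000042")

def brute_force(target_hash, limit=100_000_000):
    """Recover a phone number from its unsalted hash by enumerating the
    whole +316XXXXXXXX space (only 10^8 candidates - trivial offline)."""
    for n in range(limit):
        candidate = f"+316{n:08d}"
        if sha256_hex(candidate) == target_hash:
            return candidate
    return None
```

A random per-message salt would defeat this: the attacker could no longer precompute hashes for the whole number space once and compare them against every capture, which is exactly the missing precaution the Sophos analysis pointed to.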

[…]

One reason Chinese officials may have wanted their exploit known, said Ismail, is that it could scare dissidents away from using AirDrop.

And now that the Beijing authorities have announced they exploited the vulnerability, Apple may face retaliation from Chinese authorities if the tech firm tries to fix the issue, multiple experts said.

China is the largest foreign market for Apple’s products, with sales there representing about a fifth of the company’s total revenue in 2022.

[…]

Source: Apple knew AirDrop users could be identified and tracked as early as 2019, researchers say | CNN Business

The C SEED Unfolding TV

[…] The C SEED N1 TV unveiled at CES 2024 is now making global waves. This revolutionary device boasts vivid 4K resolution, a Micro LED screen in 165-, 137-, or 103-inch sizes, and 180-degree rotation – and that’s just the tip of the iceberg.

[…]

This is the first unfolding TV for indoors.

[…]

Source: The C SEED N1 TV: Unfolding the Future of Television Technology | by Jeffrey Clos | Jan, 2024 | Medium

C SEED leads the way with the patented game-changing Adaptive Gap Calibration system: AGC is an automatic distance measuring and calibration system that creates totally seamless foldable 4K/8K TV surfaces, free from any visible gaps. High-resolution sensors detect potential offsets between the folding TV wings, measuring fractions of millimeters and autonomously calibrating the corresponding LEDs’ specific brightness to render gaps invisible. C SEED’s AGC technology guarantees the perfectly seamless indoor TV experience.

Source: C SEED M1 4K 165, 137 & 103 TV

Augmental MouthPad – a Bluetooth tongue-operated trackpad for your phone or PC that fits like a retainer on the roof of your mouth

 

The MouthPad is a tongue-driven interface that controls your computer, smartphone, or tablet via Bluetooth. Virtually invisible to the world, but always available to you, it is positioned across the roof of your mouth to put all of the power of a conventional touchpad at the tip of your tongue.

 

Source: Augmental – Home

Interesting concept

US wants private sector AI exempt from Human Rights laws. EU pushes back.

[…]

The Council of Europe, an international human rights body with 46 member countries, set up the Committee on Artificial Intelligence at the beginning of 2022 to develop the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.

[…]

The most consequential pending issue regards the scope of the convention. In June, Euractiv revealed how the United States, which is participating as an observer country, was pushing for exempting companies by default, leaving it up to the signatory countries to decide whether to ‘opt-in’ the private sector.

[…]

“The Union should not agree with the alternative proposal(s) that limit the scope of the convention to activities within the lifecycle of artificial intelligence systems by or on behalf of a Party or allow application to the private sector only via an additional protocol or voluntary declarations by the Parties (opt-in),” reads an information note from the Commission, obtained by Euractiv.

The document notes that these proposals would limit the treaty’s scope by default, “thus diminishing its value and sending a wrong political message that human rights in the private field do not merit the same protection.”

The EU executive notes how this approach would contradict international law that requires the respect of human rights by private entities.

[…]

During the AI Act discussion, one hot debate was around a national security exemption France has been pushing for in the context of the AI convention.

In this regard, the Commission is pushing for an explicit exclusion of AI systems exclusively developed for national security, military and defence purposes in a manner that is consistent with the EU’s AI law.

[…]

Brussels does not seem to have any appetite for the AI treaty to go beyond the AI Act, even on matters where there is not necessarily a conflict, and the convention could have been more ambitious.

A complete overlap of the international treaty with the EU regulation is not a given since the former is meant to protect human rights, while the latter is merely intended to harmonise the EU market rules following a traditional product safety blueprint.

[…]

Similarly, since the AI Act bans specific applications like social scoring deemed to pose an unacceptable risk, the Commission is pushing for extending these prohibitions at the international level via a moratorium or a ban as this would “increase the added value of the convention”.

The only significant exception where the EU executive seems keen to go beyond the AI Act (but still in line with Union law) is in supporting a provision that protects whistle-blowers in the implementation of the convention – one that the UK, Canada and Estonia have opposed.

Source: EU prepares to push back on private sector carve-out from international AI treaty – Euractiv

Airlines United and Alaska find loose bolts on Boeing 737 Max 9 planes after window blowout

Alaska Airlines and United found loose parts on multiple 737 Max 9 aircraft, they have said, referring to the Boeing model grounded after a panel blew off an Alaska Airlines-operated plane mid-flight over the weekend.

The industry publication Air Current reported that United found discrepant bolts on other parts of at least five panels that were being inspected following the accident. The US Federal Aviation Administration (FAA) and Boeing declined to comment.

“Since we began preliminary inspections on Saturday, we have found instances that appear to relate to installation issues in the door plug. For example, bolts that needed additional tightening. These findings will be remedied by our tech ops team to safely return the aircraft to service,” United said in a statement.

A cabin panel on a brand-new Alaska Airlines 737 Max blew out on Friday at 16,000ft, forcing the plane to make an emergency landing shortly after its takeoff from Portland, Oregon. No serious injuries were reported.

[…]

On Monday evening, Alaska Airlines released a statement indicating that maintenance technicians had found issues when inspecting their 737 Max 9 fleet. “Initial reports from our technicians indicate some loose hardware was visible on some aircraft,” the statement said.

[…]

Source: Airlines United and Alaska find loose bolts on Boeing 737 Max 9 planes | Air transport | The Guardian

HP sued (again) for blocking third-party ink from printers via security updates

HP has used its “Dynamic Security” firmware updates to “create a monopoly” of replacement printer ink cartridges, a lawsuit filed against the company on January 5 claims. The lawsuit, which is seeking class-action certification, represents yet another form of litigation against HP for bricking printers when they try to use ink that doesn’t bear an HP logo.

The lawsuit (PDF), which was filed in US District Court in the Northern District of Illinois, names 11 plaintiffs and seeks an injunction against HP requiring the company to disable its printer firmware updates from preventing the use of non-HP branded ink. The lawsuit also seeks monetary damages greater than $5,000,000 and a trial by jury.

The lawsuit focuses on HP printer firmware updates issued in late 2022 and early 2023 that left users seeing this message on their printers when they tried to print with non-HP ink:

The lawsuit cites this pop-up message users saw.

HP was wrong to issue a firmware update affecting printer functionality, and users were not notified that accepting firmware updates “could damage any features of the printer,” the lawsuit says. The lawsuit also questions HP’s practice of encouraging people to register their printers and then quietly releasing updates that change the printers’ functionality. Additionally, the lawsuit highlights the fact that the use of non-HP ink cartridges doesn’t break HP’s printer warranty.

The filing reads:

… it is not practical or economically rational to purchase a new printer in order to avoid purchasing HP replacement ink cartridges. Therefore, once consumers purchase their printers, the Dynamic Security firmware updates lock them into purchasing HP-branded ink.

HP is proud of its strategy of locking in printer customers. Last month, HP CFO Marie Myers praised the company’s movement from transactional models to forcing customers into continuous buys through offerings like Instant Ink, HP’s monthly ink subscription program.

“We absolutely see when you move a customer from that pure transactional model … whether it’s [to] Instant Ink, plus adding on that paper, we sort of see a 20 percent uplift on the value of that customer because you’re locking that person, committing to a longer-term relationship,” Myers said, as quoted by The Register.

[…]

The lawsuit accuses HP of raising prices on its ink “in the same time period” that it issued its late 2022 and early 2023 firmware updates, which “create[d] a monopoly in the aftermarket for replacement cartridges, permitting [HP] to raise prices without fear of being undercut by competitors.”

[…]

HP’s decision to use firmware updates to brick printers using non-HP ink has landed it in litigation numerous times since Dynamic Security debuted in 2016. While the recently filed case is still in its early stages, it’s another example of how disgruntled users have become with HP seizing control over the type of ink that customers insert into hardware they own.

For example, HP agreed to pay $1.5 million in 2019 to settle a class-action case in California about Dynamic Security.

Overseas, HP paid European customers $1.35 million for Dynamic Security. It also paid a 10,000,000-euro fine to the Italian Antitrust Authority in 2020 over the practice and agreed to pay approximately AUD$50 each to Australian customers in 2018.

In addition to the lawsuit filed earlier this month, HP is facing a lawsuit filed in California in 2020 over an alleged failure to disclose information about Dynamic Security. As noted by Reuters, in December, a Northern District of California judge ruled (PDF) that the lawsuit may not result in monetary rewards, but plaintiffs may seek an injunction against the practice.

HP has also been fighting a lawsuit complaining about some of its printers refusing to scan and/or fax without HP ink loaded into the device, even though ink isn’t required to scan or fax a document. (This is something other printer companies are guilty of, too).

Despite already enduring payouts regarding Dynamic Security and calls for HP printers to be ousted from the Electronic Product Environmental Assessment Tool (EPEAT) registry, HP seems committed to using firmware updates to try to control how people use their own printers.

[…]

Source: HP sued (again) for blocking third-party ink from printers, accused of monopoly | Ars Technica

Text-to-3D model startup Luma raises $43M in latest round

Luma, a generative AI startup building software that transforms text descriptions into corresponding 3D models, just raised $43 million (£34 million) in a series-B funding round led by Andreessen Horowitz, Nvidia, and others.

Founded in 2021 by CEO Amit Jain, a former systems engineer working on computer vision at Apple, and CTO Alex Yu, a graduate student from the University of California, Berkeley, Luma AI develops machine-learning software that goes a step beyond what we’ve seen from most existing generative neural networks.

Unlike text-to-image models that emit flat bitmaps of digital art, Luma uses AI to create from photos, videos, or text descriptions three-dimensional models of objects that can be downloaded, manipulated, edited, and rendered as needed.

The upstart, based in Palo Alto, California, has already launched this technology as an app called Genie – available via the web, as the Luma iOS app, and via Discord – which is capable of converting images and video into 3D scenes or producing 3D models of user-described objects. These machine-made models can be previewed on screen and exported to art packages like Blender, popular game engines like Unreal or Unity, and other tools for further use.

Screenshot of Genie’s attempt at creating a vulture holding a cup of coffee for us … This was generated from the simple prompt: a vulture with a cup of coffee.

[…]

Luma says it uses various proprietary computer vision techniques, from image segmentation to meshing, to generate these 3D models from footage and descriptions. The models could well end up being used in video games, virtual reality applications, simulations, or robotics testing.

Some folks may find this technology rather useful if they have an idea for something involving 3D graphics or a 3D scene, and lack the artistic talent or skill to create the necessary models.

[…]

We toyed with the system and found Genie’s output looked kinda cute but may not be for everyone at this stage; the examples given by the upstart looked better than what we could come up with, perhaps because they were produced via the startup’s paid-for API while we were using the freebie version.

That paid-for interface costs a dollar a pop to construct a model or scene from supplied assets. The biz even makes the point that this is cheaper and faster than relying on a human designer. If you can out-draw Genie, you don’t have anything to worry about right now.

Luma previously raised $20 million in a series-A round led by Amplify Partners, Nventures (Nvidia’s investment arm), and General Catalyst. Other investors included Matrix Partners, South Park Commons, and Remote First Capital. After raising a total of over $70 million so far, it has a valuation estimated between $200 million and $300 million.

Source: Text-to-3D model startup Luma raises $43M in latest round • The Register

Swatting a cancer hospital’s patients after hack is now a thing

After intruders broke into Seattle’s Fred Hutchinson Cancer Center’s IT network in November and stole medical records – everything from Social Security numbers to diagnoses and lab results – miscreants threatened to turn on the patients themselves directly.

The idea being, it seems, that those patients and the media coverage from any swatting will put pressure on the US hospital to pay up and end the extortion. Other crews do something similar when attacking IT service providers: they don’t just extort the suppliers, they also threaten or further extort customers of those providers.

[…]

The cancer center, which operates more than 10 clinics in Washington’s Puget Sound region, declined to answer additional questions about the threats.

Another health network in Oklahoma — Integris Health, which operates a network of 15 hospitals and 43 clinics — last month notified patients about a similar “cyber event” in which criminals may have accessed personal data. Shortly after, some of these patients reported receiving emails from miscreants threatening to sell their information on the dark web.

[…]

Sam Rubin, VP of Unit 42 Consulting at Palo Alto Networks, told The Register his team hadn’t seen any swatting attempts by extortion crews in 2023, though such a shift in tactics seems likely.

“But I’m not surprised at all,” he added, about the reports of Seattle cancer patients potentially receiving these types of threats.

“If you look over the past couple of years, we’ve seen this continuing evolution of escalating extortion tactics,” Rubin said. “If you go back in time, it was just encryption.”

Over the past year, Unit 42 has seen cybercriminals send threatening texts to the spouse of a CEO whose organization was being extorted, Rubin added, again piling on the pressure for payment. The consulting and incident response unit has also witnessed miscreants sending flowers to a victim company’s executive team, and issuing ransom demands via printers connected to the affected firm’s network.

“We had another one where the victim organization decided not to pay, but then the ransomware actors went on to harass customers of that organization,” Rubin said.

[…]

Meanwhile, ransomware attacks against critical infrastructure, including hospitals, are becoming more frequent. Emsisoft reported 46 infections of US hospital networks last year alone, up from 25 in 2022. In total, at least 141 hospitals were infected, and at least 32 of the 46 networks had data — including protected health information — stolen.

It’s bad enough that these attacks have diverted ambulances and postponed critical care for patients, and now the criminals are inflicting even more pain on people. Last year this included leaking breast cancer patients’ nudes. Swatting seems to be the next, albeit abhorrent, step.

Source: Swatting: The new normal in ransomware extortion tactics • The Register

Samsung debuts transparent MicroLED screen

Samsung showcased its transparent MicroLED display side-by-side next to transparent OLED and transparent LCD models to really highlight the differences between the tech. Compared to the others, not only was the MicroLED panel significantly brighter, it also featured a completely frameless design and a more transparent glass panel that made it easier to see objects behind it.

A side view of what Samsung is calling the world's first transparent micro LED display.
Photo by Sam Rutherford/Engadget

In person, the effect Samsung’s transparent MicroLED displays have is hard to describe, as content almost looks like a hologram as it floats in mid-air. The demo unit was freestanding and measured only about a centimeter thick, which adds even more to the illusion of a floating screen. Additionally, because of MicroLED’s high pixel density, images also looked incredibly sharp.

[…]

The bad news is that with Samsung’s current crop of non-transparent MicroLED TVs costing $150,000 for a 110-inch model, it’s going to be quite a long time until these new displays become anything close to affordable.

Source: Samsung debuts the world’s first transparent MicroLED screen at CES 2024

LG has a Fully Transparent TV

LG announced a new transparent TV at the Consumer Electronics Show in Las Vegas this week. Gizmodo’s staff got to check it out in person, and it’s gorgeous. LG claims this is the world’s first wireless transparent OLED TV and is calling it the Signature OLED T (T for transparent).

The OLED T is merely a transparent panel that plays your content without invading your space with a large, black, obtrusive screen. LG argues that this will help create an illusion of your room looking larger than it would with a regular screen. And in our team’s brief experience with the product, that’s true. The sense of openness that would come from not having a huge, dark blob in the room is one of the coolest things about this TV.

The LG OLED T is a massive 77 inches. But when it’s turned off, it simply blends with the environment and makes you forget it’s even there. In fact, that’s one of the reasons why you can place it anywhere you want, unlike a traditional TV that typically has to go in front of a wall. The OLED T can even be placed in front of a window without obstructing your view. The TV is fully wireless, so you don’t have to worry about sockets, either. The Zero Connect Box that the TV ships with also doesn’t need any wires between itself and the screen.

[…]

As for pricing, all LG told Gizmodo was that it will be “very expensive”.

Source: LG Just Announced a Fully Transparent TV

Biophotons: Are lentils communicating using quantum light messages?

[…]

Curceanu hopes the apparatus and methods of nuclear physics can solve the century-old mystery of why lentils – and other organisms too – constantly emit an extremely weak dribble of photons, or particles of light. Some reckon these “biophotons” are of no consequence. Others insist they are a subtle form of lentil communication. Curceanu leans towards the latter camp – and she has a hunch that the pulses between the pulses might even contain secret quantum signals. “These are only the first steps, but it looks extremely interesting,” she says.

There are already hints that living things make use of quantum phenomena, with inconclusive evidence that they feature in photosynthesis and the way birds navigate, among other things. But lentils, not known for their complex behaviour, would be the most startling example yet of quantum biology, says Michal Cifra at the Czech Academy of Sciences in Prague. “It would be amazing,” says Cifra. “If it’s true.” Since so many organisms emit biophotons, such a discovery might indicate that quantum effects are ubiquitous in nature.

Biophotons

Biophotons have had scientists stumped for precisely a century. In 1923, biologist Alexander Gurwitsch was studying how plant cells divide by placing onion roots near each other. The closer the roots were, the more cell division occurred, suggesting there was some signal alerting the roots to their neighbour’s presence.

[…]

To tease out how the onion roots were signalling, Gurwitsch repeated the experiment with all manner of physical barriers between the roots. Wood, metal, glass and even gelatine dampened cell division to the same level seen in single onion roots. But, to Gurwitsch’s surprise, a quartz divider had no effect. Compared to glass, quartz allows far more ultraviolet rays to pass through. Some kind of weak emission of UV radiation, he concluded, must be responsible.

[…]

Living organisms have long been known to communicate using light. Jellyfish, mushrooms and fireflies, to name just a few, glow or emit bright flashes to ward off enemies or attract a mate. But these obvious signals, known as bioluminescence, are different to the effect Gurwitsch had unearthed. Biophotons are “a very low-intensity light, not visible to the naked eye”, says Curceanu’s collaborator Maurizio Benfatto. In fact, biophotons were so weak that it took until 1954 to develop equipment sensitive enough to decisively confirm Gurwitsch’s idea.

Since then, dozens of research groups have reported cases of biophoton emission having a useful function in plants and even animals. Like onion roots, yeast cells are known to influence the growth rate of their neighbours. And in 2022, Zsolt Pónya and Katalin Somfalvi-Tóth at the University of Kaposvár in Hungary observed biophotons being emitted by sunflowers when they were put under stress, which the researchers hoped to use to precisely monitor these crops. Elsewhere, a review carried out by Roeland Van Wijk and Eduard Van Wijk, now at the research company MELUNA in the Netherlands, suggested that biophotons may play a role in various human health conditions, from ageing to acne.

There is a simple explanation for how biophotons are created, too. During normal metabolism, chemical reactions in cells end up converting biomolecules to what researchers call an excited state, where electrons are elevated to higher energy levels. Those electrons then naturally drop to their ground state and emit a photon in the process. Because germinating seeds, like lentils, burn energy quickly to grow, they emit more biophotons.
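The arithmetic behind those emissions is straightforward: a photon's energy is fixed by its wavelength via E = hc/λ. A short sketch (our own illustration, not from the article) shows why Gurwitsch's quartz result pointed at ultraviolet light: a photon at roughly 310 nm carries about 4 eV, the energy scale of electronic transitions in biomolecules, while a visible 550 nm photon carries only about 2.3 eV.

```python
# Photon energy from wavelength: E = h*c / lambda.
# The constants below are standard physical values.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy in electronvolts of a photon with the given wavelength."""
    wavelength_m = wavelength_nm * 1e-9
    return H * C / wavelength_m / EV

# A UV biophoton near 310 nm carries about 4 eV, enough to drive
# electronic transitions; a visible 550 nm photon carries about 2.3 eV.
print(photon_energy_ev(310))  # ~4.0
print(photon_energy_ev(550))  # ~2.25
```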

Today, no one doubts that biophotons exist. Rather, the dispute is over whether lentils and other organisms have harnessed biophotons in a useful way.

[…]

We know that plants communicate using chemicals and sometimes even emit ultrasonic squeaks when stressed. This allows them to control their growth, warn each other about invading insects and attract pollinators. We also know they have ways of detecting and responding to photons in the form of regular sunlight. “Biological systems can detect photons and have feedback loops based on that,”

[…]

Curceanu and Benfatto are hoping that the application of serious physics equipment to this problem could finally let us eavesdrop on the legume’s secrets. They typically use supersensitive detectors to probe the foundations of reality. Now, they are applying these to a box of 75 lentil seeds – they need that many because if they used any fewer, the biophoton signals would be too weak.

[…]

Years ago, Benfatto came across a paper on biophotons and noticed there appeared to be patterns in the way they were produced. The intensity would swell, then fall away, almost like music. This gave him the idea of applying a method from physics called diffusion entropy analysis to investigate these patterns. The method provides a means of characterising the mathematical structures that underlie complex patterns. Imagine comparing a simple drumbeat with the melody of a pop song, for example – the method Benfatto wanted to apply could quantify the complexity embodied in each.

To apply this to the lentils, Benfatto, Curceanu and their colleagues put their seeds in a black box that shielded them from interference. Outside the box, they mounted an instrument capable of detecting single biophotons. They also had rotating filters that allowed them to detect photons with different wavelengths. All that remained was to set the lentils growing. “We add water and then we wait,” says Benfatto.

In 2021, they unveiled their initial findings. It turned out that the biophotons’ signals changed significantly during the lentils’ germination. During the first phase, the photons were emitted in a pattern that repeatedly reset, like a piece of music changing tempo. Then, during the second phase, the emissions took the form of another kind of complex pattern called fractional Brownian motion.

 

Photograph provided by Catalina Oana Curceanu showing the experimental setup used for the research paper: Biophotons and Emergence of Quantum Coherence – A Diffusion Entropy Analysis

Are these germinating lentils communicating in quantum code?

Catalina Curceanu

 

The fact that the lentils’ biophoton emissions aren’t random is an indication that they could be communicating, says Benfatto. And that’s not all. Tantalisingly, the complexity in the second phase of the emissions is mathematically related to the equations of quantum mechanics. For this reason, Benfatto says his team’s work hints that signals displaying quantum coherence could have a role in directing lentil germination.

[…]

Part of the problem with designing experiments like these is that we don’t really know what quantum mechanical effects in living organisms look like. Any quantum effects discovered in lentils and other organisms would be “very different to textbook quantum mechanics”, says Scholes.

[…]

So far, the evidence for quantum lentils is sketchy. Still, he is pushing ahead with a new experimental design that makes the signal-to-noise ratio 100 times better. If you want to earwig on the clandestine whispers of these seeds, it might just help to get rid of their noisy neighbours, which is why he will study one germinating lentil at a time.

Source: Biophotons: Are lentils sending secret quantum messages? | New Scientist

Google password resets not enough to stop malware that recreates login tokens

A zero-day exploit of Google account security was first teased in October 2023 by a cybercriminal known as “PRISMA,” who boasted that the technique could be used to log back into a victim’s account even after the password is changed. It can also be used to generate new session tokens to regain access to victims’ emails, cloud storage, and more as necessary.

Since then, developers of info-stealer malware – primarily targeting Windows, it seems – have steadily implemented the exploit in their code. The total number of known malware families that abuse the vulnerability stands at six, including Lumma and Rhadamanthys, while Eternity Stealer is also working on an update to release in the near future.

They’re called info stealers because once they’re running on some poor sap’s computer, they go to work finding sensitive information – such as remote desktop credentials, website cookies, and cryptowallets – on the local host and leaking them to remote servers run by miscreants.

Eggheads at CloudSEK say they found the root of the Google account exploit to be in the undocumented Google OAuth endpoint “MultiLogin.”

The exploit revolves around stealing victims’ session tokens. That is to say, malware first infects a person’s PC – typically via malicious spam or a dodgy download, etc – and then scours the machine for, among other things, web browser session cookies that can be used to log into accounts.

Those session tokens are then exfiltrated to the malware’s operators to enter and hijack those accounts. It turns out that these tokens can still be used to log in even if the user realizes they’ve been compromised and changes their Google password.

Here’s an important part: It appears users who’ve had their cookies stolen should log out entirely, and thus invalidate their session tokens, to prevent exploitation.

[…]

Reverse engineering the info-stealer malware revealed that the account IDs and auth-login tokens from logged-in Google accounts are taken from the token_service table of WebData in Chrome.

This table contains two columns crucial to the exploit’s functionality: service (contains a GAIA ID) and encrypted_token. The latter is decrypted using a key stored in Chrome’s Local State file, which resides in the UserData directory.

The stolen token:GAIA ID pairs can then be used together with MultiLogin to continually regenerate Google service cookies even after passwords have been reset, and those can be used to log in.
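To make the mechanics concrete, here is a rough sketch of what enumerating that table looks like. This is our illustration, not CloudSEK's code: the schema details are an assumption based on the description above, it runs against a throwaway mock database rather than a real Chrome profile, and the decryption step using the key in Chrome's Local State is deliberately omitted.

```python
import os
import sqlite3
import tempfile

def list_token_entries(db_path):
    """Enumerate (service, token-blob length) rows from a token_service table."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT service, length(encrypted_token) FROM token_service"
        ).fetchall()
    finally:
        con.close()

def make_mock_webdata(path):
    """Build a mock WebData database with the two columns described above:
    'service' (holding a GAIA ID) and 'encrypted_token'. Values are invented."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE token_service (service TEXT, encrypted_token BLOB)")
    con.execute(
        "INSERT INTO token_service VALUES (?, ?)",
        ("gaia-id-example", b"\x01\x02\x03"),
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "WebData")
    make_mock_webdata(path)
    print(list_token_entries(path))  # [('gaia-id-example', 3)]
```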

[…]

Google has confirmed the issue and advises that if you’ve had your session tokens stolen by local malware, you shouldn’t just change your password: log out to invalidate those cookies, and/or revoke access to compromised devices.

[…]

Source: Google password resets not enough to stop this malware • The Register

23andMe tells victims it’s their fault that their data was breached. DNA data, it turns out, is extremely sensitive!

Facing more than 30 lawsuits from victims of its massive data breach, 23andMe is now deflecting the blame to the victims themselves in an attempt to absolve itself from any responsibility, according to a letter sent to a group of victims seen by TechCrunch.

“Rather than acknowledge its role in this data security disaster, 23andMe has apparently decided to leave its customers out to dry while downplaying the seriousness of these events,” Hassan Zavareei, one of the lawyers representing the victims who received the letter from 23andMe, told TechCrunch in an email.

In December, 23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users, nearly half of all its customers.

The data breach started with hackers accessing only around 14,000 user accounts. The hackers broke into this first set of victims by trying passwords already known to be associated with the targeted customers – typically credentials reused from breaches of other services – a technique known as credential stuffing.

From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because those users had opted in to 23andMe’s DNA Relatives feature. This optional feature allows customers to automatically share some of their data with people who are considered their relatives on the platform.

In other words, by hacking into only 14,000 customers’ accounts, the hackers subsequently scraped personal data of another 6.9 million customers whose accounts were not directly hacked.
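The amplification is easy to model. In this toy sketch (invented data; 23andMe's real matching logic is obviously more involved), each compromised account exposes the profile data of every account it is matched with, so a small seed set can reach a far larger population.

```python
def exposed_profiles(matches, compromised):
    """Accounts whose profile data is visible once `compromised` are breached.

    `matches` maps each account to the set of accounts it shares data with
    through an opt-in relative-matching feature.
    """
    exposed = set(compromised)
    for account in compromised:
        exposed |= matches.get(account, set())
    return exposed

# Toy network: breaching just 'a' and 'b' exposes four other profiles too.
matches = {
    "a": {"c", "d", "e"},
    "b": {"e", "f"},
    "g": {"h"},
}
print(sorted(exposed_profiles(matches, {"a", "b"})))
# ['a', 'b', 'c', 'd', 'e', 'f']
```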

But in a letter sent to a group of hundreds of 23andMe users who are now suing the company, 23andMe said that “users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe.”

“Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures,” the letter reads.

Zavareei said that 23andMe is “shamelessly” blaming the victims of the data breach.

[…]

“The breach impacted millions of consumers whose data was exposed through the DNA Relatives feature on 23andMe’s platform, not because they used recycled passwords. Of those millions, only a few thousand accounts were compromised due to credential stuffing. 23andMe’s attempt to shirk responsibility by blaming its customers does nothing for these millions of consumers whose data was compromised through no fault of their own whatsoever,” said Zavareei.

[…]

In an attempt to pre-empt the inevitable class action lawsuits and mass arbitration claims, 23andMe changed its terms of service to make it more difficult for victims to band together when filing a legal claim against the company. Lawyers with experience representing data breach victims told TechCrunch that the changes were “cynical,” “self-serving” and “a desperate attempt” to protect itself and deter customers from going after the company.

Clearly, the changes didn’t stop what is now a flurry of class action lawsuits.

Source: 23andMe tells victims it’s their fault that their data was breached | TechCrunch

Twitch Is Being American Strange and Bans Implied Nakedness In Response To ‘Nudity Meta’

As December 2023 was underway, some streamers cleverly thought to play around with Twitch’s restrictions around nudity, broadcasting in such a fashion that implied they were completely naked on camera. Twitch, in response, began banning folks before shifting gears to allow various forms of “artistic nudity” to proliferate on the platform. However, after immediately rescinding the decision and expressing that being naked while livestreaming is a no-no, the company is now making it clear that implied nudity is also forbidden, and that anyone who tries to circumvent the rules will face disciplinary action.

In a January 3 blog post, the company laid out the new guidelines regarding implied nudity on the platform, which is now prohibited effective immediately. Anyone who shows skin that the rules deem should be covered—think genitals, nipples “for those who present as women,” and the like—will face “an enforcement action,” though Twitch didn’t specify what that means. So, if you’re wearing sheer or partially see-through clothing, or use black bars to cover your private parts, then you’re more than likely to get hit with some sort of discipline.

“We don’t permit streamers to be fully or partially nude, including exposing genitals or buttocks. Nor do we permit streamers to imply or suggest that they are fully or partially nude, including, but not limited to, covering breasts or genitals with objects or censor bars,” the company said in the blog post. “We do not permit the visible outline of genitals, even when covered. Broadcasting nude or partially nude minors is always prohibited, regardless of context. For those who present as women, we ask that you cover your nipples and do not expose underbust. Cleavage is unrestricted as long as these coverage requirements are met and it is clear that the streamer is wearing clothing. For all streamers, you must cover the area extending from your hips to the bottom of your pelvis and buttocks.”

[…]

At the beginning of December, some streamers, including Morgpie and LivStixs, began broadcasting in what appeared to be the complete nude. In actuality, these content creators were implying nudity by positioning their cameras at the right angle so as to show plenty of unobscured cleavage but keep nipples out of sight. “Artistic nudity” is what it was called and, as the meta took over the platform, Twitch conceded, allowing such nakedness to proliferate all over livestreams.

[…]

Company CEO Dan Clancy said on December 15 that “depictions of real or fictional nudity won’t be allowed on Twitch, regardless of the medium.” He also apologized for the confusion this whole situation has caused, saying that part of Twitch’s job is “to make adjustments that serve the community.” So be careful, streamers. If you show up nude on the platform, Twitch will come for you.

Source: Twitch Bans Implied Nakedness In Response To ‘Nudity Meta’

What is wrong with these people?! If you don’t want to see (almost) nudity, you can always just change channel!

Generative AI Will Be A Huge Boon For The Public Domain, Unless Copyright Blocks It

A year ago, I noted that many of Walled Culture’s illustrations were being produced using generative AI. During that time, AI has developed rapidly. For example, in the field of images, OpenAI has introduced DALL-E 3 in ChatGPT:

When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.

Ars Technica has written a good intro to the new DALL-E 3, describing it as “a wake-up call for visual artists” in terms of its advanced capabilities. The article naturally touches on the current situation regarding copyright for these creations:

In the United States, purely AI-generated art cannot currently be copyrighted and exists in the public domain. It’s not cut and dried, though, because the US Copyright Office has supported the idea of allowing copyright protection for AI-generated artwork that has been appreciably altered by humans or incorporated into a larger work.

The article goes on to explore an interesting aspect of that situation:

there’s suddenly a huge new pool of public domain media to work with, and it’s often “open source”—as in, many people share the prompts and recipes used to create the artworks so that others can replicate and build on them. That spirit of sharing has been behind the popularity of the Midjourney community on Discord, for example, where people typically freely see each other’s prompts.

When several mesmerizing AI-generated spiral images went viral in September, the AI art community on Reddit quickly built off of the trend since the originator detailed his workflow publicly. People created their own variations and simplified the tools used in creating the optical illusions. It was a good example of what the future of an “open source creative media” or “open source generative media” landscape might look like (to play with a few terms).

There are two important points there. First, that the current, admittedly tentative, status of generative AI creations as being outside the copyright system means that many of them, perhaps most, are available for anyone to use in any way. Generative AI could drive a massive expansion of the public domain, acting as a welcome antidote to constant attempts to enclose the public domain by re-imposing copyright on older works – for example, as attempted by galleries and museums.

The second point is that without the shackles of copyright, these creations can form the basis of collaborative works among artists willing to embrace that approach, and to work with this new technology in new ways. That’s a really exciting possibility that has been hard to implement without recourse to legal approaches like Creative Commons. Although the intention there is laudable, most people don’t really want to worry about the finer points of licensing – not least out of fear that they might get it wrong, and be sued by the famously litigious copyright industry.

A situation in which generative AI creations are unequivocally in the public domain could unleash a flood of pent-up creativity. Unfortunately, as the Ars Technica article rightly points out, the status of AI generated artworks is already slightly unclear. We can expect the copyright world to push hard to exploit that opening, and to demand that everything created by computers should be locked down under copyright for decades, just as human inspiration generally is from the moment it is in a fixed form. Artists should enjoy this new freedom to explore and build on generative AI images while they can – it may not last.

Source: Generative AI Will Be A Huge Boon For The Public Domain, Unless Copyright Blocks It | Techdirt

Developing An App For Reduced-Gravity Flying

You’ve likely heard of the “vomit comet” — a rather graphic nickname for the aircraft used to provide short bursts of near-weightlessness by flying along a parabolic trajectory. They’re used to train astronauts, perform zero-g experiments, and famously let director Ron Howard create the realistic spaceflight scenes for Apollo 13. But you might be surprised to find that, outside of the padding that lines their interior for when the occupants inevitably bump into the walls or ceiling, they aren’t quite as specialized as you might think.

In fact, you can achieve a similar result in a small private aircraft — assuming you’ve got the proper touch on the controls. Which is why [Chaz] has been working on an Android app that assists pilots in finding that sweet spot.

Target trajectory, credit: MikeRun

With his software running, the pilot first puts the plane into a climb, and then noses over and attempts to keep the indicator on the phone’s display green for as long as possible. It’s not easy, but in the video after the break you can see they’re able to pull it off for long enough to get things floating around the cockpit.

 

As [Chaz] explains, the app is basically a G-force indicator with some UI features that are designed to help the pilot keep the plane in the proper attitude to provide the sensation of weightlessness. It takes the values from the phone’s accelerometers, does the appropriate math, and changes the color of the display as the computed G-force approaches 0.

If the pilot is able to bring it under 0.1, the phone will play an audio cue. Though the fact that any loose objects that were in the cockpit will be floating around should also provide a pretty good indicator around this point.
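The math here is compact enough to sketch. This is our reconstruction of the core logic, not [Chaz]'s actual App Inventor blocks; the 0.1 g audio-cue threshold is from the write-up, while the amber band is our invention.

```python
import math

G = 9.81  # standard gravity, m/s^2

def g_force(ax, ay, az):
    """Net acceleration magnitude from the phone's three axes (m/s^2), in g."""
    return math.sqrt(ax * ax + ay * ay + az * az) / G

def indicator(ax, ay, az):
    """Colour for the display, plus whether to play the audio cue."""
    g = g_force(ax, ay, az)
    if g < 0.1:   # threshold from the write-up: near weightlessness, cue plays
        return "green", True
    if g < 0.3:   # our invented intermediate band
        return "amber", False
    return "red", False

# Level flight reads ~1 g (gravity on the vertical axis alone): red.
print(indicator(0.0, 0.0, 9.81))   # ('red', False)
# A well-flown parabola cancels nearly all of it: green, cue plays.
print(indicator(0.02, 0.01, 0.5))  # ('green', True)
```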

It doesn’t look like [Chaz] is ready to release the application yet, but since it was created with MIT’s App Inventor, the walk-through he provides along with the screenshots from the editor should technically be enough to create it should you feel so inclined — no pun intended.

Source: Developing An App For Reduced-Gravity Flying | Hackaday

The NY Times Lawsuit Against OpenAI Would Open Up The NY Times To All Sorts Of Lawsuits Should It Win, shows that if you feed it a URL it can regurgitate what’s on the first parts of that URL

This week the NY Times somehow broke the story of… well, the NY Times suing OpenAI and Microsoft. I wonder who tipped them off. Anyhoo, the lawsuit in many ways is similar to some of the over a dozen lawsuits filed by copyright holders against AI companies. We’ve written about how silly many of these lawsuits are, in that they appear to be written by people who don’t much understand copyright law. And, as we noted, even if courts actually decide in favor of the copyright holders, it’s not like it will turn into any major windfall. All it will do is create another corruptible collection point, while locking in only a few large AI companies who can afford to pay up.

I’ve seen some people arguing that the NY Times lawsuit is somehow “stronger” and more effective than the others, but I honestly don’t see that. Indeed, the NY Times itself seems to think its case is so similar to the ridiculously bad Authors Guild case, that it’s looking to combine the cases.

But while there are some unique aspects to the NY Times case, I’m not sure they are nearly as compelling as the NY Times and its supporters think they are. Indeed, I think if the Times actually wins its case, it would open the Times itself up to some fairly damning lawsuits, given its somewhat infamous journalistic practices regarding summarizing other people’s articles without credit. But, we’ll get there.

The Times, in typical NY Times fashion, presents this case as though the NY Times is the great defender of press freedom, taking this stand to stop the evil interlopers of AI.

Independent journalism is vital to our democracy. It is also increasingly rare and valuable. For more than 170 years, The Times has given the world deeply reported, expert, independent journalism. Times journalists go where the story is, often at great risk and cost, to inform the public about important and pressing issues. They bear witness to conflict and disasters, provide accountability for the use of power, and illuminate truths that would otherwise go unseen. Their essential work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support, as well as editors who ensure their journalism meets the highest standards of accuracy and fairness. This work has always been important. But within a damaged information ecosystem that is awash in unreliable content, The Times’s journalism provides a service that has grown even more valuable to the public by supplying trustworthy information, news analysis, and commentary

Defendants’ unlawful use of The Times’s work to create artificial intelligence products that compete with it threatens The Times’s ability to provide that service. Defendants’ generative artificial intelligence (“GenAI”) tools rely on large-language models (“LLMs”) that were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more. While Defendants engaged in widescale copying from many sources, they gave Times content particular emphasis when building their LLMs—revealing a preference that recognizes the value of those works. Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.

As the lawsuit makes clear, this isn’t some high and mighty fight for journalism. It’s a negotiating ploy. The Times admits that it has been trying to get OpenAI to cough up some cash for its training:

For months, The Times has attempted to reach a negotiated agreement with Defendants, in accordance with its history of working productively with large technology platforms to permit the use of its content in new digital products (including the news products developed by Google, Meta, and Apple). The Times’s goal during these negotiations was to ensure it received fair value for the use of its content, facilitate the continuation of a healthy news ecosystem, and help develop GenAI technology in a responsible way that benefits society and supports a well-informed public.

I’m guessing that OpenAI’s decision a few weeks back to pay off media giant Axel Springer to avoid one of these lawsuits, and the failure to negotiate a similar deal (at what is likely a much higher price), resulted in the Times moving forward with the lawsuit.

There are five or six whole pages of puffery about how amazing the NY Times thinks the NY Times is, followed by the laughably stupid claim that generative AI “threatens” the kind of journalism the NY Times produces.

Let me let you in on a little secret: if you think that generative AI can do serious journalism better than a massive organization with a huge number of reporters, then, um, you deserve to go out of business. For all the puffery about the amazing work of the NY Times, this seems to suggest that it can easily be replaced by an auto-complete machine.

In the end, though, the crux of this lawsuit is the same as all the others. It’s a false belief that reading something (whether by human or machine) somehow implicates copyright. This is false. If the courts (or the legislature) decide otherwise, it would upset pretty much all of the history of copyright and create some significant real world problems.

Part of the Times complaint is that OpenAI’s GPT LLM was trained in part with Common Crawl data. Common Crawl is an incredibly useful and important resource that is apparently now coming under attack. It has been building an open repository of the web, not unlike the Internet Archive, but with a focus on making it accessible to researchers and innovators. It’s a fantastic resource run by some great people (though the lawsuit here attacks them).

But, again, this is the nature of the internet. It’s why things like Google’s cache and the Internet Archive’s Wayback Machine are so important. These are archives of history that are incredibly important, and have historically been protected by fair use, which the Times is now threatening.

(Notably, just recently, the NY Times was able to get all of its articles excluded from Common Crawl. Otherwise I imagine that they would be a defendant in this case as well).

Either way, so much of the lawsuit is claiming that GPT learning from this data is infringement. And, as we’ve noted repeatedly, reading/processing data is not a right limited by copyright. We’ve already seen this in multiple lawsuits, but this rush of plaintiffs is hoping that maybe judges will be wowed by this newfangled “generative AI” technology into ignoring the basics of copyright law and pretending that there are now rights that simply do not exist.

Now, the one element that appears different in the Times’ lawsuit is that it has a bunch of exhibits that purport to prove how GPT regurgitates Times articles. Exhibit J is getting plenty of attention here, as the NY Times demonstrates how it was able to prompt ChatGPT in such a manner that it basically provided them with direct copies of NY Times articles.

In the complaint, they show this:

[Image from the complaint]

At first glance that might look damning. But it’s a lot less damning when you look at the actual prompt in Exhibit J and realize what happened, and how generative AI actually works.

What the Times did is prompt GPT-4 by (1) giving it the URL of the story and then (2) “prompting” it by giving it the headline of the article and the first seven and a half paragraphs of the article, and asking it to continue.

Here’s how the Times describes this:

Each example focuses on a single news article. Examples were produced by breaking the article into two parts. The first part of the article is given to GPT-4, and GPT-4 replies by writing its own version of the remainder of the article.

Here’s how it appears in Exhibit J (notably, the prompt was left out of the complaint itself):

[Image from Exhibit J]

If you actually understand how these systems work, the output looking very similar to the original NY Times piece is not so surprising. When you prompt a generative AI system like GPT, you’re giving it a bunch of parameters, which act as conditions and limits on its output. From those constraints, it’s trying to generate the most likely next part of the response. But, by providing it paragraphs upon paragraphs of these articles, the NY Times has effectively constrained GPT to the point that the most probable response is… very close to the NY Times’ original story.

In other words, by constraining GPT to effectively “recreate this article,” GPT has a very small data set to work off of, meaning that the highest likelihood outcome is going to sound remarkably like the original. If you were to create a much shorter prompt, or introduce further randomness into the process, you’d get a much more random output. But these kinds of prompts effectively tell GPT not to do anything BUT write the same article.
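To see why, here’s a deliberately tiny shell illustration (an assumption-laden toy, not how GPT works internally): once the prompt contains a long stretch of text that appears verbatim in the “training” corpus, the set of plausible continuations collapses to essentially one.

```shell
# Toy stand-in for next-token prediction: the "corpus" is one sentence,
# and the prompt supplies a long verbatim prefix from it.
corpus="the quick brown fox jumps over the lazy dog"
prefix="quick brown fox jumps over the"
# Look up what followed this exact prefix in the corpus. With a prefix
# this long, only one candidate continuation remains:
next=$(echo "$corpus" | grep -o "$prefix [a-z]*" | sed "s/$prefix //")
echo "$next"   # prints: lazy
```

A real model ranks many candidates by probability rather than doing a literal lookup, but the effect of seven and a half paragraphs of verbatim prompt is the same: almost everything except the original continuation gets ranked out.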

From there, though, the lawsuit gets dumber.

It shows that you can sorta get around the NY Times’ paywall in the most inefficient and unreliable way possible by asking ChatGPT to quote the first few paragraphs in one-paragraph chunks.

[Image from the complaint]

Of course, quoting individual paragraphs from a news article is almost certainly fair use. And, for what it’s worth, the Times itself admits that this process doesn’t actually return the full article, but a paraphrase of it.

And the lawsuit seems to suggest that merely summarizing articles is itself infringing:

[Image from the complaint]

That’s… all factual information summarizing the review? And the complaint shows that if you then ask for (again, paragraph-length) quotes, GPT will give you a few quotes from the article.

And, yes, the complaint literally argues that a generative AI tool can violate copyright when it “summarizes” an article.

The issue here is not so much how GPT is trained, but how the NY Times is constraining the output. That is unrelated to the question of whether or not the reading of these articles is fair use. The purpose of these LLMs is not to repeat the content that is scanned, but to figure out the most probable next token for a given prompt. When the Times constrains the prompts in such a way that the data set is basically one article and one article only… well… that’s what you get.

Elsewhere, the Times again complains about GPT returning factual information that is not subject to copyright law.

[Image from the complaint]

But, I mean, if you were to ask anyone the same question, “What does wirecutter recommend for The Best Kitchen Scale,” they’re likely to return you a similar result, and that’s not infringing. It’s a fact that that scale is the one that it recommends. The Times complains that people who do this prompt will avoid clicking on Wirecutter affiliate links, but… um… it has no right to that affiliate income.

I mean, I’ll admit right here that I often research products and look at Wirecutter (and other!) reviews before eventually shopping independently of that research. In other words, I will frequently buy products after reading the recommendations on Wirecutter, but without clicking on an affiliate link. Is the NY Times really trying to suggest that this violates its copyright? Because that’s crazy.

Meanwhile, it’s not clear if the NY Times is mad that GPT is accurately recommending stuff or if it’s just… mad. Because later in the complaint, the NY Times says it’s bad that sometimes GPT recommends the wrong product or makes up a paragraph.

So… the complaint is both that GPT reproduces things too accurately, AND not accurately enough. Which is it?

Anyway, the larger point is that if the NY Times wins, well… the NY Times might find itself on the receiving end of some lawsuits. The NY Times is somewhat infamous in the news world for using other journalists’ work as a starting point and building off of it (frequently without any credit at all). Sometimes this results in an eventual correction, but often it does not.

If the NY Times successfully argues that reading a third party article to help its reporters “learn” about the news before reporting their own version of it is copyright infringement, it might not like how that is turned around by tons of other news organizations against the NY Times. Because I don’t see how there’s any legitimate distinction between OpenAI scanning NY Times articles and NY Times reporters scanning other articles/books/research without first licensing those works as well.

Or, say, what happens if a source for a NY Times reporter provides them with some copyright-covered work (an article, a book, a photograph, who knows what) that the NY Times does not have a license for? Can the NY Times journalist then produce an article based on that material (along with other research, though much less than OpenAI used in training GPT)?

It seems like (and this happens all too often in the news industry) the NY Times is arguing that it’s okay for its journalists to do this kind of thing because it’s in the business of producing Important Journalism™ whereas anyone else doing the same thing is some damn interloper.

We see this with other copyright disputes and the media industry, or with the ridiculous fight over the hot news doctrine, in which news orgs claimed that they should be the only ones allowed to report on something for a while.

Similarly, I’ll note that even if the NY Times gets some money out of this, don’t expect the actual reporters to see any of it. Remember, this is the same NY Times that once tried to stiff freelance reporters by relicensing their articles to electronic databases without paying them. The Supreme Court didn’t like that. If the NY Times establishes that merely training AI on old articles is a licensable, copyright-impacting event, will it go back and pay those reporters a piece of whatever change they get? Or nah?

Source: The NY Times Lawsuit Against OpenAI Would Open Up The NY Times To All Sorts Of Lawsuits Should It Win | Techdirt

Two EV models powered by sodium-ion batteries roll off line in China

Two electric vehicle (EV) models powered by sodium-ion batteries have rolled off the production line in China, signaling that the new, lower-cost batteries are closer to being used on a large scale.

A model powered by sodium-ion batteries built by Farasis Energy in partnership with JMEV, an EV brand owned by Jiangling Motors Group, rolled off the assembly line on December 28, according to the battery maker.

The model, based on JMEV’s EV3, has a range of 251 km and is the first all-electric A00-class model powered by sodium-ion batteries to be built by Farasis Energy in collaboration with JMEV.

The JMEV EV3 is a compact, all-electric vehicle with a CLTC range of 301 km and a battery pack capacity of 31.15 kWh for its two lithium-ion battery versions. The starting prices for these two versions are RMB 62,800 ($8,840) and RMB 66,800, respectively.

The model’s sodium battery version starts at RMB 58,800, with a battery pack capacity of 21.4 kWh and a CLTC range of 251 km, according to its specification sheet.

Farasis Energy’s sodium-ion batteries currently in production have energy densities in the range of 140-160 Wh/kg, and the battery cells have passed tests including pin-prick, overcharging, and extrusion, according to the company.

Farasis Energy will launch the second generation of sodium-ion batteries in 2024 with an energy density of 160-180 Wh/kg, it said.

By 2026, the next generation of sodium-ion battery products will have an energy density of 180-200 Wh/kg.

On December 27, battery maker Hina Battery announced that a model powered by sodium-ion batteries, which it jointly built with Anhui Jianghuai Automobile Group Corp (JAC), rolled off the production line.

The model is a new variant of the Yiwei 3, the first model under JAC’s new Yiwei brand, and utilizes Hina Battery’s sodium-ion cylindrical cells.

(Image credit: Hina Battery)

Volume deliveries of the sodium-ion battery-equipped Yiwei model are expected to begin in January 2024, according to Hina Battery.

On February 23, Hina Battery unveiled three sodium-ion battery cell products and announced that it had entered into a partnership with JAC.

Hina Battery and Sehol — a joint venture brand between JAC and Volkswagen Anhui — would jointly build a test vehicle with sodium-ion batteries based on the latter’s Sehol E10X model, according to a statement in February.

The test vehicle’s battery pack has a capacity of 25 kWh and an energy density of 120 Wh/kg. The model has a range of 252 km and supports 3C to 4C fast charging. The battery pack uses cells with an energy density of 140 Wh/kg.

JAC launched its new brand Yiwei (钇为 in Chinese) on April 12 and made the brand’s first model, the Yiwei 3, available on June 16.

According to information released yesterday by Hina Battery, the two are working together to build a production vehicle powered by sodium-ion batteries based on the Yiwei 3.

Source: Two EV models powered by sodium-ion batteries roll off line in China – CnEVPost

Using Local AI On The Command Line To Rename Images (And More)

We all have a folder full of images whose filenames resemble line noise. How about renaming those images with the help of a local LLM (large language model) executable on the command line? All that and more is showcased on [Justine Tunney]’s bash one-liners for LLMs, a showcase aimed at giving folks ideas and guidance on using a local (and private) LLM to do actual, useful work.

This builds on the recent llamafile project, which turns LLMs into single-file executables. That not only makes them more portable and easier to distribute, but the executables can be called from the command line and write to standard output like any other UNIX tool. It’s also simpler to version control the embedded LLM weights (and therefore their behavior) when everything is part of the same file.

One such tool (the multi-modal LLaVA) is capable of interpreting image content. As an example, we can point it to a local image of the Jolly Wrencher logo using the following command:

llava-v1.5-7b-q4-main.llamafile --image logo.jpg --temp 0 -e -p '### User: The image has...\n### Assistant:'

Which produces the following response:

The image has a black background with a white skull and crossbones symbol.

With a different prompt (“What do you see?” instead of “The image has…”) the LLM even picks out the wrenches, but one can already see that the right pieces exist to do some useful work.

Check out [Justine]’s rename-pictures.sh script, which cleverly evaluates image filenames. If an image’s given filename already looks like readable English (also a job for a local LLM) the image is left alone. Otherwise, the picture is fed to an LLM whose output guides the generation of a new short and descriptive English filename in lowercase, with underscores for spaces.
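The normalization step can be sketched in a few lines of shell (a hedged sketch only; the real logic lives in [Justine]’s rename-pictures.sh, and the caption below is a stand-in for actual llamafile output):

```shell
# Turn an LLM-produced caption into a short, lowercase,
# underscore-separated filename.
caption="A black background with a white skull and crossbones symbol"
name=$(printf '%s' "$caption" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '_' \
  | sed -e 's/^_*//' -e 's/_*$//' \
  | cut -c1-60)
echo "${name}.jpg"
# prints: a_black_background_with_a_white_skull_and_crossbones_symbol.jpg
```

The `tr -cs 'a-z0-9' '_'` turns every run of non-alphanumeric characters into a single underscore, and `cut` caps the length so a chatty model can’t produce an unwieldy filename.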

What about the fact that LLM output isn’t entirely predictable? That’s easy to deal with. [Justine] suggests always calling these tools with the --temp 0 parameter. Setting the temperature to zero makes the model deterministic, ensuring that the same input always yields the same output.
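Conceptually, temperature zero turns sampling into argmax: always take the single highest-probability token. A toy shell illustration (the probabilities here are made up for the example):

```shell
# Hypothetical next-token probabilities from a model; at --temp 0 the
# sampler degenerates into "pick the top-ranked token", every time.
pick=$(printf '%s\n' '0.62 skull' '0.21 wrench' '0.17 logo' \
  | sort -rn | awk 'NR==1 {print $2}')
echo "$pick"   # prints: skull
```

With a higher temperature the lower-ranked tokens get a real chance of being chosen, which is where run-to-run variation comes from.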

There are more neat examples on the Bash One-Liners for LLMs page that demonstrate different ways to use a local LLM that lives in a single-file executable, so be sure to give it a look and see if you get any new ideas. After all, we have previously shown how automating tasks is almost always worth the time invested.

Source: Using Local AI On The Command Line To Rename Images (And More) | Hackaday

More useful would be to put this information into EXIF data, but it shouldn’t be too tough to tweak the command to do that instead.
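A hedged sketch of that tweak, assuming exiftool is installed and using a stand-in caption in place of real llamafile output (the sketch only prints the command it would run, rather than touching any file):

```shell
# Embed the LLM caption in the image's EXIF ImageDescription tag
# instead of renaming the file.
img="logo.jpg"
caption="a black background with a white skull and crossbones symbol"
cmd=$(printf 'exiftool -overwrite_original "-ImageDescription=%s" "%s"' \
  "$caption" "$img")
echo "$cmd"
```

In a real pipeline you would capture the llamafile output into `$caption` and run the exiftool command directly; `-overwrite_original` skips exiftool’s default backup copy.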

Novel helmet liner 30 times better at stopping concussions

[…]

Among sportspeople and military vets, traumatic brain injury (TBI) is one of the major causes of permanent disability and death. Injury statistics show that the majority of TBIs, of which concussion is a subtype, are associated with oblique impacts, which subject the brain to a combination of linear and rotational kinetic energy forces and cause shearing of the delicate brain tissue.

To improve their effectiveness, helmets worn by military personnel and sportspeople must employ a liner material that limits both. This is where researchers from the University of Wisconsin-Madison come in. Determined to prevent – or lessen the effect of – TBIs caused by knocks to the body and head, they’ve developed a new lightweight foam material for use as a helmet liner.

[…]

For the current study, Thevamaran built upon his previous research into vertically aligned carbon nanotube (VACNT) foams – carefully arranged layers of carbon cylinders one atom thick – and their exceptional shock-absorbing capabilities. Current helmets attempt to reduce rotational motion by allowing a sliding motion between the wearer’s head and the helmet during impact. However, the researchers say this movement doesn’t dissipate energy in shear and can jam when severely compressed following a blow. Instead, their novel foam doesn’t rely on sliding layers.

[Image: Oblique impacts, associated with the majority of TBIs, subject the brain to a combination of linear and rotational shear forces. Credit: Maheswaran et al.]

VACNT foam sidesteps this shortcoming via its unique deformation mechanism. Under compression, the VACNTs undergo collective sequentially progressive buckling, from increased compliance at low shear strain levels to a stiffening response at high strain levels. The formed compression buckles unfold completely, enabling the VACNT foam to accommodate large shear strains before returning to a near initial state when the load is removed.

The researchers found that at 25% precompression, the foam exhibited almost 30 times higher energy dissipation in shear – up to 50% shear strain – than polyurethane-based elastomeric foams of similar density.

[…]

The study was published in the journal Experimental Mechanics.

Source: University of Wisconsin-Madison


Source: Novel helmet liner 30 times better at stopping concussions