The EU Commission’s Alleged CSAM Regulation ‘Experts’ giving it free rein to spy on everyone can’t be found. OK then.


Everyone who wants client-side scanning to be a thing insists it’s a good idea with no potential downsides. The only hangup, they insist, is tech companies’ unwillingness to implement it. And by “implement,” I mean — in far too many cases — introducing deliberate (and exploitable!) weaknesses in end-to-end encryption.

End-to-end encryption only works if both ends are encrypted. Taking the encryption off one side to engage in content scanning makes it half of what it was. And if you get in the business of scanning users’ content for supposed child sexual abuse material (CSAM), governments may start asking you to “scan” for other stuff… like infringing content, terrorist stuff, people talking about crimes, stuff that contradicts the government’s narratives, things political rivals are saying. The list goes on and on.
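To make that concrete, here is a deliberately simplified, hypothetical sketch of what “client-side scanning” amounts to in practice (this is not any vendor’s real implementation, and real proposals use perceptual-hash databases rather than SHA-256): the client inspects the plaintext against a list it doesn’t control before encryption ever happens, so the confidentiality of the channel now depends on whoever maintains that list.

```python
import hashlib

# Hypothetical list of banned-content hashes pushed to the device by a third party.
# (Placeholder value only; real systems would use perceptual hashes of images.)
BLOCKLIST = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def send_message(plaintext: bytes, encrypt, report) -> bytes:
    """Illustrative only: scanning before encrypting means the 'end-to-end'
    guarantee stops at whoever controls BLOCKLIST."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        # Nothing in this step is technically limited to CSAM; the list could
        # just as easily contain hashes of leaked documents or banned speech.
        report(digest)
    return encrypt(plaintext)  # encryption only happens after inspection
```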

Multiple experts have pointed out how the anti-CSAM efforts preferred by the EU would not only not work, but also subject millions of innocent people to the whims of malicious hackers and malicious governments. Governments also made these same points, finally forcing the EU Commission to back down on its attempt to undermine encryption, if not (practically) outlaw it entirely.

The Commission has always claimed its anti-encryption, pro-client-side scanning stance is backed by sound advice given to it by the experts it has consulted. But when asked who was consulted, the EU Commission has refused to answer the question. This is from the Irish Council for Civil Liberties (ICCL), which asked the Commission a simple question, but — like the Superintendent Chalmers referenced in the original Techdirt headline — was summarily rejected.

In response to a request for documents pertaining to the decision-making behind the proposed CSAM regulation, the European Commission failed to disclose a list of companies who were consulted about the technical feasibility of detecting CSAM without undermining encryption. This list “clearly fell within the scope” of the Irish Council for Civil Liberties’ request. 

If you’re not familiar with the reference, we’ll get you up to speed.

22 Short Films About Springfield is an episode of “The Simpsons” that originally aired in 1996. One particular “film” has become an internet meme legend: the one dealing with Principal Seymour Skinner’s attempt to impress his boss (Superintendent Chalmers) with a home-cooked meal.

One thing leads to another (and by one thing to another, I mean a fire in the kitchen as Skinner attempts to portray fast-food burgers as “steamed hams” and not the “steamed clams” promised earlier). That culminates in this spectacular cover-up by Principal Skinner when the superintendent asks about the extremely apparent fire occurring in the kitchen:

Principal Skinner: Oh well, that was wonderful. A good time was had by all. I’m pooped.

Chalmers: Yes. I should be– Good Lord! What is happening in there?

Principal Skinner: Aurora borealis.

Chalmers: Uh- Aurora borealis. At this time of year, at this time of day, in this part of the country, localized entirely within your kitchen?

Principal Skinner: Yes.

Chalmers [meekly]: May I see it?

Principal Skinner: No.

That is what happened here. Everyone opposing the EU Commission’s CSAM (i.e., “chat control”) efforts trotted out their experts, making it clear who was saying what and what their relevant expertise was. The EU insisted it had its own battery of experts. The ICCL said: “May we see them?”

The EU Commission: No.

Not good enough, said the ICCL. But that’s what a rights advocate would be expected to say. What’s less expected is the EU Ombudsman declaring the ICCL had the right to see this particularly specific aurora borealis.

After the Commission acknowledged to the EU Ombudsman that it, in fact, had such a list, but failed to disclose its existence to Dr Kris Shrishak, the Ombudsman held the Commission’s behaviour constituted “maladministration”.  

The Ombudsman held: “[t]he Commission did not identify the list of experts as falling within the scope of the complainant’s request. This means that the complainant did not have the opportunity to challenge (the reasons for) the institution’s refusal to disclose the document. This constitutes maladministration.” 

As the report further notes, the only existing documentation of this supposed consultation with experts has been reduced to a single self-serving document issued by the EU Commission. Any objections or interjections were added or subtracted as the EU Commission preferred before it presented a “final” version that served its purposes. Any supporting documentation, including comments from participating stakeholders, was sent to the digital shredder.

As concerns the EUIF meetings, the Commission representatives explained that three online technical workshops took place in 2020. During the first workshop, academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion. After this workshop, a first draft of the ‘outcome document’ was produced, which summarises the input given orally by the participants and references a number of relevant documents. This first draft was shared with the participants via an online file sharing service and some participants provided written comments. Other participants commented orally on the first draft during the second workshop. Those contributions were then added to the final version of the ‘outcome document’ that was presented during the third and final workshop for the participants’ endorsement. This ‘outcome document’ is the only document that was produced in relation to the substance of these workshops. It was subsequently shared with the EUIF. One year later, it was used as supporting information to the impact assessment report.

In other words, the EU took what it liked and included it. The rest of it disappeared from the permanent record, supposedly because the EU Commission routinely purges any email communications more than two years old. This is obviously ridiculous in this context, considering this particular piece of legislation has been under discussion for far longer than that.

But, in the end, the EU Commission wins because it’s the larger bureaucracy. The ombudsman refused to issue a recommendation. Instead, it instructed the Commission to treat the ICCL’s request as “new” and perform another search for documents. “Swiftly.” Great, as far as that goes. But it doesn’t go far. The ombudsman also said it believes the EU Commission’s claim that only its version of the EUIF report survived the periodic document cull.

In the end, all that survives is this: the EU consulted with affected entities. It asked them to comment on the proposal. It folded those comments into its presentation. It likely presented only comments that supported its efforts. Dissenting opinions were auto-culled by EU Commission email protocols. It never sought further input, despite having passed the two-year mark without having converted the proposal into law. All that’s left, the ombudsman says, is likely a one-sided version of the Commission’s proposal. And if the ICCL doesn’t like it, well… it will have to find some other way to argue with the “experts” the Commission either ignored or auto-deleted. The government wins, even without winning arguments. Go figure.

Source: Steamed Hams, Except It’s The EU Commission’s Alleged CSAM Regulation ‘Experts’ | Techdirt

WhatsApp chats backed up to Google Drive will soon take up storage space

You may want to check your Google account storage situation if you back up your WhatsApp conversations to Drive on Android. In 2018, WhatsApp and Google announced that you could save your WhatsApp chat history to Drive without it counting towards your storage quota. But starting in December 2023, backing up the messaging app to Drive will count towards your Google account cloud storage space if you’re a WhatsApp beta user. If you don’t use the app’s beta version, you won’t feel the change in policy until next year, when it “gradually” makes its way to all Android devices.

[…]

Google has linked to its storage management tools in its post to make it easier to remove large files or photos you no longer need. You can also delete items from within WhatsApp, so they’ll no longer be included in your next backup. Of course, you also have the option to purchase extra storage with Google One, which will set you back at least $2 a month for 100GB. The company promises to provide eligible users with “limited, one-time Google One promotions” soon, though, so it may be best to wait for those before getting a subscription. Take note that this change will only affect you if you back up your chat history using your personal account. If you have a Workspace account through your job or another organization, you don’t have to worry about WhatsApp taking up a chunk of your cloud storage space.

Source: WhatsApp chats backed up to Google Drive will soon take up storage space

Researchers printed a robotic hand with bones, ligaments and tendons for the first time

Researchers at the Zurich-based ETH public university, along with a US-based startup called Inkbit, have done the impossible. They’ve printed a robot hand complete with bones, ligaments and tendons for the very first time, representing a major leap forward in 3D printing technology. It’s worth noting that the various parts of the hand were printed simultaneously, and not cobbled together after the fact, as described in a study published in the journal Nature.

Each of the robotic hand’s various parts was made from different polymers of varying softness and rigidity, using a new laser-scanning technique that lets 3D printers create “special plastics with elastic qualities” all in one go. This obviously opens up new possibilities in the fast-moving field of prosthetics, but also in any field that requires the production of soft robotic structures.

Basically, the researchers at Inkbit developed a method to 3D print slow-curing plastics, whereas the technology was previously reserved for fast-curing plastics. This hybrid printing method presents all kinds of advantages when compared to standard fast-cure projects, such as increased durability and enhanced elastic properties. The tech also allows us to mimic nature more accurately, as seen in the aforementioned robotic hand.

“Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” ETH Zurich robotics professor Robert Katzschmann writes in the study.

[Image: A robot dog or a pulley or something. Credit: ETH Zurich/Thomas Buchner]

This advancement still prints layer-by-layer, but an integrated scanner constantly checks the surface for irregularities before telling the system to move on to the next material type. Additionally, the extruder and scraper have been updated to allow for the use of slow-curing polymers. The stiffness can be fine-tuned for creating unique objects that suit various industries. Making human-like appendages is one use case, but so is manufacturing objects that soak up noise and vibrations.
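For a rough sense of what that feedback loop looks like in software, here is a minimal sketch (the function names and structure are assumptions for illustration, not Inkbit’s actual controller): each slice is deposited, the surface is re-scanned, and whatever the slow-curing material did wrong is compensated on the next pass rather than assumed away.

```python
import numpy as np

def print_layers(layers, deposit, scan_surface):
    """Hypothetical vision-in-the-loop jetting controller (illustrative sketch).

    layers: iterable of (material_id, target_heights) per slice, heights in mm.
    deposit(material_id, amounts): jets the requested per-voxel material.
    scan_surface(): returns the measured height map after a pass.
    """
    surface = scan_surface()                  # baseline scan of the build plate
    for material_id, target in layers:
        # Deposit only what is missing relative to this slice's target, so
        # irregularities left by the previous layer are corrected instead of
        # assuming the surface came out perfectly flat.
        deposit(material_id, np.clip(target - surface, 0.0, None))
        surface = scan_surface()              # re-check before the next material
```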

MIT-affiliated startup Inkbit helped develop this technology and has already begun thinking about how to make money off of it. The company will soon start to sell these newly-made printers to manufacturers but will also sell complex 3D-printed objects that make use of the technology to smaller entities.

Source: Researchers printed a robotic hand with bones, ligaments and tendons for the first time

Google is testing community-sourced notes for search results

Google is experimenting with a feature that would allow people to add their own notes to search results for anyone to see. In theory, this would make results more helpful, providing a bit of human perspective — like feedback on recipe links or tips relating to travel queries — so people can better find the information that’s relevant to them. Notes are available now as an opt-in feature in Google’s Search Labs.

Search Labs is where Google tests new features that may or may not eventually make it to its flagship search engine. For those who are enrolled and have opted in for the Notes experiment, a Notes button will appear in Search and Discover, and tapping that will pull up all the insights other people have shared about a given article. You can also add your own, and dress it up with stickers, photos and, down the line (for US users only), AI-generated images.

[Image: A Note on a recipe from Google Search. Credit: Google]

While community-sourced notes sound a bit like a recipe for disaster in an age of rampant misinformation and trolling, especially with the inclusion of AI imagery, Google says it will use “a combination of algorithmic protections and human moderation to make sure notes are as safe, helpful and relevant as possible, and to protect against harmful or abusive content.” The company is also looking into ways to let site owners add notes to their own pages.

It’s still just a test, and users will have the opportunity to submit feedback based on their experiences with Notes. The experimental feature has started rolling out for Search Labs on Android and iOS in the US and India.

Source: Google is testing community-sourced notes for search results

Researchers use magnetic fields for non-invasive blood glucose monitoring

Synex Medical, a Toronto-based biotech research firm backed by Sam Altman (the CEO of OpenAI), has developed a tool that can measure your blood glucose levels without a finger prick. It uses a combination of low-field magnets and low-frequency radio waves to directly measure blood sugar levels non-invasively when a user inserts a finger into the device.

The tool uses magnetic resonance spectroscopy (MRS), which is similar to an MRI. Jamie Near, an Associate Professor at the University of Toronto who specializes in MRS research, told Engadget that, “[an] MRI uses magnetic fields to make images of the distribution of hydrogen protons in water that is abundant in our body tissues. In MRS, the same basic principles are used to detect other chemicals that contain hydrogen.” When a user’s fingertip is placed inside the magnetic field, the frequency of a specific molecule, in this case glucose, is measured in parts per million. While the focus was on glucose for this project, MRS could be used to measure other metabolites, including lactate, ketones and amino acids, according to Synex.
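For a rough sense of what “measured in parts per million” means here: spectroscopy reports a molecule’s resonance as a chemical shift relative to a reference frequency, which is how a glucose peak can be separated from the much larger water signal. A toy calculation with made-up numbers (the 0.1 T field strength and 5 Hz offset below are illustrative assumptions, not Synex’s figures):

```python
def chemical_shift_ppm(sample_hz: float, reference_hz: float) -> float:
    """Chemical shift in parts per million relative to a reference resonance."""
    return (sample_hz - reference_hz) / reference_hz * 1e6

# At a hypothetical 0.1 T low-field magnet, protons resonate near 4.258 MHz.
# A peak offset of ~5 Hz from the water line is then roughly 1.2 ppm, which is
# the general neighbourhood where glucose protons sit relative to water in
# conventional MRS.
print(chemical_shift_ppm(4_258_005.0, 4_258_000.0))  # ≈ 1.17 ppm
```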

[…]

“MRI machines can fit an entire human body and have been used to target molecule concentrations in the brain through localized spectroscopy,” he explained. “Synex has shrunk this technology to measure concentrations in a finger. I have reviewed their white paper and seen the instrument work.” Simpson said Synex’s ability to retrofit MRS technology into a small box is an engineering feat.

[…]

But there is competition in the space for no-prick diagnostic tools. Know Labs is trying to get approval for a portable glucose monitor that relies on a custom-made Bio-RFID sensing technology, which uses radio waves to detect blood glucose levels in the palm of your hand. When the Know Labs device was tested against a Dexcom G6 continuous glucose monitor in a study, readings of blood glucose levels using its palm sensor technology were “within threshold” only 46 percent of the time. While the readings are technically in accordance with FDA accuracy limits for a new blood glucose monitor, Know Labs is still working out kinks through scientific research before it can begin FDA clinical trials.
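For context on that “46 percent” figure, accuracy studies like this typically report the fraction of paired readings that fall within an agreement band of the reference device. A minimal sketch of that arithmetic (the ±20%/±20 mg/dL band and the readings below are assumptions for illustration, not Know Labs’ published criteria or data):

```python
def percent_within_threshold(test, reference, pct=0.20, abs_mgdl=20.0) -> float:
    """Share of paired readings within ±pct or ±abs_mgdl of the reference value."""
    hits = sum(abs(t - r) <= max(pct * r, abs_mgdl) for t, r in zip(test, reference))
    return 100.0 * hits / len(reference)

# Hypothetical paired readings (mg/dL): candidate sensor vs. a CGM reference.
sensor = [110, 152, 95, 201, 78, 130]
cgm    = [118, 139, 120, 176, 90, 128]
print(percent_within_threshold(sensor, cgm))  # ≈ 83.3 on this toy data
```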

Another start-up, German company DiaMonTech, is currently developing a pocket-sized diagnostic device that is still being tested and fine-tuned to measure glucose through “photothermal detection.” It uses mid-infrared lasers that essentially scan the tissue fluid at the fingertip to detect glucose molecules. CNBC and Bloomberg reported that even Apple has been “quietly developing” a sensor that can check your blood sugar levels through its wearables, though the company has never confirmed it. A scientific director at Synex, Mohana Ray, told Engadget that the company would eventually like to develop a wearable, but that further miniaturization is needed before it can bring a commercial product to market.

[…]

Source: Researchers use magnetic fields for non-invasive blood glucose monitoring

Three thousand years’ worth of carbon monoxide records show positive impact of global intervention in the 1980s

An international team of scientists has reconstructed a historic record of the atmospheric trace gas carbon monoxide by measuring air in polar ice and air collected at an Antarctic research station.


The team, led by the French National Centre for Scientific Research (CNRS) and Australia’s national science agency, CSIRO, assembled the first complete record of carbon monoxide concentrations in the southern hemisphere, based on measurements of air.

The findings are published in the journal Climate of the Past.

The record spans the last three millennia. CSIRO atmospheric scientist Dr. David Etheridge said that the record provides a rare positive story in the context of climate change.

“Atmospheric carbon monoxide started climbing from its natural background level around the time of the industrial revolution, accelerating in the mid-1900s and peaking in the early-mid 1980s,” Dr. Etheridge said.

“The good news is that levels of the trace gas are now stable or even trending down and have been since the late 1980s—coinciding with the introduction of catalytic converters in cars.”

Carbon monoxide is a reactive gas that has important indirect effects on climate. It reacts with hydroxyl (OH) radicals in the atmosphere, reducing their abundance. Hydroxyl acts as a natural “detergent” for the removal of other gases contributing to climate change, including methane. Carbon monoxide also influences the levels of ozone in the lower atmosphere. Ozone is a greenhouse gas.
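The indirect effect can be made concrete with a toy steady-state budget: hydroxyl is produced at a roughly constant rate and consumed by reactions with carbon monoxide, methane and everything else, so adding CO lowers the OH concentration and stretches methane’s lifetime. The numbers below are round, order-of-magnitude illustrations, not values from the paper.

```python
# Toy steady-state OH "detergent" budget (illustrative, order-of-magnitude numbers).
K_CO_OH  = 2.0e-13   # CO + OH rate constant, cm^3 molecule^-1 s^-1 (approx.)
K_CH4_OH = 6.0e-15   # CH4 + OH rate constant, cm^3 molecule^-1 s^-1 (approx.)
P_OH     = 1.3e6     # assumed OH production rate, molecules cm^-3 s^-1
CH4      = 4.5e13    # assumed methane number density, molecules cm^-3
L_OTHER  = 0.5       # lumped OH loss to all other sinks, s^-1 (assumed)

def oh_and_ch4_lifetime(co_molec_cm3):
    """Steady-state OH and the resulting methane lifetime for a given CO level."""
    oh = P_OH / (K_CO_OH * co_molec_cm3 + K_CH4_OH * CH4 + L_OTHER)
    tau_years = 1.0 / (K_CH4_OH * oh) / (3600 * 24 * 365)
    return oh, tau_years

for label, co in [("lower CO", 1.2e12), ("doubled CO", 2.4e12)]:
    oh, tau = oh_and_ch4_lifetime(co)
    print(f"{label}: OH ≈ {oh:.2e} cm^-3, CH4 lifetime ≈ {tau:.1f} yr")
```

More CO means less OH, which means methane hangs around longer; that is the sense in which the post-1980s CO decline is good news for climate.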

The authors have high confidence that a major cause of the late-1980s decline was improved combustion technologies, including the introduction of catalytic converters, an exhaust-system device used in vehicles.

“The stabilization of carbon monoxide concentrations since the 1980s is a fantastic example of the role that science and technology can play in helping us understand a problem and help address it,” Dr. Etheridge said.

[…]

“Because carbon monoxide is a reactive gas, it is difficult to measure long-term trends because it is unstable in many air sample containers. Cold, clean polar ice, however, preserves carbon monoxide concentrations for millennia,” Dr. Etheridge said.

The CO data will be used to improve Earth systems models. This will primarily enable scientists to understand the effects that future emissions of CO and other gases (such as hydrogen) will have on pollution levels and climate as the global energy mix changes into the future.

More information: Xavier Faïn et al, Southern Hemisphere atmospheric history of carbon monoxide over the late Holocene reconstructed from multiple Antarctic ice archives, Climate of the Past (2023). DOI: 10.5194/cp-19-2287-2023

Source: Three thousand years’ worth of carbon monoxide records show positive impact of global intervention in the 1980s