Aging (for men) – what nobody told you: pee slippers

“The 100-year-old man set off in his pee-slippers (so called because men of an advanced age rarely pee farther than their shoes),”

― Jonas Jonasson, The 100-Year-Old Man Who Climbed Out the Window and Disappeared

Guys, as you get older your bladder power goes down. This has consequences: you don’t pee very far, and you don’t empty out fully after pissing, which leads to drippage in your underwear. You now wake up once, twice, three times a night to go to the bathroom. If you look this stuff up, chances are you’ll find seriously scary conditions such as “Urinary Retention”, “Urinary Incontinence”, “Overflow Incontinence”, “Bladder Outlet Obstruction”, “Benign prostatic hyperplasia (BPH)”, “blood and/or cloudy urine”, “Nocturia” and all kinds of other nasties. This is not about that. This is about some of the better tips I have found for handling this dripping life we now find ourselves in.

TL;DR

You get old and your muscles get weaker: you can hold less, and your piss tube gets blocked. You have to manage your pissing, so drink less before you travel or need to be somewhere, especially caffeinated drinks. Double void, lean forward and whistle to empty your bladder. Do pelvic floor exercises (Kegels) for more control. After you piss, quickly milk your piss tube (the urethra) behind your balls to empty it out. Put your legs up before sleeping and try to go to bed at the same time every night.

So what exactly happens to you as you get older?

As you age, the whole system around your piss (the kidneys, the ureters [the tubes from kidney to bladder], the bladder, the urethra [the piss tube], the prostate) changes naturally. The kidneys become lighter and can’t filter as much blood, and the arteries supplying them narrow. In women the urethra shortens and becomes thinner, which increases the risk of urinary problems; in men its length doesn’t change, but the prostate gland can grow and block it. All your life your bladder muscles contract without you actually needing to pee, but these contractions are suppressed by your spinal cord and brain. As you get older that suppression weakens, leaving more urine in the bladder after you’ve taken a piss and making you need to go more often. On top of that, the bladder muscles themselves weaken, and the bladder wall becomes less elastic and so less able to hold much pee.

Further reading: Effects of Aging on the Urinary Tract – MSD Manual 2022 / Aging changes in the kidneys and bladder – Medline Plus (National Library of Medicine) 2022 / The Aging Bladder – National Library of Medicine, National Center for Biotechnology Information (2004)

Some actually useful tips for people who are just aging and not seriously ill

Medication use: Alter use of medications that could worsen urinary symptoms.

  • Talk to your doctor or pharmacist about prescription or over-the-counter medications that may be contributing to your BPH symptoms. Antihistamines and decongestants can cause problems for some.
  • If you use medications that could make you urinate more, don’t take them right before driving, traveling, attending an event, or going to bed.
  • Don’t rely on ineffective dietary supplements. Saw palmetto and other herbal supplements have failed rigorous scientific testing so far.

Fluid restriction: Change how much fluid you drink — and when — to prevent bothersome bathroom visits.

  • Don’t drink liquids before driving, traveling, or attending events where finding a bathroom quickly could be difficult.
  • Avoid drinking caffeinated or alcoholic beverages after dinner or within two hours of your bedtime.

Bladder habits: Change the timing and manner in which you empty your bladder to reduce symptoms or make them less disruptive.

  • Don’t hold it in; empty your bladder when you first get the urge.
  • When you are out in public, go to the bathroom and try to urinate when you get the chance, even if you don’t feel a need right then.
  • Take your time when urinating so you empty your bladder as much as possible.
  • Double void: After each time you urinate, try again right away.
  • On long airplane flights, avoid drinking alcohol, and try to urinate every 60 to 90 minutes.

Try these techniques to relieve common urinary symptoms without medication

  • Timed voids. Urinate at least every three to four hours. Never hold the urine.
  • Double void. Before leaving the restroom, try to empty your bladder a second time. Focus on relaxing the muscles of the pelvic floor. You may try running your hands under warm water before your second void to trigger a relaxation response.
  • Drink plenty of fluids. Fluids keep the urinary tract hydrated and clean.
  • Have a bowel movement every day. The rectum is just behind the bladder. If it is full, it can prevent the bladder from functioning properly. Increase your fruit, fiber, water and walking until you have soft bowel movements and don’t have to strain. You may add over-the-counter medications like Senna (Senokot, SennaGen), Colace (docusate) or Dulcolax (bisacodyl).
  • Comfort and privacy are necessary to empty completely. Give yourself time to go.
  • Leaning forward (and rocking) may promote urination. After you have finished passing urine, squeeze the pelvic floor to try to completely empty.
  • The sound of water can prompt the bladder muscle to contract, but take care not to promote bladder muscle instability with overuse of this technique.
  • Tapping over the bladder may assist in triggering a contraction in some people.
  • Stroking or tickling the lower back may stimulate urination and has been reported to be helpful in some patients.
  • Whistling provides a sustained outward breath with a gentle increase in pressure in the abdomen that may help with emptying your bladder.
  • General relaxation techniques can help people who are tense and anxious about their condition.
Techniques for Complete Bladder Emptying – Urology Group Virginia

Pelvic floor exercises

The pelvic floor consists of layers of muscles and ligaments that stretch like a hammock from the pubic bone at the front to the tip of the backbone, and that help to support your bladder and bowel. Pelvic floor exercises can be done in different positions:

  • Standing: stand with your feet apart and tighten your pelvic floor muscles as if you were trying to avoid breaking wind. If you look in a mirror you should see the base of your penis move nearer to your abdomen and your testicles rise. Hold the contraction as strongly as you can without holding your breath or tensing your buttocks.
  • Sitting: sit on a chair with your knees apart and tighten your pelvic floor muscles as if you were trying to avoid breaking wind. Hold the contraction as strongly as you can without holding your breath or tensing your buttocks.
  • Lying: lie on your back with your knees bent and your legs apart. Tighten your pelvic floor muscles as if you were trying to avoid breaking wind and hold the contraction as strongly as you can without holding your breath or tensing your buttocks.
  • Walking: tighten your pelvic floor muscles as you walk.
  • After urinating: once you have emptied your bladder, tighten your pelvic floor muscles as strongly as you can to avoid an after-dribble.

In each position, perform the exercise three times (as strong as possible) in the morning, holding each contraction for up to 10 seconds, and three times (as strong as possible) in the evening.

Post-micturition dribble exercise (dripping, drippage, dribbling after peeing)

  • After passing urine, wait a few seconds to allow the bladder to empty.
  • Place your fingers behind the scrotum and apply gentle pressure to straighten out the urethra.
  • Continue this while gently lifting and stroking to encourage the trapped urine to flow out.
  • Before leaving the toilet, repeat the technique twice to ensure that the urethra is completely empty.

This technique can easily be used at home. In public toilets it can be done discreetly, with a hand inside a trouser pocket. It only takes a few seconds and avoids the problem of stained trousers. Pelvic floor exercises for men can also improve this problem, as they improve the tone of your muscles.

Male pelvic floor exercises and post micturition dribble – NHS Western Isles 2022 (PDF)

Many men dribble urine shortly after they have finished using the toilet and the bladder feels empty. Even waiting a moment and shaking the penis before zipping up won’t stop it. The medical term for this is post-micturition dribbling. It’s common in older men because the muscles surrounding the urethra — the long tube in the penis that allows urine to pass out of the body — don’t squeeze as hard as they once did. This leaves a small pool of urine at a dip in the urethra behind the base of the penis. In less than a minute after finishing, this extra urine dribbles out.

Here’s a simple technique that should help. Right after your urine stream stops, “milk out” the last few drops of urine. Using the fingertips of one hand, begin about an inch behind your scrotum. Gently press upward. Keep applying this pressure as you move your fingers toward the base of the penis under the scrotum. Repeat once or twice. This should move the pooled urine into the penis. You can then shake out the last few drops. With practice, you should be able to do this quickly.

What can I do about urinary dribbling? – Men’s Health 2022

Kegel Exercises

Kegel exercises, also known as pelvic floor muscle exercises, are the easiest way for you to control urinary incontinence and stress incontinence, as they can be easily added to your daily routine.

To perform a Kegel exercise, you just need to squeeze your pelvic floor muscles. These are the same muscles you would use to stop the flow of urine.

Simply squeeze these muscles for 3 seconds and then relax. The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) suggests building up to 10-15 repetitions, 3 times a day. You can do these pelvic floor exercises while sitting or lying down.

Bladder Training

This is an effective way to overcome overactive bladder symptoms and gain more bladder control. The exercise trains your bladder to hold more urine before needing to empty it.

First, you need to determine your baseline. Make a diary of how often you need to go to the bathroom throughout the day. Then try to go to the bathroom less often, holding in the urine longer between visits. It may feel uncomfortable, but doing this will help you gain more bladder control.

Bladder Exercises — How to Strengthen Bladder Muscles – Urology of Greater Atlanta
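The diary step of bladder training above is just bookkeeping, so here is a toy sketch of it in Python. The function name and the 15-minute stretch increment are my own illustration (a common suggestion in bladder-training guides, not medical advice); assume you log the clock time of each bathroom visit in order:

```python
from datetime import datetime

def bladder_diary_target(visit_times: list[str], stretch_minutes: int = 15) -> tuple[float, float]:
    """Given clock times like "08:10" in chronological order, return
    (average interval, suggested target interval) in minutes."""
    times = [datetime.strptime(t, "%H:%M") for t in visit_times]
    # Minutes between each consecutive pair of visits.
    gaps = [
        (later - earlier).total_seconds() / 60
        for earlier, later in zip(times, times[1:])
    ]
    baseline = sum(gaps) / len(gaps)
    # Try to hold a little longer than your current average.
    return baseline, baseline + stretch_minutes
```

For example, visits at 08:00, 10:00 and 12:00 give a two-hour baseline and a target of holding a bit longer between visits.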

How to stop pissing in the middle of the night

  • Limit liquids before bedtime: Avoid drinking water or other beverages at night to reduce the need to wake up to urinate.
  • Reduce caffeine and alcohol intake: Caffeine can trigger the bladder to become overactive and produce too much urine. Reduce your intake of caffeinated and alcoholic beverages later in the afternoon and evening.
  • Talk to your doctor about when to take medications: Some medications, such as diuretics, can increase nighttime urination. Ask your doctor about the ideal time to take medications so they don’t interfere with your sleep.
  • Strengthen your pelvic floor: Doctors recommend pelvic floor muscle exercises to help strengthen key muscles and control your urinary symptoms. 
  • Elevate or compress your legs: Some research has shown that you can reduce fluid buildup that leads to urination by elevating your legs or using compression socks before bedtime.
  • Practice good sleep hygiene: Healthy sleep hygiene can help you get better rest. Doctors recommend relaxing before bed, going to bed at the same time every night, and making sure your sleep environment is quiet, dark, and comfortable.
Frequent Urination at Night (Nocturia) – Sleep doctor 2023

Ok fellas, so hopefully we’ll now be dribbling into our pants a bit less. If you have any tips to improve this guide, I look forward to hearing from you!

Next we will be looking at sleeping issues. This is a subject that seems to have some kind of taboo on it, but once you raise it, you realise that loads of people suffer from them.

Sarah Silverman’s AI Case Isn’t Going Very Well Either

Just a few weeks ago Judge William Orrick massively trimmed back the first big lawsuit filed against generative AI companies for training their models on copyright-covered materials. Most of the case was dismissed, and what bits remain may not last much longer. And now, it appears that Judge Vince Chhabria (who has been very good on past copyright cases) seems poised to do the same.

This is the high profile case brought by Sarah Silverman and some other authors, because some of the training materials used by OpenAI and Meta included their works. As we noted at the time, that doesn’t make it copyright infringing, and it appears the judge recognizes the large hill Silverman and the other authors have to climb here:

U.S. District Judge Vince Chhabria said at a hearing that he would grant Meta’s motion to dismiss the authors’ allegations that text generated by Llama infringes their copyrights. Chhabria also indicated that he would give the authors permission to amend most of their claims.

Meta has not yet challenged the authors’ central claim in the case that it violated their rights by using their books as part of the data used to train Llama.

“I understand your core theory,” Chhabria told attorneys for the authors. “Your remaining theories of liability I don’t understand even a little bit.”

Chhabria (who you may recall from the time he quashed the ridiculous copyright subpoena that tried to abuse copyright law to expose whoever exposed a billionaire’s mistress) seems rightly skeptical that just because ChatGPT can give you a summary of Silverman’s book that it’s somehow infringing:

“When I make a query of Llama, I’m not asking for a copy of Sarah Silverman’s book – I’m not even asking for an excerpt,” Chhabria said.

The authors also argued that Llama itself is an infringing work. Chhabria said the theory “would have to mean that if you put the Llama language model next to Sarah Silverman’s book, you would say they’re similar.”

“That makes my head explode when I try to understand that,” Chhabria said.

It’s good to see careful judges like Chhabria and Orrick getting into the details here. Of course, with so many of these lawsuits being filed, I’m still worried that some judge is going to make a mess of things, but we’ll see what happens.

Source: Sarah Silverman’s AI Case Isn’t Going Very Well Either | Techdirt

“Make It Real” AI prototype wows UI devs by turning drawings into working software

A collaborative whiteboard app maker called “tldraw” made waves online by releasing a prototype of a feature called “Make It Real” that lets users draw an image of software and bring it to life using AI. The feature uses OpenAI’s GPT-4V API to interpret a vector drawing and turn it into functioning Tailwind CSS and JavaScript web code that can replicate user interfaces or even create simple implementations of games like Breakout.

“I think I need to go lie down,” posted designer Kevin Cannon at the start of a viral X thread that featured the creation of functioning sliders that rotate objects on screen, an interface for changing object colors, and a working game of tic-tac-toe. Soon, others followed with demonstrations of drawing a clone of Breakout, creating a working dial clock that ticks, drawing the snake game, making a Pong game, interpreting a visual state chart, and much more.

Users can experiment with a live demo of Make It Real online. However, running it requires providing an API key from OpenAI, which is a security risk. If others intercept your API key, they could use it to rack up a very large bill in your name (OpenAI charges by the amount of data moving into and out of its API). Those technically inclined can run the code locally, but it will still require OpenAI API access.

Tldraw, developed by Steve Ruiz in London, is an open source collaborative whiteboard tool. It offers a basic infinite canvas for drawing, text, and media without requiring a login. Launched in 2021, the project received $2.7 million in seed funding and is supported by GitHub sponsors. When the GPT-4V API launched recently, Ruiz integrated a design prototype called “draw-a-ui”, created by Sawyer Hood, to bring the AI-powered functionality into tldraw.

GPT-4V is a version of OpenAI’s large language model that can interpret visual images and use them as prompts. As AI expert Simon Willison explains on X, Make It Real works by “generating a base64 encoded PNG of the drawn components, then passing that to GPT-4 Vision” with a system prompt and instructions to turn the image into a single HTML file using Tailwind. In fact, here is the full system prompt that tells GPT-4V how to handle the inputs and turn them into functioning code:

const systemPrompt = 'You are an expert web developer who specializes in tailwind css.
A user will provide you with a low-fidelity wireframe of an application.
You will return a single html file that uses HTML, tailwind css, and JavaScript to create a high fidelity website.
Include any extra CSS and JavaScript in the html file.
If you have any images, load them from Unsplash or use solid colored rectangles.
The user will provide you with notes in blue or red text, arrows, or drawings.
The user may also include images of other websites as style references. Transfer the styles as best as you can, matching fonts / colors / layouts.
They may also provide you with the html of a previous design that they want you to iterate from.
Carry out any changes they request from you.
In the wireframe, the previous design's html will appear as a white rectangle.
Use creative license to make the application more fleshed out.
Use JavaScript modules and unpkg to import any necessary dependencies.'
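Based on Willison’s description, the request this produces can be sketched in Python. This is an illustration, not tldraw’s actual code: the function name, the user-message text, and the model string are assumptions, and the sketch only builds the chat-completion payload rather than sending it:

```python
import base64

def build_make_it_real_request(png_bytes: bytes, system_prompt: str) -> dict:
    """Build a GPT-4 Vision chat request from a PNG of the drawn wireframe.

    The PNG is base64-encoded and passed as a data URL alongside the
    system prompt shown above.
    """
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # assumed model name
        "max_tokens": 4096,
        "messages": [
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Turn this wireframe into a single HTML file."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            },
        ],
    }
```

A payload like this would then be sent with the official OpenAI client (e.g. `client.chat.completions.create(**payload)`), which is where the user-supplied API key comes in.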

As more people experiment with GPT-4V and combine it with other frameworks, we’ll likely see more novel applications of OpenAI’s vision-parsing technology emerging in the weeks ahead. Also on Wednesday, a developer used the GPT-4V API to create a live, real-time narration of a video feed by a fake AI-generated David Attenborough voice, which we have covered separately.

For now, it feels like we’ve been given a preview of a possible future mode of software development—or interface design, at the very least—where creating a working prototype is as simple as making a visual mock-up and having an AI model do the rest.

Source: “Make It Real” AI prototype wows devs by turning drawings into working software | Ars Technica

The EU DMA will finally free Windows users from Bing (but not Edge) and allow 3rd parties into the widgets

Microsoft will soon let Windows 11 users in the European Economic Area (EEA) disable its Bing web search, remove Microsoft Edge, and even add custom web search providers — including Google if it’s willing to build one — into its Windows Search interface.

All of these Windows 11 changes are part of key tweaks that Microsoft has to make to its operating system to comply with the European Commission’s Digital Markets Act, which comes into effect in March 2024. Microsoft will be required to meet a slew of interoperability and competition rules, including allowing users “to easily un-install pre-installed apps or change default settings on operating systems, virtual assistants, or web browsers that steer them to the products and services of the gatekeeper and provide choice screens for key services.”

Alongside clearly marking which apps are system components in Windows 11, Microsoft is also responding by adding the ability to uninstall the following apps:

  • Camera
  • Cortana
  • Web Search from Microsoft Bing, in the EEA
  • Microsoft Edge, in the EEA
  • Photos

Only Windows 11 users in the EEA will be able to fully remove Microsoft Edge and the Bing-powered web search from Windows Search. Microsoft could easily extend this to all Windows 11 users, but it’s limiting this extra functionality to EEA markets to comply with the rules. “Windows uses the region chosen by the customer during device setup to identify if the PC is in the EEA,” explains Microsoft in a blog post. “Once chosen in device setup, the region used for DMA compliance can only be changed by resetting the PC.”

In EEA markets — which includes EU countries and also Iceland, Liechtenstein, and Norway — Windows 11 users will also get access to new interoperability features for feeds in the Windows Widgets board and web search in Windows Search. This will allow search providers like Google to extend the main Windows Search interface with their own custom web searches.

[…]

We had hoped Microsoft would finally stop forcing Windows 11 users in Europe into Edge if they clicked a link from the Windows Widgets panel or from search results, but Microsoft appears to have changed exactly how it’s implementing this. The software maker previously said it would start testing a change to Windows 11 that would see “Windows system components use the default browser to open links” in EEA markets, but that change never appeared in Windows Insider builds.

“In the EEA, Windows will always use the customers’ configured app default settings for link and file types, including industry standard browser link types (http, https),” says Microsoft. “Apps choose how to open content on Windows, and some Microsoft apps will choose to open web content in Microsoft Edge.”

[…]

Source: The EU will finally free Windows users from Bing – The Verge

Zimbra email vulnerability let hackers steal gov data – fix (and exploit) was easily visible on repository before updates

Google’s Threat Analysis Group revealed on Thursday that it discovered and worked to help patch an email server flaw used to steal data from governments in Greece, Moldova, Tunisia, Vietnam and Pakistan. The exploit, known as CVE-2023-37580, targeted email server Zimbra Collaboration to pilfer email data, user credentials and authentication tokens from organizations.

It started in Greece at the end of June. Attackers discovered the vulnerability and sent emails containing the exploit to a government organization. If someone clicked the link while logged into their Zimbra account, it automatically stole email data and set up auto-forwarding to take control of the address.

While Zimbra published a hotfix on the open source platform GitHub on July 5, most of the activity deploying the exploit happened afterward, meaning targets didn’t update the software with the fix until it was too late. It’s a good reminder to install the updates you’ve been ignoring now, and to apply new ones as soon as they become available. “These campaigns also highlight how attackers monitor open-source repositories to opportunistically exploit vulnerabilities where the fix is in the repository, but not yet released to users,” the Google Threat Analysis Group wrote in a blog post.
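The repository-monitoring point cuts both ways: defenders can watch a project’s commit log for security-relevant changes and patch before an exploit is weaponized. A minimal sketch of the filtering step (the keyword heuristic is my own illustration; in practice you would feed it from the repository’s commit feed, e.g. the GitHub commits API, and alert on matches):

```python
# Commit-message keywords that often signal a security fix.
SECURITY_KEYWORDS = ("cve-", "security", "hotfix", "xss", "vulnerability")

def flag_security_commits(commit_messages: list[str]) -> list[str]:
    """Return the commit messages that look security-relevant,
    so admins can prioritize reviewing and patching."""
    return [
        msg for msg in commit_messages
        if any(kw in msg.lower() for kw in SECURITY_KEYWORDS)
    ]
```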

Around mid-July, it became clear that threat group Winter Vivern got ahold of the exploit. Winter Vivern targeted government organizations in Moldova and Tunisia. Then, a third unknown actor used the exploit to phish for credentials from members of the Vietnam government. That data got published to an official government domain, likely run by the attackers. The final campaign Google’s Threat Analysis Group detailed targeted a government organization in Pakistan to steal Zimbra authentication tokens, a secure piece of information used to access locked or protected information.

Zimbra users were also the target of a mass-phishing campaign earlier this year. Starting in April, an unknown threat actor sent emails with a phishing link in an HTML file, according to ESET researchers. Before that, in 2022, threat actors used a different Zimbra exploit to steal emails from European government and media organizations.

As of 2022, Zimbra said it had more than 200,000 customers, including over 1,000 government organizations. “The popularity of Zimbra Collaboration among organizations expected to have lower IT budgets ensures that it stays an attractive target for adversaries,” ESET researchers said about why attackers target Zimbra.

Source: An email vulnerability let hackers steal data from governments around the world

The Oura Ring Is a $300 Sleep Tracker That Suddenly Needs a Subscription

[…] Now in its third iteration, the Oura Ring tracks and analyzes a host of metrics, including your heart-rate variability (HRV), blood oxygen rate, body temperature, and sleep duration. It uses this data to give you three daily scores, tallying the quality of your sleep, activity, and “readiness.” It can also determine your chronotype (your body’s natural preferences for sleep or wakefulness), give insight into hormonal factors that can affect your sleep, and (theoretically) alert you when you’re getting sick.

I wore the Oura Ring for six months; it gave me tons of data about myself and helped me pinpoint areas in my sleep and health that I could improve. It’s also more comfortable and discreet to wear than most wristband wearable trackers.

However, the ring costs about $300 or more, depending on the style and finish, and Oura’s app now requires a roughly $72 yearly subscription to access most of the data and reports.

(Oura recently announced that the cost of the ring is eligible for reimbursement through a flexible spending account [FSA] or health spending account [HSA]. The subscription is not.)

If you just want to track your sleep cycles and get tips, a free (or modestly priced) sleep-tracking app may do the trick.

[…]

Source: The Oura Ring Is a $300 Sleep Tracker That Provides Tons of Data. But Is It Worth It? | Reviews by Wirecutter

So what do you get with the membership?

  • In-depth sleep analysis, every morning
  • Personalized health insights, 24/7
  • Live & accurate heart rate monitoring
  • Body temperature readings for early illness detection and period prediction (in beta)
  • Workout Heart Rate Tracking
  • SpO2 Monitoring
  • Rest Mode
  • Bedtime Guidance
  • Track More Movement
  • Restorative Time
  • Trends Over Time
  • Tags
  • Insights from Audio Sessions

And what if you want to continue for free?

Non-paying members have access to 3 simple daily scores: Sleep, Readiness, and Activity, as well as our interactive and educational Explore content.

Source: More power to you with Oura Membership.

This is a pretty stunning turn of events, for two reasons:

One, the Oura Ring was supposed to be the privacy-friendly option. So what data are they sending to central servers, and why? (That’s the only way they can justify a subscription.)

Two, why is data that doesn’t need to be sent to the servers not shown in the free version of the app?

For the price of the ring, this is a pretty shameless money grab.

The Netherlands wants EU measures against misleading “discounts” on altered prices

From January 1, 2023, a seller may no longer increase the price of a product for a short period of time, then reduce the price and then present this ‘before’ price as an offer or a significant discount.

Despite this tightening, consumers still face misleading discounts, especially in the run-up to the holidays. According to the regulator ACM, the new rules are not being sufficiently complied with. In addition, when presenting offers, sellers often refer to a suggested retail price instead of the product’s original selling price.

That is why Minister Adriaansens is calling for a new EU rule. This would bar companies from citing manufacturers’ suggested retail prices in discount promotions if sellers do not actually use those prices. The use of completely invented recommended prices is already legally prohibited.

The Netherlands also wants the EU to make it possible for a Member State to ban door-to-door sales and/or telemarketing.

[…]

Source: Nederland wil maatregelen tegen misleiding bij kortingen door adviesprijzen – Emerce

Cracking group files SEC complaint on hacked company for failure to disclose breach

Affiliates of ransomware gang AlphV (aka BlackCat) claimed to have compromised digital lending firm MeridianLink – and reportedly filed an SEC complaint against the fintech firm for failing to disclose the intrusion to the US watchdog.

First reported by DataBreaches, the break-in apparently happened on November 7. AlphV’s operatives claimed they did not encrypt any files but did steal some data – and MeridianLink was allegedly aware of the intrusion the day it occurred.

In screenshots shared with The Register and posted on social media, the AlphV SEC submission claims MeridianLink made a “material misstatement or omission” in its filings and financial statements, “or a failure to file.”

The thoughtful folks at AlphV asserted they are simply filing the paperwork for MeridianLink – and giving it “24 hours before we publish the data in its entirety.”

The Register asked the SEC about the AlphV complaint. “We decline to comment,” the spokesperson replied.

Source: Clorox CISO flushes self after multi-million-dollar attack • The Register

The Epic Vs. Google Courtroom Battle Shows Google Routinely Hiding and Deleting Chats and Documents They Should (legally) Keep

[…] back in 2020 Epic added an option to Fortnite on mobile that let players buy Fortnite’s in-game V-Bucks currency directly from the company at a discount, bypassing both Apple’s and Google’s app store fees. This violated Apple and Google policies Epic agreed to and quickly led to both companies removing Fortnite from their respective mobile phone app stores. That triggered a lawsuit from Epic and led to a protracted 2021 legal fight against Apple over how Apple ran its app store, the monopoly it may have had, and the fees it charged app developers on in-app purchases. And now Epic is waging a similar legal battle against Google.

[…]

As reported by The Verge on November 6, the first day of the trial, Epic was allowed to tell the jury that Google may have destroyed or hidden relevant evidence. And throughout the first six days of the trial, Epic’s lawyers have continued to bring up how few chatlogs Google provided during discovery, and have grilled Google execs over deleted chats and jokes about hiding conversations.

On November 7, Google Information Governance Lead Genaro Lopez was questioned multiple times about the seemingly missing chatlogs, and the company’s policy of telling employees to chat “off the record” about sensitive issues that could cause problems later down the line. Epic’s legal team also went after Google’s chat system, which includes a tool that lets its employees prevent chat history from being saved, and pointed out that Google employees were doing this even after a legal hold was put on the company following the Fortnite lawsuit. Asked if Google could have changed this policy and forced chats to be saved, Lopez agreed that it could have been altered, but wasn’t.

“You cannot guarantee that the documents that were destroyed will contradict the testimony we’re going to hear?” asked Epic’s lawyer. Lopez couldn’t make that guarantee.

On November 8, Google Play’s VP of Apps and Games Purnima Kochikar was also questioned about deleted chats and explained that the court won’t ever see her chat logs.

“During this case, you had your default setting to delete chats every 24 hours, correct?” Epic’s legal team asked.

“That was the default,” Kochikar said. She also confirmed she didn’t take any steps to change this setting.


On November 9, some saved chat messages from Google’s head of platforms & ecosystems strategy for Android, Margaret Lam, showed her directly asking someone to turn off chat history due to “sensitivity with legal these days :)”.

Lam claimed in court that no Google attorney had briefed her on preserving chats during Epic’s legal hold. However, Epic’s lawyers weren’t done, and continued to show messages in which Lam asked people to turn off chat history. The Verge reports that one of these situations included a colleague pushing back and insisting that he was on a legal hold. In response, Lam messaged: “Ok maybe I take you off this convo :)”.

At another point, Lam messaged someone else: “also just realized our history is on 🙊 can we turn it off? Haha”.

Lam did push back, claiming that she went to legal for better advice after these conversations and now understands she failed to comply with the legal hold.

Then on November 13, James Kolotouros, VP of Android platform partnerships, admitted that he couldn't remember a single instance of ever turning on his chat history.

Google’s CEO wasn’t saving evidence, either

And today, during Google CEO Sundar Pichai’s time on the stand, Epic was able to get him to confirm that he also wasn’t saving his chats, letting messages auto-delete after 24 hours. Epic also showed evidence of Pichai asking for chat history to be turned off and then trying to delete that message, though the Google CEO claimed that was a glitch.

Not only that, Pichai confirmed that he has in the past marked emails as attorney/client privileged even when he was not seeking legal advice, just so they didn't get forwarded. Pichai told Epic's lawyers that nobody told him that was wrong, though he now admits he shouldn't have done it.

Epic's goal in all of this has been to show that Google may have been deleting chats or hiding evidence. That would help it argue to the jury that the Android platform creator deliberately avoided creating a legal paper trail, implying the company has something to hide from the court. That in turn makes Google seem less trustworthy and colors all of its actions in a different light, something that could ultimately swing a jury one way or the other.

Regardless of whether the jury cares about what has happened, the judge in the case very much seems to. Judge James Donato appears so fed up with the situation that on November 13, he demanded that Google's chief legal officer show up in court by November 16 to explain what's going on. If he doesn't show, or can't give a good enough reason for why so much evidence was seemingly destroyed, the judge is considering instructing the jury to trust Google less than they otherwise might.

Needless to say, such a turn would not be good for Google’s fortunes in its continuing proceedings with Epic.

Source: The Epic Vs. Google Courtroom Battle Sounds Bonkers

Rivian update bricks infotainment – corporate comms respond quickly and publicly on Reddit

Hi All,

We made an error with the 2023.42 OTA update – a fat finger where the wrong build with the wrong security certificates was sent out. We cancelled the campaign and we will restart it with the proper software that went through the different campaigns of beta testing.

Service will be contacting impacted customers and will go through the resolution options. That may require physical repair in some cases.

This is on us – we messed up. Thanks for your support and your patience as we go through this.

* Update 1 (11/13, 10:45 PM PT): The issue impacts the infotainment system. In most cases, the rest of the vehicle systems are still operational. A vehicle reset or sleep cycle will not solve the issue. We are validating the best options to address the issue for the impacted vehicles. Our customer support team is prioritizing support for our customers related to this issue. Thank you.

*Update 2 (11/14, 11:30 AM PT): Hi all, As I mentioned yesterday, we identified an issue in our recent software update 2023.42.0 that impacted the infotainment system on a number of R1T and R1S vehicles. In most cases, the rest of the vehicle systems and the mobile app will remain functional. If you’re an impacted owner, you should have received an email and a text communication. We understand that this is frustrating and we are really sorry for this inconvenience. The team continues to actively work on the best possible solution to fix the impacted vehicles, and we will keep the community updated. In the meantime, our Service team is prioritizing this issue and you can reach out to them at 1-855-748-4265.

*Update 3 (11/14, 7 PM PT): We just emailed the impacted owners with next steps. The team managed to build a solution, and we will start rolling it out tomorrow.

*Update 4 (11/15 11:30 AM PT): the team has been able to build a solution that fixes the issue remotely. Roll out starting today. Thanks to the community for the support.

Source: 2023.42 OTA Update Issue : Rivian

As far as I am concerned, well done – everyone was kept informed and a fix for a tough problem was rolled out fairly quickly. Mistakes happen everywhere, so what matters more is that they are fixed and that people are informed.

It does, however, highlight the security issues of automatic updates.
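The root cause here was a build signed with the wrong certificates, which is exactly the kind of mistake signature checks on the vehicle side are meant to catch before installation. As a minimal, hypothetical sketch (names and byte values are invented stand-ins, not Rivian's actual system), an updater can pin the fingerprint of the expected signing certificate and refuse any build whose certificate doesn't match:

```python
import hashlib

# Hypothetical: the vehicle ships with the SHA-256 fingerprint of the
# production signing certificate baked in (stand-in bytes used here).
PINNED_FINGERPRINT = hashlib.sha256(b"production-signing-cert").hexdigest()

def should_install(update_cert_bytes: bytes) -> bool:
    """Refuse the update unless its signing cert matches the pinned one."""
    fingerprint = hashlib.sha256(update_cert_bytes).hexdigest()
    return fingerprint == PINNED_FINGERPRINT

assert should_install(b"production-signing-cert")    # correct build accepted
assert not should_install(b"staging-signing-cert")   # wrong certs rejected
```

A check like this turns a "fat finger" in the release pipeline into a rejected update rather than a bricked head unit, though it obviously doesn't help if the pinning itself is misconfigured.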

Clorox CISO leaves after > 1/3rd billion spent on breach

The Clorox Company’s chief security officer has left her job in the wake of a corporate network breach that cost the manufacturer hundreds of millions of dollars.

[…]

Chau Banks, the chief information and data officer of the $7 billion biz, who reportedly penned the memo, will fill Bogac's role as Clorox continues mopping up the mess and searches for a replacement.

[…]

Clorox first disclosed its computer network had been compromised in a US Securities and Exchange Commission filing in August. At the time, it said some of its IT systems and operations had been “temporarily impaired” due to “unauthorized activity” in its IT environment.

A subsequent SEC filing in September noted “wide scale disruption” across the business because of the intrusion.

Those disruptions included processing orders by hand after some systems were taken offline.

[…]

In its first-quarter fiscal 2024 earnings report at the start of this month, Clorox reported a 20 percent drop in year-on-year Q1 net sales and noted the $356 million decrease was “driven largely” by the cyberattack.

In a subsequent SEC filing, Clorox noted that expenses related to the network break-in for the three months ending September 30 totaled $24 million.

“The costs incurred relate primarily to third-party consulting services, including IT recovery and forensic experts and other professional services incurred to investigate and remediate the attack, as well as incremental operating costs incurred from the resulting disruption to the company’s business operations,” according to the Form 10-Q filing.

Clorox also revealed it expects to incur more expenses related to the security super-snafu in future periods.

[…]


Source: Clorox CISO flushes self after multimillion-dollar attack

The EU Commission's Alleged CSAM Regulation 'Experts' giving it free rein to spy on everyone: can't be found. OK then.

Everyone who wants client-side scanning to be a thing insists it’s a good idea with no potential downsides. The only hangup, they insist, is tech companies’ unwillingness to implement it. And by “implement,” I mean — in far too many cases — introducing deliberate (and exploitable!) weaknesses in end-to-end encryption.

End-to-end encryption only works if both ends are encrypted. Taking the encryption off one side to engage in content scanning makes it half of what it was. And if you get in the business of scanning users’ content for supposed child sexual abuse material (CSAM), governments may start asking you to “scan” for other stuff… like infringing content, terrorist stuff, people talking about crimes, stuff that contradicts the government’s narratives, things political rivals are saying. The list goes on and on.

Multiple experts have pointed out how the anti-CSAM efforts preferred by the EU would not only not work, but also subject millions of innocent people to the whims of malicious hackers and malicious governments. Governments also made these same points, finally forcing the EU Commission to back down on its attempt to undermine encryption, if not (practically) outlaw it entirely.

The Commission has always claimed its anti-encryption, pro-client-side scanning stance is backed by sound advice given to it by the experts it has consulted. But when asked who was consulted, the EU Commission has refused to answer the question. This is from the Irish Council of Civil Liberties (ICCL), which asked the Commission a simple question, but — like the Superintendent Chalmers referenced in the headline — was summarily rejected.

In response to a request for documents pertaining to the decision-making behind the proposed CSAM regulation, the European Commission failed to disclose a list of companies who were consulted about the technical feasibility of detecting CSAM without undermining encryption. This list “clearly fell within the scope” of the Irish Council for Civil Liberties’ request. 

If you’re not familiar with the reference, we’ll get you up to speed.

22 Short Films About Springfield is an episode of “The Simpsons” that originally aired in 1996. One particular “film” has become an internet meme legend: the one dealing with Principal Seymour Skinner’s attempt to impress his boss (Superintendent Chalmers) with a home-cooked meal.

One thing leads to another (and by one thing to another, I mean a fire in the kitchen as Skinner attempts to portray fast-food burgers as “steamed hams” and not the “steamed clams” promised earlier). That culminates in this spectacular cover-up by Principal Skinner when the superintendent asks about the extremely apparent fire occurring in the kitchen:

Principal Skinner: Oh well, that was wonderful. A good time was had by all. I’m pooped.

Chalmers: Yes. I should be– Good Lord! What is happening in there?

Principal Skinner: Aurora borealis.

Chalmers: Uh- Aurora borealis. At this time of year, at this time of day, in this part of the country, localized entirely within your kitchen?

Principal Skinner: Yes.

Chalmers [meekly]: May I see it?

Principal Skinner: No.

That is what happened here. Everyone opposing the EU Commission’s CSAM (i.e., “chat control”) efforts trotted out their experts, making it clearly apparent who was saying what and what their relevant expertise was. The EU insisted it had its own battery of experts. The ICCL said: “May we see them?”

The EU Commission: No.

Not good enough, said the ICCL. But that's what a rights advocate would be expected to say. What's less expected is the EU Ombudsman declaring the ICCL had the right to see this very specific aurora borealis.

After the Commission acknowledged to the EU Ombudsman that it, in fact, had such a list, but failed to disclose its existence to Dr Kris Shrishak, the Ombudsman held the Commission’s behaviour constituted “maladministration”.  

The Ombudsman held: “[t]he Commission did not identify the list of experts as falling within the scope of the complainant’s request. This means that the complainant did not have the opportunity to challenge (the reasons for) the institution’s refusal to disclose the document. This constitutes maladministration.” 

As the report further notes, the only existing documentation of this supposed consultation with experts has been reduced to a single self-serving document issued by the EU Commission. Any objections or interjections were added or subtracted as the EU Commission preferred before it presented a "final" version that served its purposes. Any supporting documentation, including comments from participating stakeholders, was sent to the digital shredder.

As concerns the EUIF meetings, the Commission representatives explained that three online technical workshops took place in 2020. During the first workshop, academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion. After this workshop, a first draft of the ‘outcome document’ was produced, which summarises the input given orally by the participants and references a number of relevant documents. This first draft was shared with the participants via an online file sharing service and some participants provided written comments. Other participants commented orally on the first draft during the second workshop. Those contributions were then added to the final version of the ‘outcome document’ that was presented during the third and final workshop for the participants’ endorsement. This ‘outcome document’ is the only document that was produced in relation to the substance of these workshops. It was subsequently shared with the EUIF. One year later, it was used as supporting information to the impact assessment report.

In other words, the EU took what it liked and included it. The rest of it disappeared from the permanent record, supposedly because the EU Commission routinely purges any email communications more than two years old. This is obviously ridiculous in this context, considering this particular piece of legislation has been under discussion for far longer than that.

But, in the end, the EU Commission wins because it's the larger bureaucracy. The ombudsman refused to issue a recommendation. Instead, it instructed the Commission to treat the ICCL's request as "new" and perform another search for documents. "Swiftly." Great, as far as that goes. But it doesn't go far. The ombudsman also says it believes the EU Commission when it says only its version of the EUIF report survived the periodic document cull.

In the end, all that survives is this: the EU consulted with affected entities. It asked them to comment on the proposal. It folded those comments into its presentation. It likely presented only comments that supported its efforts. Dissenting opinions were auto-culled by EU Commission email protocols. It never sought further input, despite having passed the two-year mark without having converted the proposal into law. All that’s left, the ombudsman says, is likely a one-sided version of the Commission’s proposal. And if the ICCL doesn’t like it, well… it will have to find some other way to argue with the “experts” the Commission either ignored or auto-deleted. The government wins, even without winning arguments. Go figure.

Source: Steamed Hams, Except It’s The EU Commission’s Alleged CSAM Regulation ‘Experts’ | Techdirt

WhatsApp chats backed up to Google Drive will soon take up storage space

You may want to check your Google account storage situation if you back up your WhatsApp conversations to Drive on Android. In 2018, WhatsApp and Google announced that you could save your WhatsApp chat history to Drive without it counting towards your storage quota. But starting in December 2023, backing up the messaging app to Drive will count towards your Google account cloud storage space if you're a WhatsApp beta user. If you don't use the app's beta version, you won't feel the change in policy until next year, when it "gradually" makes its way to all Android devices.

[…]

Google has linked to its storage management tools in its post to make it easier to remove large files or photos you no longer need. You can also delete items from within WhatsApp, so they’ll no longer be included in your next backup. Of course, you also have the option to purchase extra storage with Google One, which will set you back at least $2 a month for 100GB. The company promises to provide eligible users with “limited, one-time Google One promotions” soon, though, so it may be best to wait for those before getting a subscription. Take note that this change will only affect you if you back up your chat history using your personal account. If you have a Workspace account through your job or another organization, you don’t have to worry about WhatsApp taking up a chunk of your cloud storage space.

Source: WhatsApp chats backed up to Google Drive will soon take up storage space

Researchers printed a robotic hand with bones, ligaments and tendons for the first time

Researchers at the public university ETH Zurich, along with a US-based startup called Inkbit, have done the impossible. They've printed a robot hand complete with bones, ligaments and tendons for the very first time, representing a major leap forward in 3D printing technology. It's worth noting that the various parts of the hand were printed simultaneously, and not cobbled together after the fact, as detailed in a study published in Nature.

Each of the robotic hand's various parts was made from a different polymer of varying softness and rigidity, using a new laser-scanning technique that lets 3D printers create "special plastics with elastic qualities" all in one go. This obviously opens up new possibilities in the fast-moving field of prosthetics, but also in any field that requires the production of soft robotic structures.

Basically, the researchers at Inkbit developed a method to 3D print slow-curing plastics, whereas the technology was previously reserved for fast-curing plastics. This hybrid printing method presents all kinds of advantages when compared to standard fast-cure projects, such as increased durability and enhanced elastic properties. The tech also allows us to mimic nature more accurately, as seen in the aforementioned robotic hand.

“Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” ETH Zurich robotics professor Robert Katzschmann writes in the study.


This advancement still prints layer by layer, but an integrated scanner constantly checks the surface for irregularities before telling the system to move on to the next material type. Additionally, the extruder and scraper have been updated to allow for the use of slow-curing polymers. The stiffness can be fine-tuned to create unique objects that suit various industries. Making human-like appendages is one use case, but so is manufacturing objects that soak up noise and vibrations.
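The scan-then-adjust loop described above can be sketched in a few lines. This is a hypothetical illustration only (function names, units, and the correction rule are my assumptions, not Inkbit's actual algorithm): the scanner's height map is compared against the target, and the next pass deposits more material where the part came out low and less where it came out high:

```python
def next_layer_deposit(nominal_mm, target_height_mm, scanned_height_mm):
    """Per-point deposit for the next pass: nominal amount plus a
    correction equal to how far the scanned surface is below target.
    Negative deposits are clamped to zero (you can't un-print)."""
    return [max(0.0, nominal_mm + (target - scanned))
            for target, scanned in zip(target_height_mm, scanned_height_mm)]

# A spot that came out 0.01 mm low gets 0.01 mm extra on the next pass;
# a spot 0.005 mm high gets 0.005 mm less.
deposits = next_layer_deposit(0.05, [1.0, 1.0], [0.99, 1.005])
assert abs(deposits[0] - 0.06) < 1e-9
assert abs(deposits[1] - 0.045) < 1e-9
```

The key design point, per the article, is that correcting in the next layer removes the need to scrape a flawed layer off, which is what makes slow-curing polymers printable at all.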

MIT-affiliated startup Inkbit helped develop this technology and has already begun thinking about how to make money off of it. The company will soon start to sell these newly-made printers to manufacturers but will also sell complex 3D-printed objects that make use of the technology to smaller entities.

Source: Researchers printed a robotic hand with bones, ligaments and tendons for the first time

Google is testing community-sourced notes for search results

Google is experimenting with a feature that would allow people to add their own notes to search results for anyone to see. In theory, this would make results more helpful, providing a bit of human perspective — like feedback on recipe links or tips relating to travel queries — so people can better find the information that’s relevant to them. Notes are available now as an opt-in feature in Google’s Search Labs.

Search Labs is where Google tests new features that may or may not eventually make it to its flagship search engine. For those who are enrolled and have opted in for the Notes experiment, a Notes button will appear in Search and Discover, and tapping that will pull up all the insights other people have shared about a given article. You can also add your own, and dress it up with stickers, photos and, down the line (for US users only), AI-generated images.

A Note on a recipe from Google Search
Google

While community-sourced notes sound a bit like a recipe for disaster in an age of rampant misinformation and trolling, especially with the inclusion of AI imagery, Google says it will use “a combination of algorithmic protections and human moderation to make sure notes are as safe, helpful and relevant as possible, and to protect against harmful or abusive content.” The company is also looking into ways to let site owners add notes to their own pages.

It’s still just a test, and users will have the opportunity to submit feedback based on their experiences with Notes. The experimental feature has started rolling out for Search Labs on Android and iOS in the US and India.

Source: Google is testing community-sourced notes for search results

Researchers use magnetic fields for non-invasive blood glucose monitoring

Synex Medical, a Toronto-based biotech research firm backed by Sam Altman (the CEO of OpenAI), has developed a tool that can measure your blood glucose levels without a finger prick. It uses a combination of low-field magnets and low-frequency radio waves to directly measure blood sugar levels non-invasively when a user inserts a finger into the device.

The tool uses magnetic resonance spectroscopy (MRS), which is similar to an MRI. Jamie Near, an Associate Professor at the University of Toronto who specializes in research on MRS technology, told Engadget that "[an] MRI uses magnetic fields to make images of the distribution of hydrogen protons in water that is abundant in our body tissues. In MRS, the same basic principles are used to detect other chemicals that contain hydrogen." When a user's fingertip is placed inside the magnetic field, the frequency of a specific molecule, in this case glucose, is measured in parts per million. While the focus was on glucose for this project, MRS could be used to measure other metabolites, according to Synex, including lactate, ketones and amino acids.
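That "parts per million" figure is the standard chemical-shift scale used in NMR and MRS: a signal's frequency offset from a reference, normalized by the reference frequency, so the value is independent of magnet strength. As a quick illustration (the numbers below are made-up example values, not Synex's measurements):

```python
def chemical_shift_ppm(f_signal_hz: float, f_reference_hz: float) -> float:
    """Chemical shift in ppm: frequency offset relative to the reference."""
    return (f_signal_hz - f_reference_hz) / f_reference_hz * 1e6

# On a hypothetical 60 MHz instrument, a signal 312 Hz above the
# reference sits at a 5.2 ppm shift (312 / 60e6 * 1e6 = 5.2):
shift = chemical_shift_ppm(60_000_312, 60_000_000)
print(round(shift, 1))  # 5.2
```

Because the scale is normalized, the same molecule shows up at the same ppm value whether it is measured in a room-sized MRI magnet or a finger-sized low-field device.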

[…]

“MRI machines can fit an entire human body and have been used to target molecule concentrations in the brain through localized spectroscopy,” he explained. “Synex has shrunk this technology to measure concentrations in a finger. I have reviewed their white paper and seen the instrument work.” Simpson said Synex’s ability to retrofit MRS technology into a small box is an engineering feat.

[…]

But there is competition in the space for no-prick diagnostic tools. Know Labs is trying to get approval for a portable glucose monitor that relies on a custom-made Bio-RFID sensing technology, which uses radio waves to detect blood glucose levels in the palm of your hand. When the Know Labs device was tested against a Dexcom G6 continuous glucose monitor in a study, readings of blood glucose levels using its palm sensor technology were "within threshold" only 46 percent of the time. While the readings are technically in accordance with FDA accuracy limits for a new blood glucose monitor, Know Labs is still working out kinks through scientific research before it can begin FDA clinical trials.

Another start-up, German company DiaMonTech, is currently developing a pocket-sized diagnostic device that is still being tested and fine-tuned to measure glucose through "photothermal detection." It uses mid-infrared lasers that essentially scan the tissue fluid at the fingertip to detect glucose molecules. CNBC and Bloomberg reported that even Apple has been "quietly developing" a sensor that can check your blood sugar levels through its wearables, though the company never confirmed it. A scientific director at Synex, Mohana Ray, told Engadget that the company would eventually like to develop a wearable, but that further miniaturization is needed before it can bring a commercial product to market.

[…]

Source: Researchers use magnetic fields for non-invasive blood glucose monitoring

Three thousand years’ worth of carbon monoxide records show positive impact of global intervention in the 1980s

An international team of scientists has reconstructed a historic record of the atmospheric trace gas carbon monoxide by measuring air in polar ice and air collected at an Antarctic research station.


The team, led by the French National Centre for Scientific Research (CNRS) and Australia's national science agency, CSIRO, assembled the first complete record of carbon monoxide concentrations in the southern hemisphere, based on these air measurements.

The findings are published in the journal Climate of the Past.

The record spans the last three millennia. CSIRO atmospheric scientist Dr. David Etheridge said that the record provides a rare positive story in the context of climate change.

“Atmospheric carbon monoxide started climbing from its natural background level around the time of the industrial revolution, accelerating in the mid-1900s and peaking in the early-mid 1980s,” Dr. Etheridge said.

“The good news is that levels of the trace gas are now stable or even trending down and have been since the late 1980s—coinciding with the introduction of catalytic converters in cars.”

Carbon monoxide is a reactive gas that has important indirect effects on climate. It reacts with hydroxyl (OH) radicals in the atmosphere, reducing their abundance. Hydroxyl acts as a natural “detergent” for the removal of other gases contributing to climate change, including methane. Carbon monoxide also influences the levels of ozone in the lower atmosphere. Ozone is a greenhouse gas.

The authors have high confidence that a major cause of the late-1980s decline was improved combustion technologies, including the introduction of catalytic converters, an exhaust system device used in vehicles.

“The stabilization of carbon monoxide concentrations since the 1980s is a fantastic example of the role that science and technology can play in helping us understand a problem and help address it,” Dr. Etheridge said.

[…]

“Because carbon monoxide is a reactive gas, it is difficult to measure long-term trends because it is unstable in many air sample containers. Cold, clean ice, however, preserves carbon monoxide concentrations for millennia,” Dr. Etheridge said.

The CO data will be used to improve Earth systems models. This will primarily enable scientists to understand the effects that future emissions of CO and other gases (such as hydrogen) will have on pollution levels and climate as the global energy mix changes into the future.

More information: Xavier Faïn et al, Southern Hemisphere atmospheric history of carbon monoxide over the late Holocene reconstructed from multiple Antarctic ice archives, Climate of the Past (2023). DOI: 10.5194/cp-19-2287-2023

Source: Three thousand years’ worth of carbon monoxide records show positive impact of global intervention in the 1980s

Decoupling for IT Security (=privacy)

Whether we like it or not, we all use the cloud to communicate and to store and process our data. We use dozens of cloud services, sometimes indirectly and unwittingly. We do so because the cloud brings real benefits to individuals and organizations alike. We can access our data across multiple devices, communicate with anyone from anywhere, and command a remote data center’s worth of power from a handheld device.

But using the cloud means our security and privacy now depend on cloud providers. Remember: the cloud is just another way of saying “someone else’s computer.” Cloud providers are single points of failure and prime targets for hackers to scoop up everything from proprietary corporate communications to our personal photo albums and financial documents.

The risks we face from the cloud today are not an accident. For Google to show you your work emails, it has to store many copies across many servers. Even if they’re stored in encrypted form, Google must decrypt them to display your inbox on a webpage. When Zoom coordinates a call, its servers receive and then retransmit the video and audio of all the participants, learning who’s talking and what’s said. For Apple to analyze and share your photo album, it must be able to access your photos.

Hacks of cloud services happen so often that it’s hard to keep up. Breaches can be so large as to affect nearly every person in the country, as in the Equifax breach of 2017, or a large fraction of the Fortune 500 and the U.S. government, as in the SolarWinds breach of 2019-20.

It’s not just attackers we have to worry about. Some companies use their access—benefiting from weak laws, complex software, and lax oversight—to mine and sell our data.

[…]

The less someone knows, the less they can put you and your data at risk. In security this is called Least Privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and other content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

[…]

Cryptographer David Chaum first applied the decoupling approach in security protocols for anonymity and digital cash in the 1980s, long before the advent of online banking or cryptocurrencies. Chaum asked: how can a bank or a network service provider provide a service to its users without spying on them while doing so?

Chaum’s ideas included sending Internet traffic through multiple servers run by different organizations and divvying up the data so that a breach of any one node reveals minimal information about users or usage. Although these ideas have been influential, they have found only niche uses, such as in the popular Tor browser.

Trust, but Don’t Identify

The decoupling principle can protect the privacy of data in motion, such as financial transactions and Web browsing patterns that currently are wide open to vendors, banks, websites, and Internet Service Providers (ISPs).


1. Barath orders Bruce’s audiobook from Audible. 2. His bank does not know what he is buying, but it guarantees the payment. 3. A third party decrypts the order details but does not know who placed the order. 4. Audible delivers the audiobook and receives the payment.

DECOUPLED E-COMMERCE: An independent verifier is inserted between the bank and the seller, and the buyer’s identity is blinded from the verifier. Neither the seller nor the verifier can identify the buyer, and the bank cannot identify the product purchased, yet all parties can trust that the signed payment is valid.
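The payment flow in the caption is Chaum’s classic blind-signature construction. Here is a minimal sketch of an RSA blind signature with deliberately tiny primes and no padding — an illustration of the math, not a payment system (the message string is hypothetical):

```python
import hashlib, secrets
from math import gcd

# Toy RSA blind signature (Chaum). Parameters are deliberately tiny and
# there is no padding -- never use this for real money.
p, q = 1000003, 1000033            # small primes; real RSA uses 2048+ bits
n, e = p * q, 65537                # bank's public key
d = pow(e, -1, (p - 1) * (q - 1))  # bank's private signing exponent

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The buyer blinds the payment message with a random factor r,
#    so the bank cannot read what it is signing.
m = H(b"pay $20 to Audible, order 12345")
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. The bank signs blindly: it authorizes a valid payment without
#    learning what was purchased.
blind_sig = pow(blinded, d, n)

# 3. The buyer unblinds; the result is an ordinary RSA signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone can verify against the bank's public key: the payment is
#    genuine, yet the bank never saw the order.
assert pow(sig, e, n) == m
```

The unblinding works because (m·rᵉ)ᵈ = mᵈ·r mod n, so multiplying by r⁻¹ leaves a plain signature mᵈ that the bank never saw.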


1. Bruce’s browser sends a doubly encrypted request for the IP address of sigcomm.org. 2. A third-party proxy server decrypts one layer and passes on the request, replacing Bruce’s identity with an anonymous ID. 3. An Oblivious DNS server decrypts the request, looks up the IP address, and sends it back in an encrypted reply. 4. The proxy server forwards the encrypted reply to Bruce’s browser. 5. Bruce’s browser decrypts the response to obtain the IP address of sigcomm.org.

DECOUPLED WEB BROWSING: ISPs can track which websites their users visit because requests to the Domain Name System (DNS), which converts domain names to IP addresses, are unencrypted. A new protocol called Oblivious DNS can protect users’ browsing requests from third parties. Each name-resolution request is encrypted twice and then sent to an intermediary (a “proxy”) that strips out the user’s IP address and decrypts the outer layer before passing the request to a domain name server, which then decrypts the actual request. Neither the ISP nor any other computer along the way can see what name is being queried. The Oblivious resolver has the key needed to decrypt the request but no information about who placed it. The resolver encrypts its reply so that only the user can read it.
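The double-encryption relay described above can be sketched end to end. This toy uses a stand-in stream cipher and assumes pre-shared keys; the real Oblivious DNS protocol establishes keys with public-key cryptography (HPKE) instead:

```python
import hashlib, os

def layer_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode). XORing twice with the
    # same key and nonce decrypts. Real Oblivious DNS uses HPKE, not this.
    out = bytearray()
    for i in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i*32:(i+1)*32], pad))
    return bytes(out)

# Assumed pre-shared keys, one per hop.
resolver_key = os.urandom(32)  # known to client and Oblivious resolver
proxy_key = os.urandom(32)     # known to client and proxy

# 1. The client encrypts the query for the resolver, then again for the proxy.
nonce = os.urandom(12)
query = b"A? sigcomm.org"
inner = layer_xor(resolver_key, nonce, query)
outer = layer_xor(proxy_key, nonce, inner)

# 2. The proxy strips its layer: it knows who sent the request,
#    but sees only ciphertext.
at_proxy = layer_xor(proxy_key, nonce, outer)
assert at_proxy == inner and at_proxy != query

# 3. The resolver strips the inner layer: it sees the queried name,
#    but only the proxy's address, not the client's.
at_resolver = layer_xor(resolver_key, nonce, at_proxy)
assert at_resolver == query
```

The key property is the split: the proxy holds identity without content, the resolver holds content without identity, and neither layer alone reveals both.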

Similar methods have been extended beyond DNS to multiparty-relay protocols that protect the privacy of all Web browsing through free services such as Tor and subscription services such as INVISV Relay and Apple’s iCloud Private Relay.

[…]

Meetings that were once held in a private conference room are now happening in the cloud, and third parties like Zoom see it all: who, what, when, where. There’s no reason a videoconferencing company has to learn such sensitive information about every organization it provides services to. But that’s the way it works today, and we’ve all become used to it.

There are multiple threats to the security of that Zoom call. A Zoom employee could go rogue and snoop on calls. Zoom could spy on calls of other companies or harvest and sell user data to data brokers. It could use your personal data to train its AI models. And even if Zoom and all its employees are completely trustworthy, the risk of Zoom getting breached is omnipresent. Whatever Zoom can do with your data in motion, a hacker can do to that same data in a breach. Decoupling data in motion could address those threats.

[…]

Most storage and database providers started encrypting data on disk years ago, but that’s not enough to ensure security. In most cases, the data is decrypted every time it is read from disk. A hacker or malicious insider silently snooping at the cloud provider could thus intercept your data despite it having been encrypted.

Cloud-storage companies have at various times harvested user data for AI training or to sell targeted ads. Some hoard it and offer paid access back to us or just sell it wholesale to data brokers. Even the best corporate stewards of our data are getting into the advertising game, and the decade-old feudal model of security—where a single company provides users with hardware, software, and a variety of local and cloud services—is breaking down.

Decoupling can help us retain the benefits of cloud storage while keeping our data secure. As with data in motion, the risks begin with access the provider has to raw data (or that hackers gain in a breach). End-to-end encryption, with the end user holding the keys, ensures that the cloud provider can’t independently decrypt data from disk.
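A minimal sketch of that arrangement, with a toy cipher standing in for a real AEAD such as AES-GCM (names like `client_put` are illustrative): the “cloud” holds only ciphertext and an integrity tag, and the keys never leave the client.

```python
import hashlib, hmac, os

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode stream cipher, just to keep the sketch
    # dependency-free. A real client would use an authenticated cipher.
    out = bytearray()
    for i in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i*32:(i+1)*32], pad))
    return bytes(out)

cloud = {}  # the provider stores only ciphertext and a MAC, never keys

def client_put(name, plaintext, enc_key, mac_key):
    nonce = os.urandom(12)
    ct = ctr_xor(enc_key, nonce, plaintext)
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    cloud[name] = (nonce, ct, tag)          # upload: opaque to the provider

def client_get(name, enc_key, mac_key):
    nonce, ct, tag = cloud[name]
    good = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):  # detect provider-side tampering
        raise ValueError("ciphertext was modified in the cloud")
    return ctr_xor(enc_key, nonce, ct)

ek, mk = os.urandom(32), os.urandom(32)     # keys held only by the user
client_put("notes.txt", b"meeting at 10", ek, mk)
assert client_get("notes.txt", ek, mk) == b"meeting at 10"
```

A snooping insider at the provider sees only the `cloud` dictionary, and the MAC means even silent modification of stored data is detected on the next read.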

[…]

Modern protocols for decoupled data storage, like Tim Berners-Lee’s Solid, provide this sort of security. Solid is a protocol for distributed personal data stores, called pods. By giving users control over both where their pod is located and who has access to the data within it—at a fine-grained level—Solid ensures that data is under user control even if the hosting provider or app developer goes rogue or has a breach. In this model, users and organizations can manage their own risk as they see fit, sharing only the data necessary for each particular use.
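The pod idea can be modeled in a few lines. This is a toy: the class and method names are illustrative and are not the actual Solid API, which works over HTTP with Web Access Control documents.

```python
# Toy model of a Solid-style personal data store ("pod") with
# fine-grained, owner-controlled access.
class Pod:
    def __init__(self, owner: str):
        self.owner = owner
        self.data = {}   # resource path -> content
        self.acl = {}    # resource path -> set of agents allowed to read

    def put(self, agent: str, path: str, content: str):
        if agent != self.owner:
            raise PermissionError("only the owner writes in this sketch")
        self.data[path] = content
        self.acl.setdefault(path, {self.owner})

    def grant(self, agent: str, path: str, grantee: str):
        if agent != self.owner:
            raise PermissionError("only the owner changes ACLs")
        self.acl[path].add(grantee)

    def read(self, agent: str, path: str) -> str:
        if agent not in self.acl.get(path, set()):
            raise PermissionError(f"{agent} may not read {path}")
        return self.data[path]

pod = Pod("alice")
pod.put("alice", "/finance/credit.json", '{"score": 780}')
pod.grant("alice", "/finance/credit.json", "bank")  # only for this one use
assert pod.read("bank", "/finance/credit.json") == '{"score": 780}'
```

The point of the design is that access decisions live with the user’s pod, not with the hosting provider or the app developer, so a rogue or breached host gains nothing beyond what the owner explicitly granted.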

[…]

The last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.

With TEEs in the cloud, the final piece of the decoupling puzzle drops into place. An organization can keep and share its data securely at rest, move it securely in motion, and decrypt and analyze it in a TEE such that the cloud provider doesn’t have access. Once the computation is done, the results can be reencrypted and shipped off to storage. CPU-based TEEs are now widely available among cloud providers, and soon GPU-based TEEs—useful for AI applications—will be common as well.
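The attestation step can be sketched with an HMAC key standing in for the hardware-fused signing key; real TEE quote formats and vendor attestation services are considerably more involved.

```python
import hashlib, hmac, os

# Toy attestation flow. In real TEEs (SGX, SEV-SNP, TDX) the signing key
# is fused into the chip and endorsed by the vendor; here an HMAC key
# held only by the "chip" stands in for that machinery.
vendor_key = os.urandom(32)  # burned into the chip, known to the vendor

def enclave_run(code: bytes, private_input: bytes):
    """Runs inside the TEE: compute, then attest to what was run."""
    result = hashlib.sha256(private_input).hexdigest().encode()  # stand-in computation
    measurement = hashlib.sha256(code).digest()                  # hash of the loaded code
    quote = hmac.new(vendor_key, measurement + result, hashlib.sha256).digest()
    return result, measurement, quote

def verify_attestation(code, result, measurement, quote) -> bool:
    """Customer-side check: the expected code ran and the result is untampered."""
    if hashlib.sha256(code).digest() != measurement:
        return False  # a different program ran in the enclave
    expected = hmac.new(vendor_key, measurement + result, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

code = b"def analyze(data): ..."
result, meas, quote = enclave_run(code, b"secret customer data")
assert verify_attestation(code, result, meas, quote)
```

Because the quote binds the code measurement to the output, neither the cloud provider swapping in different code nor tampering with the result goes undetected by the customer.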

[…]

Decoupling also allows us to look at security more holistically. For example, we can dispense with the distinction between security and privacy. Historically, privacy meant freedom from observation, usually for an individual person. Security, on the other hand, was about keeping an organization’s data safe and preventing an adversary from doing bad things to its resources or infrastructure.

There are still rare instances where security and privacy differ, but organizations and individuals are now using the same cloud services and facing similar threats. Security and privacy have converged, and we can usefully think about them together as we apply decoupling.

[…]

Decoupling isn’t a panacea. There will always be new, clever side-channel attacks. And most decoupling solutions assume a degree of noncollusion between independent companies or organizations. But that noncollusion is already an implicit assumption today: we trust that Google and Advanced Micro Devices will not conspire to break the security of the TEEs they deploy, for example, because the reputational harm from being found out would hurt their businesses. The primary risk, real but also often overstated, is if a government secretly compels companies to introduce backdoors into their systems. In an age of international cloud services, this would be hard to conceal and would cause irreparable harm.

[…]

Imagine that individuals and organizations held their credit data in cloud-hosted repositories that enable fine-grained encryption and access control. Applying for a loan could then take advantage of all three modes of decoupling. First, the user could employ Solid or a similar technology to grant access to Equifax and a bank only for the specific loan application. Second, the communications to and from secure enclaves in the cloud could be decoupled and secured to conceal who is requesting the credit analysis and the identity of the loan applicant. Third, computations by a credit-analysis algorithm could run in a TEE. The user could use an external auditor to confirm that only that specific algorithm was run. The credit-scoring algorithm might be proprietary, and that’s fine: in this approach, Equifax doesn’t need to reveal it to the user, just as the user doesn’t need to give Equifax access to unencrypted data outside of a TEE.

Building this is easier said than done, of course. But it’s practical today, using widely available technologies. The barriers are more economic than technical.

[…]

One of the challenges of trying to regulate tech is that industry incumbents push for tech-only approaches that simply whitewash bad practices. For example, when Facebook rolls out “privacy-enhancing” advertising, but still collects every move you make, has control of all the data you put on its platform, and is embedded in nearly every website you visit, that privacy technology does little to protect you. We need to think beyond minor, superficial fixes.

Decoupling might seem strange at first, but it’s built on familiar ideas. Computing’s main tricks are abstraction and indirection. Abstraction involves hiding the messy details of something inside a nice clean package: when you use Gmail, you don’t have to think about the hundreds of thousands of Google servers that have stored or processed your data. Indirection involves creating a new intermediary between two existing things, such as when Uber wedged its app between passengers and drivers.

The cloud as we know it today is born of three decades of increasing abstraction and indirection. Communications, storage, and compute infrastructure for a typical company were once run on a server in a closet. Next, companies no longer had to maintain a server closet, but could rent a spot in a dedicated colocation facility. After that, colocation facilities decided to rent out their own servers to companies. Then, with virtualization software, companies could get the illusion of having a server while actually just running a virtual machine on a server they rented somewhere. Finally, with serverless computing and most types of software as a service, we no longer know or care where or how software runs in the cloud, just that it does what we need it to do.

[…]

We’re now at a turning point where we can add further abstraction and indirection to improve security, turning the tables on the cloud providers and taking back control as organizations and individuals while still benefiting from what they do.

The needed protocols and infrastructure exist, and there are services that can do all of this already, without sacrificing the performance, quality, and usability of conventional cloud services.

But we cannot just rely on industry to take care of this. Self-regulation is a time-honored stall tactic: a piecemeal or superficial tech-only approach would likely undermine the will of the public and regulators to take action. We need a belt-and-suspenders strategy, with government policy that mandates decoupling-based best practices, a tech sector that implements this architecture, and public awareness of both the need for and the benefits of this better way forward.

Source: Essays: Decoupling for Security – Schneier on Security

Google Sues Men Who Weaponized DMCA Notices to Crush Competition

Two men who allegedly used 65 Google accounts to bombard Google with fraudulent DMCA takedown notices targeting up to 620,000 URLs, have been named in a Google lawsuit filed in California on Monday. Google says the men weaponized copyright law’s notice-and-takedown system to sabotage competitors’ trade, while damaging the search engine’s business and those of its customers.

While all non-compliant DMCA takedown notices are invalid by default, there’s a huge difference between those sent in error and others crafted for purely malicious purposes.

Bogus DMCA takedown notices are nothing new, but the rise of organized groups using malicious DMCA notices as a business tool has been apparent in recent years.

Since the vast majority of culprits face zero consequences, that may have acted as motivation to send more. Through a lawsuit filed at a California court on Monday, Google appears to be sending the message that enough is enough.

Defendants Weaponized DMCA Takedowns

Google’s complaint targets Nguyen Van Duc and Pham Van Thien, both said to be residents of Vietnam and the leaders of up to 20 Doe defendants. Google says the defendants systematically abused accounts “to submit a barrage” of fraudulent copyright takedown requests aimed at removing their competitors’ website URLs from Google Search results.

[…]

The misrepresentations in notices sent to Google were potentially damaging to other parties too. Under fake names, the defendants falsely claimed to represent large companies such as Amazon, Twitter, and NBC News, plus sports teams including the Philadelphia Eagles, Los Angeles Lakers, and San Diego Padres.

In similarly false notices, they claimed to represent famous individuals including Elon Musk, Taylor Swift, LeVar Burton, and Kanye West.

The complaint notes that some notices were submitted under company names that do not exist in the United States, at addresses where innocent families and businesses can be found. Google says that despite these claims, the defendants can be found in Vietnam from where they proudly advertise their ‘SEO’ scheme to others, including via YouTube.

[…]

Source: Google Sues Men Who Weaponized DMCA Notices to Crush Competition * TorrentFreak

Who would have thought that such a super poorly designed piece of copyright law would be used for this? Probably almost everyone who has ever been hit by a DMCA notice with no recourse, that’s who. This is but a tiny, tiny fraction of the iceberg, with the actual copyright holders at the top. The only way to stop this is to take down the whole DMCA system.

AI weather forecaster complements traditional models very well

Global medium-range weather forecasting is critical to decision-making across many social and economic domains. Traditional numerical weather prediction uses increased compute resources to improve forecast accuracy, but does not directly use historical weather data to improve the underlying model. Here, we introduce “GraphCast,” a machine learning-based method trained directly from reanalysis data. It predicts hundreds of weather variables, over 10 days at 0.25° resolution globally, in under one minute. GraphCast significantly outperforms the most accurate operational deterministic systems on 90% of 1380 verification targets, and its forecasts support better severe event prediction, including tropical cyclone tracking, atmospheric rivers, and extreme temperatures. GraphCast is a key advance in accurate and efficient weather forecasting, and helps realize the promise of machine learning for modeling complex dynamical systems.

[…]

The dominant approach for weather forecasting today is “numerical weather prediction” (NWP), which involves solving the governing equations of weather using supercomputers.

[…]

NWP methods are improved by highly trained experts innovating better models, algorithms, and approximations, which can be a time-consuming and costly process.
Machine learning-based weather prediction (MLWP) offers an alternative to traditional NWP, where forecast models can be trained from historical data, including observations and analysis data.
[…]
In medium-range weather forecasting, i.e., predicting atmospheric variables up to 10 days ahead, NWP-based systems like the IFS are still most accurate. The top deterministic operational system in the world is ECMWF’s High RESolution forecast (HRES), a configuration of IFS which produces global 10-day forecasts at 0.1° latitude/longitude resolution, in around an hour.
[…]
Here we introduce an MLWP approach for global medium-range weather forecasting called “GraphCast,” which produces an accurate 10-day forecast in under a minute on a single Google Cloud TPU v4 device, and supports applications including predicting tropical cyclone tracks, atmospheric rivers, and extreme temperatures.
[…]
A single weather state is represented by a 0.25° latitude/longitude grid
[…]
GraphCast is implemented as a neural network architecture, based on GNNs in an “encode-process-decode” configuration (13, 17), with a total of 36.7 million parameters (code, weights and demos can be found at https://github.com/deepmind/graphcast).
[…]
During model development, we used 39 years (1979–2017) of historical data from ECMWF’s ERA5 (21) reanalysis archive.
[…]
Of the 227 variable and level combinations predicted by GraphCast at each grid point, we evaluated its skill versus HRES on 69 of them, corresponding to the 13 levels of WeatherBench (8) and variables (23) from the ECMWF Scorecard (24)
[…]
We find that GraphCast has greater weather forecasting skill than HRES when evaluated on 10-day forecasts at a horizontal resolution of 0.25° for latitude/longitude and at 13 vertical levels.
[NOTE HRES has a resolution of 0.1°]
[…]
We also compared GraphCast’s performance to the top competing ML-based weather model, Pangu-Weather (16), and found GraphCast outperformed it on 99.2% of the 252 targets they presented (see supplementary materials section 6 for details).
[…]
GraphCast’s forecast skill and efficiency compared to HRES shows MLWP methods are now competitive with traditional weather forecasting methods
[…]
With 36.7 million parameters, GraphCast is a relatively small model by modern ML standards, chosen to keep the memory footprint tractable. And while HRES is released on 0.1° resolution, 137 levels, and up to 1 hour time steps, GraphCast operated on 0.25° latitude-longitude resolution, 37 vertical levels, and 6 hour time steps, because of the ERA5 training data’s native 0.25° resolution, and engineering challenges in fitting higher resolution data on hardware.
[…]
Our approach should not be regarded as a replacement for traditional weather forecasting methods, which have been developed for decades, rigorously tested in many real-world contexts, and offer many features we have not yet explored. Rather our work should be interpreted as evidence that MLWP is able to meet the challenges of real-world forecasting problems and has potential to complement and improve the current best methods.
[…]

Source: Learning skillful medium-range global weather forecasting | Science

Google Witness Spills on Apple’s Cut From Safari Search Revenue

Google pays Apple 36% of its search advertising revenue from Safari, according to new details brought to light in Google’s search antitrust trial on Monday as reported by Bloomberg. The mere utterance of the number, which Google and Apple have tried to keep sealed, caused Google’s main litigator John Schmidtlein to visibly cringe.

“Like the revenue share percentage itself, they are a commercially sensitive part of the financial terms of an agreement currently in effect,” said Google in a filing last week, hoping to keep the true number sealed from the public’s eye.

[…]

It’s well known that Google and Apple share revenue, but not in this much detail. In Pichai’s testimony, he said the search engine has tried to give users a “seamless and easy” experience, even if that meant paying exorbitant fees to do so. Court documents revealed this month show the 20 queries Google makes the most revenue on, including “iPhone,” “Auto insurance,” “Hulu,” and “AARP.”

Source: Google Witness Spills on Apple’s Cut From Safari Search Revenue

Micro-LED Displays @IDTechEx Report

[…]

IDTechEx’s report ‘Micro-LED Displays 2024-2034: Technology, Commercialization, Opportunity, Market and Players’ explores various angles of Micro-LED displays.

[…]

Micro-LED displays are built on the foundation of self-emissive inorganic LEDs acting as subpixels. These LEDs are usually in the micrometer range, with neither package nor substrate, and therefore must be transferred using techniques different from traditional pick-and-place.

The key to Micro-LED’s success lies in its unique value propositions. Not only do these displays offer stunning visual clarity, high luminance, fast refresh rate, low power consumption, high dynamic range, and high contrast, but they also provide transparency, seamless connections, sensor integration, and the promise of an extended lifetime. Such features make Micro-LED a game-changer in the display industry.

While the disruption begins with Micro-LED, it does not end there. These displays not only meet the demands of existing applications but also create entirely new possibilities.

For the former, eight applications are addressed most: augmented/mixed reality (AR/MR), virtual reality (VR), large video displays, TVs and monitors, automotive displays, mobile phones, smartwatches and wearables, tablets, and laptops.

IDTechEx has recently observed a clear trend: most effort is concentrated on only a few applications, such as large video displays/large TVs, smartwatches/wearables, and augmented reality.

When talking about Mini-LED and Micro-LED, LED size is the most common feature used to distinguish the two. Both are based on inorganic LEDs. As the names indicate, Mini-LEDs are considered LEDs in the millimeter range, while Micro-LEDs are in the micrometer range. In reality, though, the distinction is not so strict, and the definition may vary from person to person; it is commonly accepted that Micro-LEDs are under 100 µm, and often under 50 µm, while Mini-LEDs are much larger.

When applied in the display industry, size is just one factor distinguishing Mini-LED and Micro-LED displays. Another is LED thickness and the substrate. Mini-LEDs usually have a large thickness of over 100 µm, largely due to the presence of an LED substrate, while Micro-LEDs are usually substrateless and therefore the finished LEDs are extremely thin.

A third feature used to distinguish the two is the mass transfer technique employed to handle the LEDs. Mini-LEDs usually adopt conventional pick-and-place techniques, including surface-mount technology, which can transfer only a limited number of LEDs at a time. Micro-LED displays typically require millions of LEDs to be moved onto a heterogeneous target substrate, so far more LEDs must be transferred at once, and a disruptive mass transfer technique is needed.

[…]

Source: DailyDOOH » Blog Archive » Micro-LED Displays @IDTechEx Report

New Israeli Law Makes Consuming ‘Terrorist’ Content A Criminal Offense

It’s amazing just how much war and conflict can change a country. On October 7th, Hamas blitzed Israel with an attack that was plainly barbaric. Yes, this is a conflict that has been simmering with occasional flashpoints for decades. No, neither side can even begin to claim it has entirely clean hands as a result of those decades of conflict. We can get the equivocating out of the way. October 7th was different, the worst single day of murder of the Jewish community since the Holocaust. And even in the immediate aftermath, those outside of Israel and those within knew that the attack was going to result in both an immediate reaction from Israel and longstanding changes within its borders. And those of us from America, or those that witnessed how our country reacted to 9/11, knew precisely how much danger this period of change represented.

It’s already started. First, Israel loosened the reins to allow once-blacklisted spyware companies to use their tools to help Israel find the hundreds of hostages Hamas claims to have taken. While that goal is perfectly noble, of course, the willingness to engage with more nefarious tools to achieve that end had begun. And now we learn that Israel’s government has taken the next step in amending its counterterrorism laws to make the consumption of “terrorist” content a criminal offense, punishable with jail time.

The bill, which was approved by a 13-4 majority in the Knesset, is a temporary two-year measure that amends Article 24 of the counterterrorism law to ban the “systematic and continuous consumption of publications of a terrorist organization under circumstances that indicate identification with the terrorist organization”.

It identifies the Palestinian group Hamas and the ISIL (ISIS) group as the “terrorist” organisations to which the offence applies. It grants the justice minister the authority to add more organisations to the list, in agreement with the Ministry of Defence and with the approval of the Knesset’s Constitution, Law, and Justice Committee.

Make no mistake, this is the institution of thought crime. Read those two paragraphs one more time and realize just how much the criminalization of consumption of materials relies on the judgement and interpretation of those enforcing it. What is systematic in terms of this law? What is a publication? What constitutes a “terrorist organization,” not in the case of Hamas and ISIL, but in that ominous bit at the end of the second paragraph, where more organizations can — and will — be added to this list?

And most importantly, how in the world is the Israeli government going to determine “circumstances that indicate identification with the terrorist organization?”

“This law is one of the most intrusive and draconian legislative measures ever passed by the Israeli Knesset since it makes thoughts subject to criminal punishment,” said Adalah, the Legal Centre for Arab Minority Rights in Israel. It warned that the amendment would criminalise “even passive social media use” amid a climate of surveillance and curtailment of free speech targeting Palestinian citizens of Israel.

“This legislation encroaches upon the sacred realm of an individual’s personal thoughts and beliefs and significantly amplifies state surveillance of social media use,” the statement added. Adalah is sending a petition to the Supreme Court to challenge the bill.

This has all the hallmarks of America’s overreaction to the 9/11 attacks. We still haven’t unwound, not even close, all of the harm that was done in the aftermath of those attacks, all in the name of safety. We are still at a net-negative value in terms of our civil liberties due to that overreaction. President Biden even reportedly warned Israel not to ignore our own mistakes, but they’re doing it anyway.

And circling back to the first quotation and the claim that this law is temporary over a 2 year period, that’s just not how this works. If this law is allowed to continue to exist, it will be extended, and then extended again. The United States is still operating under the Authorization for Use of Military Force of 2001 and used it in order to conduct strikes in Somalia under the Biden administration, two decades later.

The right to speech and thought is as bedrock a thing as exists for a democracy. If we accept that premise, then it is simply impossible to “protect a democracy” by limiting the rights of speech and thought. And that’s precisely what this new law in Israel does: it chips away at the democracy of the state in order to protect it.

That’s not how Israel wins this war, if that is in fact the goal.

Source: New Israeli Law Makes Consuming ‘Terrorist’ Content A Criminal Offense | Techdirt

US Navy Uncrewed Submarine Will Launch, Recover Drone That Can Swim, Fly

The U.S. Navy is set to demonstrate the ability of an uncrewed underwater vehicle, or UUV, to launch and recover a smaller drone that can both swim and fly. The service says it wants the two platforms to be able to go through the deployment and retrieval processes autonomously — without any human involvement.

The Office of Naval Research (ONR) announced today that it had hired SubUAS to “develop and demonstrate launch and recovery capabilities of the Naviator from and to a UUV (using a UUV surrogate).” The total value of the contract, which was formally awarded on November 8, is nearly $3.7 million, if all options are exercised.

What ONR is currently referring to as the Subsurface Autonomous Naviator Delivery (SAND) system must be able to launch and recover the Naviator “without a human-in-the-loop,” according to a brief statement about the deal with SubUAS.

[…]

“Naviator is scalable to multiple sizes, with a 16-foot wingspan and 0-90+ lbs payload, and is optimized for a variety of sensors, cameras, and other payloads. Naviator is faster to deploy than existing underwater Remote Operating Vehicles (ROVs), and is also able to reach its target faster via flight,” according to a 2020 U.S. government press release. “It has longer embedded mission capabilities than similarly sized drones, and utilizes precise GPS and visual position hold, as well as power-saving buoy sentry mode. The platform can easily surface, send data, receive new instructions, and begin a new mission.”

The same release also said that Naviator was capable of “tetherless operation with remote pilot control, and the ability to conduct autonomous missions.” SubUAS’s website notes that smaller versions of the drone could be used in swarms.

A rendering from SubUAS showing another Naviator configuration. SubUAS

SubUAS has said in the past that existing Naviator types are capable of reaching underwater speeds of up to 3.5 knots, and could potentially get up to 10 knots depending on their size and configuration. It’s unclear how fast the drone can fly in its aerial mode.

[…]

“Mines are probably the biggest problem for the Navy,” Diez, the professor at Rutgers behind the Naviator design, said back in 2015. “They need to map where mines are. Now there are a lot of false positives. This could be a better technology to rapidly investigate these potential threats.”

A graphic depicting, in very general terms, how a Naviator might help locate mines in its underwater mode, surface to transmit that data back to friendly forces, and then go back down below the waves to continue searching for more threats. SubUAS

In a naval context, “the drones could emerge quickly from the depths, get a quick glimpse of enemy ship deployments, and then hide again,” a news item from Rutgers at that time further noted. “An air-and-water drone could also help engineers inspect underwater structures, such as bridge and dock piers, ship hulls and oil drilling platforms.”

In this role, Naviator could help protect friendly forces by checking the hulls of ships and coastal infrastructure below the waterline for evidence of mines being placed or other signs of hostile infiltration.

A rendering depicting a Naviator drone inspecting underwater oil or natural gas-related infrastructure. SubUAS

Naviators could help with search and rescue missions, too. “For instance, the vehicle could scan the water from above to locate missing swimmers and sailors, and upon spotting shipwreck debris could dip underwater to further examine the scene,” Rutgers’ 2015 news item notes.

There are also various potential civilian scientific research and commercial applications for the Naviator.

For the U.S. Navy, being able to employ Naviators in swarms and deploy them discreetly using UUVs, which themselves could be launched via crewed submarines, opens up additional possibilities and offers additional operational flexibility. For instance, a swarm of Naviators could scour a broader area around the UUV for threats and do so relatively rapidly.

[…]

In 2021, ONR awarded a separate contract to Raytheon to demonstrate its ability to launch versions of its Block 3 Coyote drone configured as loitering munitions, also known as kamikaze drones, from UUVs and uncrewed surface vessels (USV). The same year, the Navy announced its intention to buy 120 unarmed AeroVironment Blackwing submarine-launched drones. American submarines have had a proven ability to launch smaller fixed-wing drones for surveillance for many years now.

The Navy also said just last week it hopes, as part of a program called Razorback, to begin fielding a new UUV that can be launched and recovered using the torpedo tubes on its existing crewed submarines within a year. This follows the cancellation of the Snakehead UUV program last year in part due to that design being too large to fit inside a standard torpedo tube, limiting the options for deployment and retrieval. The Navy has developed other torpedo-tube-launched drones in the past, but these have typically not been readily recoverable by the same means.

Another Navy program, called Orca, is also pushing ahead with the development of a large-displacement UUV that is not intended to be launched or recovered via a torpedo tube. The Navy also has various smaller UUVs in service and in development.

In recent years, the U.S. military has been exploring options for launching aerial drones configured to perform various missions, including in swarms, from a host of other platforms, including ground-based systems, crewed surface warships, traditional fixed-wing aircraft and helicopters, and even high-altitude balloons.

It remains to be seen what will come from the Navy’s new project to launch and recover Naviators from other underwater drones, and do so without the need for direct human involvement. What is clear is that this effort is completely in line with the kind of capabilities the service is pushing to field in the near term.

Source: Uncrewed Submarine Will Launch, Recover Drone That Can Swim, Fly

In a first, cryptographic keys protecting SSH connections stolen in new attack

For the first time, researchers have demonstrated that a large portion of cryptographic keys used to protect data in computer-to-server SSH traffic are vulnerable to complete compromise when naturally occurring computational errors occur while the connection is being established.

Underscoring the importance of their discovery, the researchers used their findings to calculate the private portion of almost 200 unique SSH keys they observed in public Internet scans taken over the past seven years. The researchers suspect keys used in IPsec connections could suffer the same fate. SSH is the cryptographic protocol used in secure shell connections that allows computers to remotely access servers, usually in security-sensitive enterprise environments. IPsec is a protocol used by virtual private networks that route traffic through an encrypted tunnel.

The vulnerability occurs when there are errors during the signature generation that takes place when a client and server are establishing a connection. It affects only keys using the RSA cryptographic algorithm, which the researchers found in roughly a third of the SSH signatures they examined. That translates to roughly 1 billion signatures out of the 3.2 billion signatures examined. Of the roughly 1 billion RSA signatures, about one in a million exposed the private key of the host.

While the percentage is infinitesimally small, the finding is nonetheless surprising for several reasons—most notably because most SSH software in use has deployed a countermeasure for decades that checks for signature faults before sending a signature over the Internet. Another reason for the surprise is that until now, researchers believed that signature faults exposed only RSA keys used in the TLS—or Transport Layer Security—protocol encrypting Web and email connections. They believed SSH traffic was immune from such attacks because passive attackers—meaning adversaries simply observing traffic as it goes by—couldn’t see some of the necessary information when the errors happened.
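The countermeasure mentioned above is simple in principle: before a signature ever leaves the machine, the signer re-verifies it with its own public key and withholds it if the check fails. A minimal sketch of that idea, using a toy RSA key (the function name and tiny primes are illustrative, not from any real SSH implementation):

```python
# Verify-before-send countermeasure, sketched with a toy RSA key.
# Real implementations use 2048-bit keys and padded message hashes.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def sign_checked(m):
    s = pow(m, d, n)              # sign (real code uses faster CRT signing)
    if pow(s, e, n) != m % n:     # re-check with the public key
        raise RuntimeError("fault detected; signature withheld")
    return s

sig = sign_checked(42)
assert pow(sig, e, n) == 42       # only fault-free signatures are released
```

The check costs one extra public-key operation per signature, which is why widely used libraries such as OpenSSH have been able to afford it for decades.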

[…]

The new findings are laid out in a paper published earlier this month titled “Passive SSH Key Compromise via Lattices.” It builds on a series of discoveries spanning more than two decades. In 1996 and 1997, researchers published findings that, taken together, concluded that when naturally occurring computational errors resulted in a single faulty RSA signature, an adversary could use it to compute the private portion of the underlying key pair.

The reason: By comparing the malformed signature with a valid signature, the adversary could perform a GCD—or greatest common divisor—mathematical operation that, in turn, derived one of the prime numbers underpinning the security of the key. This led to a series of attacks that relied on actively triggering glitches during session negotiation, capturing the resulting faulty signature and eventually compromising the key. Triggering the errors relied on techniques such as tampering with a computer’s power supply or shining a laser on a smart card.
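The classic fault attack works because RSA signers typically compute the signature in two halves, one mod p and one mod q (the CRT optimization). If a glitch corrupts only one half, the combined signature is still correct mod the other prime, and a single GCD recovers that prime. A minimal sketch with a toy key (tiny primes chosen purely for illustration; real keys are 2048+ bits):

```python
import math

# Toy RSA key; small primes for illustration only.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
m = 123456789  # the "message" (already hashed and padded in real protocols)

def crt_sign(m, fault=False):
    sp = pow(m, d % (p - 1), p)   # signature half mod p
    sq = pow(m, d % (q - 1), q)   # signature half mod q
    if fault:
        sq ^= 1                   # a single bit flip in the mod-q half
    h = (pow(q, -1, p) * (sp - sq)) % p
    return (sq + h * q) % n       # Garner recombination

good = crt_sign(m)
assert pow(good, e, n) == m       # a correct signature verifies

bad = crt_sign(m, fault=True)
# The faulty signature is still correct mod p, so bad^e - m shares
# the factor p with n, and one GCD reveals it:
recovered = math.gcd(pow(bad, e, n) - m, n)
assert recovered == p             # private key factor recovered
```

This is the actively triggered variant described above; the new SSH result reaches the same endpoint from passively observed, naturally faulty signatures, using lattice techniques to cope with the missing information.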

Then, in 2015, a researcher showed for the first time that attacks on keys used during TLS sessions were possible even when an adversary didn’t have physical access to the computing device. Instead, the attacker could simply connect to the device and opportunistically wait for a signature error to occur on its own. Last year, researchers found that even with countermeasures added to most TLS implementations as long as two decades earlier, they were still able to passively observe faulty signatures that allowed them to compromise the RSA keys of a small population of VPNs, network devices, and websites, most notably Baidu.com, a top-10 Alexa property.

[…]

The attack described in the paper published this month clears the hurdle of missing key material exposed in faulty SSH signatures by harnessing an advanced cryptanalytic technique involving the same mathematics found in lattice-based cryptography. The technique was first described in 2009, but the paper demonstrated only that it was theoretically possible to recover a key using incomplete information in a faulty signature. This month’s paper implements the technique in a real-world attack that uses a naturally occurring corrupted SSH signature to recover the underlying RSA key that generated it.

[…]

The researchers traced the keys they compromised to devices that used custom, closed-source SSH implementations that didn’t implement the countermeasures found in OpenSSH and other widely used open source code libraries. The devices came from four manufacturers: Cisco, Zyxel, Hillstone Networks, and Mocana.

[…]

Once attackers have possession of the secret key through passive observation of traffic, they can mount an active Mallory-in-the-middle attack against the SSH server, in which they use the key to impersonate the server and respond to incoming SSH traffic from clients. From there, the attackers can do things such as recover the client’s login credentials. Similar post-exploit attacks are also possible against IPsec servers if faults expose their private keys.

[…]

A single flip of a bit—in which a 0 residing in a memory chip register turns to 1 or vice versa—is all that’s required to trigger an error that exposes a secret RSA key. Consequently, it’s crucial that the countermeasures that detect and suppress such errors work with near-100 percent accuracy.

[…]

Source: In a first, cryptographic keys protecting SSH connections stolen in new attack | Ars Technica