The Unforeseen Consequences of Artificial Intelligence (AI) on Society: A Systematic Review of Regulatory Gaps Generated by AI in the U.S. | RAND

AI’s growing catalog of applications and methods has the potential to profoundly affect public policy by generating instances where regulations are not adequate to confront the issues faced by society, also known as regulatory gaps.

The objective of this dissertation is to improve our understanding of how AI influences U.S. public policy. It systematically explores, for the first time, the role of AI in the generation of regulatory gaps. Specifically, it addresses two research questions:

  1. What U.S. regulatory gaps exist due to AI methods and applications?
  2. When looking across all of the gaps identified in the first research question, what trends and insights emerge that can help stakeholders plan for the future?

These questions are answered through a systematic review of four academic databases of literature in the hard and social sciences. Its implementation was guided by a protocol that initially identified 5,240 candidate articles. A screening process reduced this sample to 241 articles (published between 1976 and February of 2018) relevant to answering the research questions.

This dissertation contributes to the literature by adapting the work of Bennett-Moses and Calo to effectively characterize regulatory gaps caused by AI in the U.S. In addition, it finds that most gaps do not require new regulation or the creation of governance frameworks for their resolution, that most are found at the federal and state levels of government, and that AI applications are recognized as their cause more often than AI methods.

Source: The Unforeseen Consequences of Artificial Intelligence (AI) on Society: A Systematic Review of Regulatory Gaps Generated by AI in the U.S. | RAND

A Facebook Account Will Be Mandatory for Oculus Devices

It’s official. Starting this October, a Facebook account will be mandatory for all future Oculus headsets. While there’ll be a grace period for anyone with a separate Oculus account, Facebook will end support for those accounts on January 1, 2023.

The decision was announced today on both Oculus’s Twitter and in a press release. The gist of it is that anyone who is new to an Oculus device after October must log in with a Facebook account. At that time, existing Oculus users will have the option of merging their Facebook and Oculus accounts. Anyone who doesn’t merge will have two years before their Oculus accounts are kaput. The devices will technically still work, but “full functionality will require a Facebook account.”

Notably, all future, unreleased Oculus devices will also require a Facebook account, regardless of whether you already have an Oculus account. This is perhaps a reference to the rumored successor to the Oculus Quest, which leaks suggest may launch as early as September 15.

What about things you already purchased on your Oculus account? Well, Facebook says it will “take steps” to allow folks to keep the things they’ve already bought but it “expect[s] some games and apps may no longer work,” hinting that developers may decide to include features that require a Facebook account or just stop supporting the app or game in question.

As you might imagine, the replies to Oculus’s announcement on Twitter are less than kind. In a few instances, users cried foul, pointing to a promise from founder Palmer Luckey when Facebook acquired Oculus that people wouldn’t need to log into Facebook when they wanted to use the Oculus Rift. While the move is painted as a means of streamlining the VR experience by “giving people a single way to log in,” it’s also a blatant attempt at forcing people onto Facebook’s platform so it can get your sweet, sweet data.

This has been coming for some time. Last year, the Oculus platform got a boatload of social features that no one asked for. Those features required a Facebook login to work and introduced an element of data harvesting for targeted ads.

[…]

Source: A Facebook Account Will Be Mandatory for Future Oculus Devices

AI Company Leaks Over 2.5M Medical Records

A security researcher has detailed how an artificial intelligence company in possession of nearly 2.6 million medical records allowed them to be publicly visible on the internet. It’s a clear reminder that our personal health data is not safe.

As Secure Thoughts reports, on July 7 security researcher Jeremiah Fowler discovered two folders of medical records available for anyone to access on the internet. The data was labeled as “staging data” and hosted by artificial intelligence company Cense AI, which specializes in “SaaS-based intelligent process automation management solutions.” Fowler believes the data was made public because Cense AI was temporarily hosting it online before loading it into the company’s management system or an AI bot.

The medical records are quite detailed and include names, insurance records, medical diagnosis notes, and payment records. It looks as though the data was sourced from insurance companies and relates to car accident claims and referrals for neck and spine injuries. The majority of the personal information is thought to be for individuals located in New York, with a total of 2,594,261 records exposed.

[…]

Source: Report: AI Company Leaks Over 2.5M Medical Records | PCMag

Researchers Can Duplicate Keys from the Sounds They Make in Locks

Researchers have demonstrated that they can make a working 3D-printed copy of a key just by listening to how the key sounds when inserted into a lock. And you don’t need a fancy mic — a smartphone or smart doorbell will do nicely if you can get it close enough to the lock.

Key Audio Lockpicking

The next time you unlock your front door, it might be worth trying to insert your key as quietly as possible; researchers have discovered that the sound of your key being inserted into the lock gives attackers all they need to make a working copy of your front door key.

It sounds unlikely, but security researchers say they have proven that the series of audible, metallic clicks made as a key penetrates a lock can now be deciphered by signal processing software to reveal the precise shape of the sequence of ridges on the key’s shaft. Knowing this (the actual cut of your key), a working copy of it can then be three-dimensionally (3D) printed.

How Soundarya Ramesh and her team accomplished this is a fascinating read.

Once they have a key-insertion audio file, SpiKey’s inference software gets to work filtering the signal to reveal the strong, metallic clicks as key ridges hit the lock’s pins [and you can hear those filtered clicks online here]. These clicks are vital to the inference analysis: the time between them allows the SpiKey software to compute the key’s inter-ridge distances and what locksmiths call the “bitting depth” of those ridges: basically, how deeply they cut into the key shaft, or where they plateau out. If a key is inserted at a nonconstant speed, the analysis can be ruined, but the software can compensate for small speed variations.

The result of all this is that SpiKey software outputs the three most likely key designs that will fit the lock used in the audio file, reducing the potential search space from 330,000 keys to just three. “Given that the profile of the key is publicly available for commonly used [pin-tumbler lock] keys, we can 3D-print the keys for the inferred bitting codes, one of which will unlock the door,” says Ramesh.
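The timing analysis described above can be sketched in a few lines. The code below is a hypothetical illustration of the idea, not SpiKey's actual software: it assumes a perfectly constant insertion speed and a nominal pin spacing, and every function name and number in it is invented for the example.

```python
def inter_ridge_distances(click_times_s, insertion_speed_mm_s):
    """Turn the intervals between audible clicks into physical
    distances travelled along the key (distance = time * speed)."""
    return [
        (t2 - t1) * insertion_speed_mm_s
        for t1, t2 in zip(click_times_s, click_times_s[1:])
    ]

def candidate_ridge_positions(distances_mm, pin_spacing_mm=3.97):
    """Snap each measured distance to the nearest multiple of the
    lock's nominal pin spacing, giving integer ridge positions."""
    return [round(d / pin_spacing_mm) for d in distances_mm]
```

For instance, clicks at 0.0 s, 0.08 s, and 0.155 s with an assumed speed of 50 mm/s yield distances of roughly 4.0 mm and 3.75 mm, each of which snaps to one pin spacing. The real software additionally compensates for small variations in insertion speed and ranks complete bitting codes, which is how it narrows 330,000 possibilities down to three.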

Source: Researchers Can Duplicate Keys from the Sounds They Make in Locks

Securus sued for ‘recording attorney-client jail calls, handing them to cops’ – months after settling similar lawsuit and charging more than 100x normal price for the calls. Hey, monopolies!

Jail phone telco Securus provided recordings of protected attorney-client conversations to cops and prosecutors, it is claimed, just three months after it settled a near-identical lawsuit.

The corporate giant controls all telecommunications between the outside world and prisoners in American jails that contract with it. While doing so, it charges far above market rate, often more than 100 times the normal price.

It has now been sued by three defense lawyers in Maine, who accuse the corporation of recording hundreds of conversations between them and their clients – something that is illegal in the US state. It then supplied those recordings to jail administrators and officers of the law, the attorneys allege.

Though police officers can request copies of convicts’ calls to investigate crimes, the cops aren’t supposed to get attorney-client-privileged conversations. In fact, these chats shouldn’t be recorded in the first place. Yet, it is claimed, Securus not only made and retained copies of these sensitive calls, it handed them to investigators and prosecutors.

“Securus failed to screen out attorney-client privileged calls, and then illegally intercepted these calls and distributed them to jail administrators who are often law enforcers,” the lawsuit [PDF] alleged. “In some cases the recordings have been shared with district attorneys.”

The lawsuit claims that over 800 calls covering 150 inmates and 30 law firms have been illegally recorded in the past 12 months, and it provides a (redacted) spreadsheet of all relevant calls.

[…]

Amazingly, this is not the first time Securus has been accused of this same sort of behavior. Just three months ago, in May this year, the company settled a similar class-action lawsuit, this time covering jails in California.

That time, two former prisoners and a criminal defense attorney sued Securus after it recorded more than 14,000 legally protected conversations between inmates and their legal eagles. Those recordings only came to light after someone hacked the corp’s network and found some 70 million stored conversations, which were subsequently leaked to journalists.

[…]

Securus has repeatedly come under fire for similar ethical and technological failings. It was at the center of a huge row after it was revealed to be selling the location data of people’s phones to police through a web portal.

The telecoms giant was also criticized for charging huge rates for video calls, between $5.95 and $7.99 for a 20-minute call, at a jail where the warden banned in-person visits but still required relatives to travel to the jail and sit in a trailer in the prison’s parking lot to talk to their loved ones through a screen.

Securus is privately held so it doesn’t make its financial figures public. A leak in 2014 revealed that it made a $115m profit on $405m in revenue for that year.

Source: Securus sued for ‘recording attorney-client jail calls, handing them to cops’ – months after settling similar lawsuit • The Register

Android 11 is taking away the camera picker, forcing people to only use the built-in camera

Android may have started with the mantra that developers are allowed to do anything as long as they can code it, but things have changed over the years as security and privacy became higher priorities. Every major update over the last decade has shuttered features or added restrictions in the name of protecting users, but some sacrifices may not have been entirely necessary. Another Android 11 trade-off has emerged: this time, users lose the ability to select a third-party camera app to take pictures or videos on behalf of other apps, and must rely solely on the built-in camera app.

At the heart of this change is one of the defining traits of Android: the Intent system. Let’s say you need to take a picture of a novelty coffee mug to sell through an auction app. Since the auction app wasn’t built for photography, the developer chose to leave that up to a proper camera app. This is where the Intent system comes into play. Developers simply create a request with a few criteria, and Android will prompt users to pick from a list of installed apps to do the job.

Camera picker on Android 10.

However, things are going to change with Android 11 for apps that ask for photos or videos. Three specific intents will cease to work like they used to: VIDEO_CAPTURE, IMAGE_CAPTURE, and IMAGE_CAPTURE_SECURE. Android 11 will now automatically provide the pre-installed camera app to perform these actions without ever searching for other apps to fill the role.

Starting in Android 11, only pre-installed system camera apps can respond to the following intent actions: VIDEO_CAPTURE, IMAGE_CAPTURE, and IMAGE_CAPTURE_SECURE.

If more than one pre-installed system camera app is available, the system presents a dialog for the user to select an app. If you want your app to use a specific third-party camera app to capture images or videos on its behalf, you can make these intents explicit by setting a package name or component for the intent.

Google describes the change in a list of new behaviors in Android 11, and further confirmed it in the Issue Tracker. Privacy and security are cited as the reason, but there’s no discussion about what exactly made those intents dangerous. Perhaps some users were tricked into setting a malicious camera app as the default and then using it to capture things that should have remained private.

“… we believe it’s the right trade-off to protect the privacy and security of our users.” — Google Issue Tracker.

Not only does Android 11 take the liberty of automatically launching the pre-installed camera app when requested, it also prevents app developers from conveniently providing their own interface to simulate the same functionality. I ran a test with some simple code to query for the camera apps on a phone, then ran it on devices running Android 10 and 11 with the same set of camera apps installed. Android 10 gave back a full set of apps, but Android 11 reported nothing, not even Google’s own pre-installed Camera app.

Above: Debugger view on Android 10. Below: Same view on Android 11.

As Mark Murphy of CommonsWare points out, Google does prescribe a workaround for developers, although it’s not very useful. The documentation advises explicitly checking for installed camera apps by their package names — meaning developers would have to pick preferred apps up front — and sending users to those apps directly. Of course, there are other ways to get options without identifying all package names, like getting a list of all apps and then manually searching for intent filters, but this seems like an over-complication.
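Stripped of Android specifics, the logic of that prescribed workaround amounts to probing a hard-coded preference list and falling back to the system camera. The sketch below expresses it in plain Python purely for illustration (on Android this would be an explicit Intent with a package name set); every package name here is hypothetical.

```python
# Hypothetical preference list a developer would have to maintain up front,
# since Android 11 no longer lets the app discover camera apps itself.
PREFERRED_CAMERAS = ["com.example.cameraplus", "com.example.opencam"]

def pick_camera_package(installed_packages,
                        preferred=PREFERRED_CAMERAS,
                        system_camera="com.android.camera"):
    """Return the package a capture request should explicitly target:
    the first preferred app that is installed, else the system camera."""
    for pkg in preferred:
        if pkg in installed_packages:
            return pkg
    return system_camera
```

The obvious weakness is the one the article points out: any camera app that isn't on the developer's hard-coded list is simply invisible, no matter which app the user actually prefers.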

Source: Android 11 is taking away the camera picker, forcing people to only use the built-in camera

Transparent solar panels for windows hit record 8% efficiency

In a step closer to skyscrapers that serve as power sources, a team led by University of Michigan researchers has set a new efficiency record for color-neutral, transparent solar cells.

The team achieved 8.1% efficiency and 43.3% transparency with an organic, or carbon-based, design rather than conventional silicon. While the cells have a slight green tint, they are much more like the gray of sunglasses and automobile windows.

“Windows, which are on the face of every building, are an ideal location for organic solar cells because they offer something silicon can’t, which is a combination of very high efficiency and very high visible transparency,” said Stephen Forrest, the Peter A. Franken Distinguished University Professor of Engineering and Paul G. Goebel Professor of Engineering, who led the research.

Yongxi Li holds up vials containing the polymers used to make the transparent solar cells. Image credit: Robert Coelius, Michigan Engineering Communications & Marketing


Buildings with glass facades typically have a coating on them that reflects and absorbs some of the light, both in the visible and infrared parts of the spectrum, to reduce the brightness and heating inside the building. Rather than throwing that energy away, transparent solar panels could use it to take a bite out of the building’s electricity needs. The transparency of some existing windows is similar to the transparency of the solar cells Forrest’s group reports in the journal Proceedings of the National Academy of Sciences.

[…]

The color-neutral version of the device was made with an indium tin oxide electrode. A silver electrode improved the efficiency to 10.8%, with 45.8% transparency. However, that version’s slightly greenish tint may not be acceptable in some window applications.

Transparent solar cells are measured by their light utilization efficiency, which describes how much energy from the light hitting the window is available either as electricity or as transmitted light on the interior side. Previous transparent solar cells have light utilization efficiencies of roughly 2-3%, but the indium tin oxide cell is rated at 3.5% and the silver version has a light utilization efficiency of 5%.
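The light-utilization numbers quoted above are consistent with simply multiplying conversion efficiency by transparency. The snippet below is a back-of-envelope check under that assumption (the formal definition uses average visible transmittance, which we take the quoted transparency figures to approximate):

```python
def light_utilization_efficiency(pce, transparency):
    """Rough LUE: power conversion efficiency times visible transparency."""
    return pce * transparency

# Figures from the article: (efficiency, transparency) for each electrode.
ito_lue = light_utilization_efficiency(0.081, 0.433)     # indium tin oxide
silver_lue = light_utilization_efficiency(0.108, 0.458)  # silver
```

This gives about 3.5% for the indium tin oxide cell, matching the quoted figure, and about 4.9% for the silver version, close to the quoted 5%.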

Both versions can be manufactured at large scale, using materials that are less toxic than those in other transparent solar cells. The transparent organic solar cells can also be customized for local latitudes, taking advantage of the fact that they are most efficient when the sun’s rays hit them at a perpendicular angle. They can be placed between the panes of double-glazed windows.

Source: Transparent solar panels for windows hit record 8% efficiency | University of Michigan News

Trusting OpenPGP and S/Mime with your email secrets? You might want to rethink that

Boffins testing the security of OpenPGP and S/MIME, two end-to-end encryption schemes for email, recently found multiple vulnerabilities in the way email client software deals with certificates and key exchange mechanisms.

They found that five out of 18 OpenPGP-capable email clients and six out of 18 S/MIME-capable clients are vulnerable to at least one attack.

These flaws are not due to cryptographic weaknesses. Rather, they arise from the complexity of email infrastructure, which is built on dozens of standards documents that have accumulated over time, and from the impact that complexity has had on the way affected email clients handle certificates and digital signatures.

In a paper [PDF] titled “Mailto: Me Your Secrets. On Bugs and Features in Email End-to-End Encryption,” presented earlier this summer at the virtual IEEE Conference on Communications and Network Security, Jens Müller, Marcus Brinkmann, and Joerg Schwenk (Ruhr University Bochum, Germany) and Damian Poddebniak and Sebastian Schinzel (Münster University of Applied Sciences, Germany) reveal how they were able to conduct key replacement, MITM decryption, and key exfiltration attacks on various email clients.

“We show practical attacks against both encryption schemes in the context of email,” the paper explains.

“First, we present a design flaw in the key update mechanism, allowing a third party to deploy a new key to the communication partners. Second, we show how email clients can be tricked into acting as an oracle for decryption or signing by exploiting their functionality to auto-save drafts. Third, we demonstrate how to exfiltrate the private key, based on proprietary mailto parameters implemented by various email clients.”

This is not the sort of thing anyone trying to communicate securely over email wants because it means encrypted messages may be readable by an attacker and credentials could be stolen.

Müller offered a visual demonstration via Twitter on Tuesday.

The research led to CVEs for GNOME Evolution (CVE-2020-11879), KDE KMail (CVE-2020-11880), and IBM/HCL Notes (CVE-2020-4089). There are two more CVEs (CVE-2020-12618 and CVE-2020-12619) that haven’t been made public.

According to Müller, affected vendors were notified of the vulnerabilities in February.

Pegasus Mail is said to be affected though it doesn’t have a designated CVE – it may be that one of the unidentified CVEs applies here.

Thunderbird versions 52 and 60 for Debian/Kali Linux were affected but more recent versions are supposed to be immune since the email client’s developers fixed the applicable flaw last year. It allowed a website to present a link with the "mailto?attach=..." parameter to force Thunderbird to attach local files, like an SSH private key, to an outgoing message.

However, those who have installed the xdg-utils package, a set of utility scripts that provide a way to launch an email application in response to a mailto: link, appear to have reactivated this particular bug, which has yet to be fixed in xdg-utils.

Source: Trusting OpenPGP and S/Mime with your email secrets? You might want to rethink that • The Register