Android users in Singapore to be blocked from installing apps from 3rd parties

SINGAPORE – Android users here will be blocked from installing apps from unverified sources, a process called sideloading, as part of a new trial by Google to crack down on malware scams.

The security tool will work in the background to detect apps that demand suspicious permissions, like those that grant the ability to spy on screen content or read SMS messages, which scammers have been known to abuse to intercept one-time passwords.
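
Google has not published Play Protect’s actual detection logic, but conceptually the check resembles inspecting an APK’s declared permissions before installation. Here is a minimal Kotlin sketch of that idea, flagging the two patterns mentioned above; the looksSuspicious helper and its heuristics are our own illustration, not Google’s code:

```kotlin
import android.content.pm.PackageManager

// Illustrative heuristic only – Play Protect's real logic is not public.
// Flags an APK that requests SMS access or declares an accessibility
// service, the two permission patterns the article mentions.
fun looksSuspicious(pm: PackageManager, apkPath: String): Boolean {
    val flags = PackageManager.GET_PERMISSIONS or PackageManager.GET_SERVICES
    val info = pm.getPackageArchiveInfo(apkPath, flags) ?: return false

    // Permissions scammers abuse to intercept one-time passwords sent by SMS.
    val smsPermissions = setOf(
        "android.permission.READ_SMS",
        "android.permission.RECEIVE_SMS",
    )
    val requestsSms = info.requestedPermissions.orEmpty().any { it in smsPermissions }

    // Accessibility services can observe screen content; an app declares them
    // as services guarded by BIND_ACCESSIBILITY_SERVICE rather than requesting
    // a normal permission.
    val declaresAccessibility = info.services.orEmpty().any {
        it.permission == "android.permission.BIND_ACCESSIBILITY_SERVICE"
    }
    return requestsSms || declaresAccessibility
}
```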

Singapore is the first country where the security feature will be gradually rolled out over the next few weeks, in collaboration with the Cyber Security Agency of Singapore, according to a Feb 7 statement by Google, which develops the Android software.

The update will progressively arrive on all Android users’ devices and will be enabled by default through Google Play Protect, said Google’s director of Android security strategy, Eugene Liderman, in reply to questions from The Straits Times.

Users who are blocked from downloading a suspicious app will be notified with an explanation.

Users cannot deactivate the pilot feature without disabling all of Google Play Protect, said Mr Liderman, adding that turning off the service, which scans Android devices for harmful behaviour such as suspicious apps, is not recommended for safety reasons.

[…]

The update, which will be automatically activated, will roll out to all Android devices here that run Google Play services, starting with a small number of users to assess the effectiveness of the tool, he said.

Sideloaded apps range from apps used by overseas businesses that do not use the Google ecosystem to device customisation tools and free versions of paid apps.

[…]

The feature marks Google’s most heavy-handed attempt yet to stamp out malicious sideloaded apps.

[…]

Samsung, whose Galaxy devices run Android, also launched Auto Blocker in November for users on the One UI 6 software. The tool, which has to be activated in the settings menu, bars sideloaded apps from unverified sources.

Source: Android users in S’pore to be blocked from installing unverified apps as part of anti-scam trial | The Straits Times

So basically they are citing user safety to limit what you can do on your phone and to entrench their marketplace monopoly, something both Apple and Google have been slammed for explicitly in the EU and US as part of antitrust lawsuits, which they have lost.

Of course, Google Play Protect is itself spyware: everything it scans (which is your whole phone) is sent to Google, with no opt-out. So you can decide to stop this insanity by disabling the Google spyware.

The EU wants to criminalize AI-generated deepfakes and the non-consensual sending of intimate images

[…] the European Council and Parliament have agreed with the proposal to criminalize, among other things, different types of cyber-violence. The proposed rules will criminalize the non-consensual sharing of intimate images, including deepfakes made by AI tools, which could help deter revenge porn. Cyber-stalking, online harassment, misogynous hate speech and “cyber-flashing,” or the sending of unsolicited nudes, will also be recognized as criminal offenses.

The commission says that having a directive for the whole European Union that specifically addresses those particular acts will help victims in Member States that haven’t criminalized them yet. “This is an urgent issue to address, given the exponential spread and dramatic impact of violence online,” it wrote in its announcement.

[…]

In its reporting, Politico suggested that the recent spread of pornographic deepfake images using Taylor Swift’s face prompted EU officials to move forward with the proposal.

[…]

“The final law is also pending adoption in Council and European Parliament,” the EU Council said. According to Politico, if all goes well and the bill becomes law soon, EU states will have until 2027 to enforce the new rules.

Source: The EU wants to criminalize AI-generated porn images and deepfakes

The original article has a seriously misleading title, I guess for clickbait.

COPD: Inhalable nanoparticles could help treat chronic lung disease

Delivering medication to the lungs with inhalable nanoparticles may help treat chronic obstructive pulmonary disease (COPD). In mice with signs of the condition, the treatment improved lung function and reduced inflammation.

COPD causes the lungs’ airways to become progressively narrower and more rigid, obstructing airflow and preventing the clearance of mucus. As a result, mucus accumulates in the lungs, attracting bacterial pathogens that further exacerbate the disease.

This thick mucus layer also traps medications, making it challenging to treat infections. So, Junliang Zhu at Soochow University in China and his colleagues developed inhalable nanoparticles capable of penetrating mucus to deliver medicine deep within the lungs.

The researchers constructed the hollow nanoparticles from porous silica, which they filled with an antibiotic called ceftazidime. A shell of negatively charged compounds surrounding the nanoparticles blocked off pores, preventing antibiotic leakage. This negative charge also helps the nanoparticles penetrate mucus. Then, the slight acidity of the mucus transforms the shells’ charge from negative to positive, opening up pores and releasing the medication.

The researchers used an inhalable spray containing the nanoparticles to treat a bacterial lung infection in six mice with signs of COPD. An equal number of animals received only the antibiotic.

On average, mice treated with the nanoparticles had about 98 per cent fewer pathogenic bacteria in their lungs than those given just the antibiotic. They also had fewer inflammatory molecules in their lungs and lower carbon dioxide levels in their blood, indicating better lung function.

These findings suggest the nanoparticles could improve drug delivery in people with COPD or other lung conditions like cystic fibrosis where thick mucus makes it difficult to treat infections, says Vincent Rotello at the University of Massachusetts Amherst, who wasn’t involved in the study. However, it is unclear whether these nanoparticles are cleared by the lungs. “If you have a delivery system that builds up over time, that would be problematic,” he says.

Source: COPD: Inhalable nanoparticles could help treat chronic lung disease | New Scientist

OpenAI latest to add ‘Made by AI’ metadata to model output

Images emitted by OpenAI’s generative models will include metadata disclosing their origin, which in turn can be used by applications to alert people to the machine-made nature of that content.

Specifically, the Microsoft-championed super lab is, as expected, adopting the Content Credentials specification, which was devised by the Coalition for Content Provenance and Authenticity (C2PA), an industry body backed by Adobe, Arm, Microsoft, Intel, and more.

Content Credentials is pretty simple, and the specification is publicly available: it uses standard data formats to store, within media files, details about who made the material and how. This metadata isn’t directly visible to the user and is cryptographically protected so that any unauthorized changes are obvious.
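
To make that concrete at the byte level: for PNGs, the C2PA spec embeds the signed manifest (a JUMBF box) in a dedicated chunk of type caBX. Below is a minimal Kotlin sketch, assuming a well-formed PNG, that merely detects whether such a chunk is present; actually reading and verifying the manifest calls for a real C2PA implementation, such as the open-source c2patool.

```kotlin
import java.io.DataInputStream
import java.io.FileInputStream

// Walk a well-formed PNG's chunk list and report whether a caBX chunk is
// present – the chunk type the C2PA spec uses to embed Content Credentials
// in PNG files. Detection only: verifying the signed manifest inside it
// requires a proper C2PA library.
fun hasContentCredentials(path: String): Boolean {
    DataInputStream(FileInputStream(path)).use { input ->
        input.skipNBytes(8)  // skip the 8-byte PNG signature
        var type = ""
        while (type != "IEND") {                   // IEND is always the last chunk
            val length = input.readInt()           // chunk data length, big-endian
            val typeBytes = ByteArray(4)
            input.readFully(typeBytes)
            type = String(typeBytes, Charsets.US_ASCII)
            if (type == "caBX") return true        // C2PA JUMBF box found
            input.skipNBytes(length.toLong() + 4)  // skip chunk data + 4-byte CRC
        }
    }
    return false  // reached IEND without seeing Content Credentials
}
```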

Applications that support this metadata, when they detect it in a file’s contents, are expected to display a little “cr” logo over the content to indicate there is Content Credentials information present in that file. Clicking on that logo should open up a pop-up containing that information, including any disclosures that the stuff was made by AI.

How the C2PA ‘cr’ logo might appear on an OpenAI-generated image in a supporting app. Source: OpenAI

The idea here is that it should be immediately obvious to people viewing or editing stuff in supporting applications – from image editors to web browsers, ideally – whether or not the content on screen is AI-made.

[…]

The Content Credentials strategy isn’t foolproof, as we’ve previously reported. The metadata can easily be stripped out, files can be exported without it, or the “cr” logo can be cropped out of screenshots, after which no “cr” badge will appear on the material in any application. It also relies on apps and services to support the specification, whether they are creating or displaying media.

To work at scale and gain adoption, it also needs some kind of cloud system that can be used to restore removed metadata, which Adobe happens to be pushing, as well as a marketing campaign to spread brand awareness. Increase its brandwidth, if you will.

[…]

In terms of file-size impact, OpenAI insisted that a 3.1MB PNG file generated by its DALL-E API grows by about three percent (or about 90KB) when including the metadata.

[…]

Source: OpenAI latest to add ‘Made by AI’ metadata to model output • The Register

It’s a decent enough idea, a bit like an artist signing their work. Hopefully it won’t look as damn ugly as in the example, and each AI will get its own little logo.

Deep Abandoned Mine In Finland To Be Turned Into A Giant Gravity Battery

[…]

The idea behind gravity batteries is really simple. During times when energy sources are producing more energy than demand, the excess is used to move weights (in the form of water, or sometimes sand) upwards, turning it into potential energy. When the power supply runs low, these weights can then be released, powering turbines as our good friend (and deadly enemy) gravity sends them back towards the Earth.

Though gravity batteries generally take the form of reservoirs, abandoned mines that hoist sand or other weights upwards when excess power is being produced have also been suggested. Scottish company Gravitricity created a system of winches and hoists that can be installed in such disused mineshafts. The company will install the system in the 1,400-meter-deep (4,600-foot) zinc and copper mine in Pyhäjärvi, Finland.
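
For a sense of scale, the physics is plain gravitational potential energy, E = mgh. A quick Kotlin sketch using the article’s 1,400-meter depth and an illustrative 25-tonne weight (our assumption for illustration, not a quoted spec for the Pyhäjärvi installation):

```kotlin
// Back-of-the-envelope storage estimate for one weight: E = m * g * h.
fun main() {
    val massKg = 25_000.0   // illustrative 25-tonne weight (assumed, not from the article)
    val g = 9.81            // gravitational acceleration, m/s^2
    val depthM = 1_400.0    // shaft depth quoted in the article
    val joules = massKg * g * depthM
    val kWh = joules / 3.6e6  // 1 kWh = 3.6 MJ
    println("Stored energy: %.0f MJ (about %.0f kWh)".format(joules / 1e6, kWh))
}
```

That works out to roughly 343 MJ, or about 95 kWh per weight before conversion losses, which is why such systems rely on multiple heavy weights and are pitched at rapid grid balancing rather than bulk storage.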

[…]

Source: Deep Abandoned Mine In Finland To Be Turned Into A Giant Gravity Battery | IFLScience