Millions of mobile phones come pre-infected with malware

Miscreants have infected millions of Androids worldwide with malicious firmware before the devices even shipped from their factories, according to Trend Micro researchers at Black Hat Asia.

This hardware is mainly cheapo Android mobile devices, though smartwatches, TVs, and other things are caught up in it.

The gadgets have their manufacturing outsourced to an original equipment manufacturer (OEM). That outsourcing makes it possible for someone in the manufacturing pipeline – such as a firmware supplier – to infect products with malicious code as they ship out, the researchers said.

This has been going on for a while, we think; for example, we wrote about a similar headache in 2017. The Trend Micro folks characterized the threat today as “a growing problem for regular users and enterprises.” So, consider this a reminder and a heads-up all in one.

[…]

This insertion of malware began as the price of mobile phone firmware dropped, we’re told. Competition between firmware distributors became so furious that eventually the providers could not charge money for their product.

“But of course there’s no free stuff,” said Yarochkin, who explained that, as a result of this cut-throat situation, firmware started to come with an undesirable feature – silent plugins. The team analyzed dozens of firmware images looking for malicious software. They found over 80 different plugins, although many of those were not widely distributed.

The most impactful plugins were those that had a business model built around them, were sold on the underground, and were marketed in the open on places like Facebook, blogs, and YouTube.

The objective of the malware is to steal information, or to make money from the data it collects or delivers.

The malware turns the devices into proxies that are used to steal and sell SMS messages, take over social media and online messaging accounts, and serve as monetization channels via adverts and click fraud.

One type of plugin, the proxy plugin, allows criminals to rent out devices for up to around five minutes at a time. For example, those renting control of a device could acquire data on keystrokes, geographical location, IP address and more.

[…]

Through telemetry data, the researchers estimated that at least millions of infected devices exist globally, concentrated in Southeast Asia and Eastern Europe. A figure self-reported by the criminals themselves, the researchers said, was around 8.9 million.

As for where the threats are coming from, the duo wouldn’t say specifically, although the word “China” showed up multiple times in the presentation, including in an origin story related to the development of the dodgy firmware. Yarochkin said the audience should consider where most of the world’s OEMs are located and make their own deductions.

“Even though we possibly might know the people who build the infrastructure for this business, it’s difficult to pinpoint how exactly this infection gets put into this mobile phone, because we don’t know for sure at what moment it got into the supply chain,” said Yarochkin.

The team confirmed the malware was found in the phones of at least 10 vendors, with possibly around 40 more affected. Those seeking to avoid infected mobile phones can go some way toward protecting themselves by buying high-end devices.

[…]

“Big brands like Samsung, like Google took care of their supply chain security relatively well, but for threat actors, this is still a very lucrative market,” said Yarochkin.

Source: Millions of mobile phones come pre-infected with malware • The Register

Black Hat presentation: Behind the Scenes: How Criminal Enterprises Pre-infect Millions of Mobile Devices

HP disables customers’ printers if they use ink cartridges from cheaper rivals

Hewlett-Packard, or HP, has sparked fury after issuing a recent “firmware” update which blocks customers from using cheaper, non-HP ink cartridges in its printers.

Customers’ devices were remotely updated in line with new terms which mean their printers will not work unless they are fitted with approved ink cartridges.

It prevents customers from using any cartridges other than those fitted with an HP chip, which are often more expensive. If the customer tries to use a non-HP ink cartridge, the printer will refuse to print.

HP printers used to display a warning when a “third-party” ink cartridge was inserted, but now printers will simply refuse to print altogether.

[…]

This is not the first time HP has angered its customers by blocking the use of other ink cartridges.

The firm has been forced to pay out millions in compensation to customers in America, Australia and across Europe since it first introduced dynamic security measures back in 2016.

Just last year the company paid $1.35m (£1m) to consumers in Belgium, Italy, Spain and Portugal who had bought printers not knowing they were equipped with the cartridge-blocking feature.

Last year consumer advocates called on the Competition and Markets Authority to investigate whether branded ink costs and “dynamic security” measures were fair to consumers, after finding that lesser-known brands of ink cartridges offered better value for money than major names.

The consumer group Which? said manufacturers were “actively blocking customers from exerting their right to choose the cheapest ink and therefore get a better deal”.

[…]

Source: HP disables customers’ printers if they use ink cartridges from cheaper rivals

That’s because the printer is not what they are selling you, it’s the stupidly overpriced ink. So no, you don’t own what you bought, they are saying.

European Media Freedom Act is a free pass to spread fake news, directly goes against DSA

“Disinformation is a threat to our democracies” is a statement with which virtually every political group in the European Parliament agrees. Many political statements have been made on the subject calling for more to be done to counter disinformation, especially since the Russian attack on Ukraine.

As part of that effort, the EU recently adopted the Digital Services Act (DSA), the legislation that many hope will provide the necessary regulatory framework to – at least partially – tackle the disinformation challenge. Unfortunately, there is a danger we might end up not seeing the positive results that the DSA promises to bring.

There are attempts to undermine the DSA with exemptions for media content in the European Media Freedom Act (EMFA), currently on the EU legislators’ table. This contains a measure which would effectively reverse the DSA provisions and prevent platforms like Twitter and Facebook from moderating content coming from anyone claiming to be a ‘media’. A very bad idea that was already, after much debate, rejected in the DSA.

Let’s see how this would work in practice. If any self-declared media writes that “The European Parliament partners with Bill Gates and George Soros to insert 5G surveillance chips into vaccines”, and this article is published on Twitter or Facebook, for instance, the platforms will first have to contact the media. They would then wait for 24 or 48 hours before possibly adding a fact-check, or would not be able to do so at all if some of the most recent amendments go through.

Those who have encountered such disinformation know that the first 24 hours are critical. As the old adage goes, “A lie gets halfway around the world before the truth puts on its boots”. Enabling such a back-and-forth exchange will only benefit the spread of disinformation, which can be further amplified by algorithms and become almost unstoppable.

Many journalists and fact-checkers have complained in the past that platforms were not doing enough to reduce the visibility of such viral disinformation. The Commission itself mentions that “Global online platforms act as gateways to media content, with business models that tend to disintermediate access to media services and amplify polarising content and disinformation.” Why on Earth would the EU then encourage further polarisation and disinformation by preventing content moderation?

This is not only a question of how such a carveout would benefit bogus media outlets. Some mainstream news sources with solid reputations and visibility can make mistakes, or are often the prime targets of those running disinformation campaigns. And quite successfully, as the recent example from the acclaimed investigations by Forbidden Stories has shown. In Hungary and Poland, state media that disseminate propaganda, in some cases even pro-Russian narratives, would be exempted from content moderation as well.

It might be counterintuitive, but the role of the media in disinformation and influence operations is huge. EU DisinfoLab sees it in virtually every single investigation we do.

This loophole in the EMFA will make it hard, if not impossible, for the Commission to enforce the DSA against the biggest platforms. Potentially we would have to wait for the Court of Justice to resolve the conflict between the two laws: the DSA mandating platforms to do content moderation and the EMFA legally preventing them from doing it. This would not be a good look for the EU legislature, and until the Court hands down a decision, what will platforms do? They will likely stop moderating anything that comes close to being a ‘media’ just to avoid difficulties and costs.

We really don’t need any media exemption. There is no evidence to suggest that media content over-moderation is a systemic issue, and the impact assessment by the Commission does not suggest that either. With the DSA, Europe has just adopted horizontal content moderation rules where media freedom and plurality are at the core. Surely we should rather give a chance for the DSA to work, instead of saying it already failed before it is even applicable.

A media exemption will not help media freedom and plurality; on the contrary, it will enable industrial-scale disinformation production, reduce the visibility of reputable media, and erode society’s trust in it even further. Last year, Maria Ressa and Dmitry Muratov, 2021 Nobel Peace Prize laureates and journalists, called on the EU to ensure that no media exemption be included in any tech or media legislation, as part of their 10-point plan to address our information crisis. It was supported by more than 100 civil society organisations.

MEPs and member states working on the EMFA must see the risks of disinformation and other harmful content that any carveout for media would create. The decision they are facing is clear: either flood Europe with harmful content or prioritise the safety of online users by strongly enforcing horizontal content moderation rules in the DSA.

Source: European Media Freedom Act: No to any media exemption

Google introduces PaLM 2 large language model

[…]

Building on this work, today we’re introducing PaLM 2, our next generation language model. PaLM 2 is a state-of-the-art language model with improved multilingual, reasoning and coding capabilities.

  • Multilinguality: PaLM 2 is more heavily trained on multilingual text, spanning more than 100 languages. This has significantly improved its ability to understand, generate and translate nuanced text — including idioms, poems and riddles — across a wide variety of languages, a hard problem to solve. PaLM 2 also passes advanced language proficiency exams at the “mastery” level.
  • Reasoning: PaLM 2’s wide-ranging dataset includes scientific papers and web pages that contain mathematical expressions. As a result, it demonstrates improved capabilities in logic, common sense reasoning, and mathematics.
  • Coding: PaLM 2 was pre-trained on a large quantity of publicly available source code datasets. This means that it excels at popular programming languages like Python and JavaScript, but can also generate specialized code in languages like Prolog, Fortran and Verilog.

A versatile family of models

Even as PaLM 2 is more capable, it’s also faster and more efficient than previous models — and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases.

[…]

PaLM 2 shows us the impact of highly capable models of various sizes and speeds — and that versatile AI models reap real benefits for everyone.

[…]

We’re already at work on Gemini — our next model created from the ground up to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations, like memory and planning.

[…]

Source: Google AI: What to know about the PaLM 2 large language model

YouTube begins warning: ‘Ad blockers are not allowed’

YouTube has begun showing a pop-up to some viewers warning them that “ad blockers are not allowed” on the video-sharing site.

The banner, which you can see below, appears if the Google subsidiary reckons you’re using some kind of content blocker that prevents videos from being interrupted by or book-ended with adverts.

According to YouTube, this is an experiment and only a small number of watchers will see the pop-up when browsing YouTube.com. The box tells users, “it looks like you may be using an ad blocker,” and reminds them that “ads allow YouTube to stay free for billions of users worldwide.”

It also urges you to “go ad-free with YouTube Premium, and creators can still get paid from your subscription.”

There are two options presented: a button to “allow YouTube ads,” and a button to sign up for YouTube Premium, an ad-free subscription that costs $11.99 a month at least here in the United States.

Those who have seen the pop-up say they can ignore those options, close the pop-up, and continue blocking ads as usual – though for how long, who’s to say? There is a link to click if you’re not using an ad blocker and want to report a false detection.

What the YouTube ad block warning looks like … Hat tip: Reddit

“One ad before each video was fine, but they got greedy and started playing multiple unskippable 30-second ads, that’s when I went for ad block,” as one viewer put it. “There is zero chance I am ever deactivating it or paying for Premium now, that ship has sailed.”

[…]

Source: YouTube begins warning: ‘Ad blockers are not allowed’ • The Register

Scientists discover microbes in the Alps and Arctic that can digest plastic at low temperatures

Finding, cultivating, and bioengineering organisms that can digest plastic not only aids in the removal of pollution, but is now also big business. Several microorganisms that can do this have already been found, but when the enzymes that make this possible are applied at an industrial scale, they typically only work at temperatures above 30°C.

The heating required means that industrial applications remain costly to date, and aren’t carbon-neutral. But there is a possible solution to this problem: finding specialist cold-adapted microbes whose enzymes work at lower temperatures.

Scientists from the Swiss Federal Institute WSL knew where to look for such microorganisms: at high altitudes in the Alps of their country, or in the polar regions. Their findings are published in Frontiers in Microbiology.

“Here we show that novel microbial taxa obtained from the ‘plastisphere’ of alpine and arctic soils were able to break down [biodegradable plastics] at 15°C,” said first author Dr. Joel Rüthi, currently a guest scientist at WSL. “These organisms could help to reduce the costs and environmental burden of an enzymatic recycling process for [plastics].”

[…]

None of the strains were able to digest PE, even after 126 days of incubation on these plastics. But 19 strains (56%), including 11 fungi and eight bacteria, were able to digest PUR at 15°C, while 14 fungi and three bacteria were able to digest the plastic mixtures of PBAT and PLA. Nuclear magnetic resonance (NMR) and a fluorescence-based assay confirmed that these strains were able to chop up the PBAT and PLA polymers into smaller molecules.

[…]

The best performers were two uncharacterized fungal species in the genera Neodevriesia and Lachnellula: these were able to digest all of the tested plastics except PE.

[…]

Source: Scientists discover microbes in the Alps and Arctic that can digest plastic at low temperatures

OpenAI shows that language models can explain neurons in language models, open sources the tools

[…]

One simple approach to interpretability research is to first understand what the individual components (neurons and attention heads) are doing. This has traditionally required humans to manually inspect neurons to figure out what features of the data they represent. This process doesn’t scale well: it’s hard to apply it to neural networks with tens or hundreds of billions of parameters. We propose an automated process that uses GPT-4 to produce and score natural language explanations of neuron behavior and apply it to neurons in another language model.

This work is part of the third pillar of our approach to alignment research: we want to automate the alignment research work itself. A promising aspect of this approach is that it scales with the pace of AI development. As future models become increasingly intelligent and helpful as assistants, we will find better explanations.

How it works

Our methodology consists of running 3 steps on every neuron.

[…]

Step 1: Generate explanation using GPT-4

Given a GPT-2 neuron, generate an explanation of its behavior by showing relevant text sequences and activations to GPT-4.

[…]

Step 2: Simulate using GPT-4

Simulate what a neuron that fired for the explanation would do, again using GPT-4

[…]

Step 3: Compare

Score the explanation based on how well the simulated activations match the real activations
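As a rough illustration of the compare step, here is a self-contained sketch that scores an explanation by correlating the simulated activations against the real ones. The correlation metric and the toy activation values are stand-ins chosen for illustration; OpenAI's actual scoring procedure differs in its details.

```python
from math import sqrt

def score_explanation(real, simulated):
    """Pearson correlation between real and simulated activations.
    Returns a value in [-1, 1]: 1.0 means the simulation tracks the
    neuron perfectly, 0 means the explanation has no predictive power."""
    n = len(real)
    mean_r = sum(real) / n
    mean_s = sum(simulated) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, simulated))
    spread_r = sqrt(sum((r - mean_r) ** 2 for r in real))
    spread_s = sqrt(sum((s - mean_s) ** 2 for s in simulated))
    if spread_r == 0 or spread_s == 0:
        return 0.0  # a flat simulation carries no signal
    return cov / (spread_r * spread_s)

# Real activations of a neuron on a few tokens, and the activations a
# simulator (GPT-4 conditioned on the explanation) predicted for them:
real = [0.0, 0.1, 0.9, 0.0, 0.8]
good_sim = [0.1, 0.0, 1.0, 0.0, 0.7]   # tracks the neuron closely
flat_sim = [0.5, 0.5, 0.5, 0.5, 0.5]   # explanation predicts nothing

print(score_explanation(real, good_sim))  # close to 1
print(score_explanation(real, flat_sim))  # 0.0
```

A good explanation yields a score near 1; a vacuous one scores near 0, which matches how the post reads its 0.8 threshold for "well-explained" neurons.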

[…]

What we found

Using our scoring methodology, we can start to measure how well our techniques work for different parts of the network and try to improve the technique for parts that are currently poorly explained. For example, our technique works poorly for larger models, possibly because later layers are harder to explain.

[Chart: explanation score by size of the model being interpreted; scores decline from about 0.12 toward 0.02 as the parameter count grows from 1e+5 to 1e+9.]

Although the vast majority of our explanations score poorly, we believe we can now use ML techniques to further improve our ability to produce explanations. For example, we found we were able to improve scores by:

  • Iterating on explanations. We can increase scores by asking GPT-4 to come up with possible counterexamples, then revising explanations in light of their activations.
  • Using larger models to give explanations. The average score goes up as the explainer model’s capabilities increase. However, even GPT-4 gives worse explanations than humans, suggesting room for improvement.
  • Changing the architecture of the explained model. Training models with different activation functions improved explanation scores.

We are open-sourcing our datasets and visualization tools for GPT-4-written explanations of all 307,200 neurons in GPT-2, as well as code for explanation and scoring using publicly available models on the OpenAI API. We hope the research community will develop new techniques for generating higher-scoring explanations and better tools for exploring GPT-2 using explanations.

We found over 1,000 neurons with explanations that scored at least 0.8, meaning that according to GPT-4 they account for most of the neuron’s top-activating behavior. Most of these well-explained neurons are not very interesting. However, we also found many interesting neurons that GPT-4 didn’t understand. We hope as explanations improve we may be able to rapidly uncover interesting qualitative understanding of model computations.

Source: Language models can explain neurons in language models

Coqui.ai Text to Speech library – create your own voice

🐸TTS is a library for advanced Text-to-Speech generation. It’s built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality. 🐸TTS comes with pretrained models and tools for measuring dataset quality, and is already used in 20+ languages for products and research projects.

Github page: https://github.com/coqui-ai/TTS
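A minimal usage sketch, assuming `pip install TTS` works in your environment and that the LJSpeech model name below is still published on Coqui's model hub (check the output of `tts --list_models` for the current list):

```shell
# Install the library, then synthesize a WAV file with a pretrained model.
# The model name is an assumption; list the available models first.
pip install TTS
tts --list_models
tts --text "Text to speech with Coqui" \
    --model_name "tts_models/en/ljspeech/tacotron2-DDC" \
    --out_path speech.wav
```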

Ed Sheeran, Once Again, Demonstrates How Modern Copyright Is Destroying, Rather Than Helping Musicians

To hear the recording industry tell the story, copyright is the only thing protecting musicians from poverty and despair. Of course, that’s always been a myth. Copyright was designed to benefit the middlemen and gatekeepers, such as the record labels, over the artists themselves. That’s why the labels have a long history of never paying artists.

But over the last few years, Ed Sheeran has been highlighting the ways in which (beyond the “who gets paid” aspect of all of this) modern copyright is stifling rather than incentivizing music creation — directly in contrast to what we’re told it’s supposed to be doing.

We’ve talked about Sheeran before, as he’s been sued repeatedly by people claiming that his songs sound too much like other songs. Sheeran has always taken a much more open approach to copyright and music, noting that kids pirating his music is how he became famous in the first place. He’s also stood up for kids who had accounts shut down via copyright claims for playing his music.

But the lawsuits have been where he’s really highlighted the absurdity of modern copyright law. After winning one of the lawsuits a year ago, he put out a heartfelt statement on how ridiculous the whole thing was. A key part:

There’s only so many notes and very few chords used in pop music. Coincidence is bound to happen if 60,000 songs are being released every day on Spotify—that’s 22 million songs a year—and there’s only 12 notes that are available.
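His arithmetic checks out as a back-of-envelope matter. A quick sketch (the choice of seven diatonic triads per key is our own illustrative assumption, not Sheeran's figure):

```python
# 60,000 new songs a day is roughly the "22 million songs a year" cited.
songs_per_year = 60_000 * 365
print(songs_per_year)  # 21900000

# The pool of short progressions is tiny by comparison. Taking, for
# illustration, the 7 diatonic triads of a single major key, the number
# of ordered four-chord progressions is:
four_chord_progressions = 7 ** 4
print(four_chord_progressions)  # 2401
```

With millions of songs drawing from a few thousand basic progressions, coincidental overlap is a statistical certainty, which is exactly Sheeran's point.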

In the aftermath of this, Sheeran has said that he’s now filming all of his recent songwriting sessions, just in case he needs to provide evidence that he and his songwriting partners came up with a song on their own, which is depressing in its own right.

[…]

With this latest lawsuit, it wasn’t actually a songwriter suing. It was a private equity firm that had purchased the rights from one of the songwriters (not Marvin Gaye) of Marvin Gaye’s hit song “Let’s Get It On.”

The claim over Thinking Out Loud was originally lodged in 2018, not by Gaye’s family but by investment banker David Pullman and a company called Structured Asset Sales, which has acquired a portion of the estate of Let’s Get It On co-writer Ed Townsend.

Thankfully, Sheeran won the case as the jury sided with him over Structured Asset Sales. Sheeran, once again, used the attention to highlight just how broken copyright is if these lawsuits are what’s coming out of it:

“I’m obviously very happy with the outcome of the case, and it looks like I’m not having to retire from my day job after all. But at the same time I’m unbelievably frustrated that baseless claims like this are able to go to court.

“We’ve spent the last eight years talking about two songs with dramatically different lyrics, melodies, and four chords which are also different, and used by songwriters every day all over the world. These chords are common building blocks used long before Let’s Get it On was written, and will be used to make music long after we’re all gone.

“They are in a songwriters’ alphabet, our toolkit, and should be there for all of us to use. No one owns them or the way that they are played, in the same way that no one owns the color blue.”

[…]

Source: Ed Sheeran, Once Again, Demonstrates How Modern Copyright Is Destroying, Rather Than Helping Musicians | Techdirt

Microsoft Tests Sticking Ads in Windows 11 Settings Menu as Well as Start Menu

[…]

In addition to ads in the Start menu, the latest test build for Windows 11 includes notices for a Microsoft 365 trial and more in the Settings menu.

On Friday, Windows beta user and routine leaker Albacore shared several screenshots of the latest Insider Preview build 23451. These shots come from the ultra-early Canary test build, and show a new “Home” tab in Settings that includes a notice to “Try Microsoft 365.” This appears to link to a free trial of the company’s office apps suite. There’s also a notice for OneDrive, and another asking users to finish setting up a Microsoft account, advertising that they can use the 365 apps and cloud storage on the desktop. Another notice in the Accounts tab also blasts users with a request to sign in to their Microsoft account.

These ads are very similar to other preview builds with so-called “badging” that shows up when users click on the Start menu. In that menu, the ads are more subtle and ask users to “Sign in to your Microsoft account” or advertise to users that they can “Use Microsoft 365 for free,” of course ignoring that users have to input their credit card information to access their free month of office apps.

[…]

Source: Microsoft Tests Sticking Ads in Windows 11 Settings Menu

Mercedes Locks Better EV Engine Performance, Which Your Car Has and You Paid For, Behind a Subscription

Last year BMW took ample heat for its plans to turn heated seats into a costly $18 per month subscription in numerous countries. As we noted at the time, BMW is already including the hardware in new cars and adjusting the sale price accordingly. So it’s effectively charging users a new, recurring fee to enable technology that already exists in the car and consumers already paid for.

The move portends a rather idiotic and expensive future for consumers that’s arriving faster than you’d think. Consumers unsurprisingly aren’t too keen on paying an added subscription for tech that already exists in the car and was already factored into the retail price, but the lure of consistent additional revenue they can nudge ever skyward pleases automakers and Wall Street alike.

Mercedes had already been toying with this idea in its traditional gas vehicles, but now says it’s considering making better EV engine performance an added subscription surcharge:

Mercedes-Benz electric vehicle owners in North America who want a little more power and speed can now buy 60 horsepower for just $60 a month or, on other models, 80 horsepower for $90 a month.

They won’t have to visit a Mercedes dealer to get the upgrade either, or even leave their own driveway. The added power, which will provide a nearly one second decrease in zero-to-60 acceleration, will be available through an over-the-air software patch.

Again, this is simply creating artificial restrictions and then charging consumers extra to bypass them. But this being America, there will indisputably be no shortage of dumb people with disposable income willing to burn money as part of a misguided craving for status.

If you don’t want to pay monthly, Mercedes will also let you pay a one time flat fee (usually several thousand dollars) to remove the artificial restrictions they’ve imposed on your engine. That’s, of course, creating additional upward pricing funnel efforts on top of the industry’s existing efforts to upsell you on a rotating crop of trims, tiers, and options you probably didn’t want.
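For a sense of the trade-off, a hypothetical break-even sketch: the $60/month figure is from the article, but the one-time price below is an assumed illustrative number, since the article says only "several thousand dollars".

```python
# Monthly rate for the 60 hp tier, per the article.
monthly_fee = 60
# ASSUMED one-time unlock price, for illustration only; the article
# gives no exact figure beyond "several thousand dollars".
one_time_fee = 3_600

# Months of subscribing after which the flat fee would have been cheaper.
break_even_months = one_time_fee / monthly_fee
print(break_even_months)  # 60.0, i.e. five years of $60/month
```

Under these assumed numbers, anyone planning to keep the car more than about five years comes out ahead paying the flat fee; either way, Mercedes gets paid twice for the same hardware.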

It’s not really clear that regulators have any interest in cracking down on charging dumb people extra for something they already owned and paid for. After all, ripping off gullible consumers is effectively now considered little more than creative marketing by a notable segment of government “leaders” (see: regulatory apathy over misleading hidden fees in everything from hotels to cable TV).

[…]

Source: Mercedes Locks Better EV Engine Performance Behind Annoying Subscription Paywalls | Techdirt

So you pay for something which is in YOUR car but you can’t use it until you pay… more!

Yet another problem with recycling: It spews microplastics

[…]

an alarming new study has found that even when plastic makes it to a recycling center, it can still end up splintering into smaller bits that contaminate the air and water. This pilot study focused on a single new facility where plastics are sorted, shredded, and melted down into pellets. Along the way, the plastic is washed several times, sloughing off microplastic particles—fragments smaller than 5 millimeters—into the plant’s wastewater.

Because there were multiple washes, the researchers could sample the water at four separate points along the production line. (They are not disclosing the identity of the facility’s operator, who cooperated with their project.) This plant was actually in the process of installing filters that could snag particles larger than 50 microns (a micron is a millionth of a meter), so the team was able to calculate the microplastic concentrations in raw versus filtered discharge water—basically a before-and-after snapshot of how effective filtration is.

Their microplastics tally was astronomical. Even with filtering, they calculate that the total discharge from the different washes could produce up to 75 billion particles per cubic meter of wastewater. Depending on the recycling facility, that liquid would ultimately get flushed into city water systems or the environment. In other words, recyclers trying to solve the plastics crisis may in fact be accidentally exacerbating the microplastics crisis, which is coating every corner of the environment with synthetic particles.

[…]

The good news here is that filtration makes a difference: Without it, the researchers calculated that this single recycling facility could emit up to 6.5 million pounds of microplastic per year. Filtration got it down to an estimated 3 million pounds. “So it definitely was making a big impact when they installed the filtration,” says Brown. “We found particularly high removal efficiency of particles over 40 microns.”
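Those two estimates imply an overall removal efficiency that can be checked directly from the article's own figures:

```python
# Estimated annual microplastic discharge from the single facility,
# per the article: ~6.5 million lb unfiltered vs ~3 million lb filtered.
unfiltered_lbs = 6_500_000
filtered_lbs = 3_000_000

removal_fraction = (unfiltered_lbs - filtered_lbs) / unfiltered_lbs
print(round(removal_fraction * 100, 1))  # 53.8, so filters catch about half the mass
```

That roughly 54% overall figure is consistent with the researchers' observation that removal was most efficient for the larger (40+ micron) particles, with the finest particles still slipping through.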

[…]

Depending on the recycling facility, that wastewater might next flow to a sewer system and eventually to a treatment plant that is not equipped to filter out such small particles before pumping the water into the environment. But, says Enck, “some of these facilities might be discharging directly into groundwater. They’re not always connected to the public sewer system.” That means the plastics could end up in the water people use for drinking or irrigating crops.

The full extent of the problem isn’t yet clear, as this pilot study observed just one facility. But because it was brand-new, it was probably a best-case scenario, says Steve Allen, a microplastics researcher at the Ocean Frontiers Institute and coauthor of the new paper. “It is a state-of-the-art plant, so it doesn’t get any better,” he says. “If this is this bad, what are the others like?”

[…]

Still, researchers like Brown don’t think that we should abandon recycling. This new research shows that while filters can’t stop all the microplastics from leaving a recycling facility, they at least help substantially. “I really don’t want it to suggest to people that we shouldn’t recycle, and to give it a completely negative reputation,” she says. “What it really highlights is that we just really need to consider the impacts of the solutions.”

Scientists and anti-pollution groups agree that the ultimate solution isn’t relying on recycling or trying to pull trash out of the ocean, but massively cutting plastic production. “I just think this illustrates that plastics recycling in its traditional form has some pretty serious problems,” says Enck. “This is yet another reason to do everything humanly possible to avoid purchasing plastics.”

Source: Yet another problem with recycling: It spews microplastics | Ars Technica

Finnish newspaper hides Ukraine news reports for Russians in online game

A Finnish newspaper is circumventing Russian media restrictions by hiding news reports about the war in Ukraine in an online game popular among Russian gamers.

“While Helsingin Sanomat and other foreign independent media are blocked in Russia, online games have not been banned so far,” said Antero Mukka, the editor-in-chief of Helsingin Sanomat.

The newspaper was bypassing Russia’s censorship through the first-person shooter game Counter-Strike, where gamers battle against each other as terrorists and counter-terrorists in timed matches.

While the majority of matches are played on about a dozen official levels or maps released by the publisher Valve, players can also create custom maps that anyone can download and use.

The newspaper’s initiative was unveiled on World Press Freedom Day on Wednesday.

“To underline press freedom, [in the game] we have now built a Slavic city, called Voyna, meaning war in Russian,” Mukka said.

In the basement of one of the apartment buildings that make up the Soviet-inspired cityscape, Helsingin Sanomat hid a room where players can find Russian-language reporting by the newspaper’s war correspondents in Ukraine.

“In the room, you will find our documentation of what the reality of the war in Ukraine is,” Mukka said.

The walls of the digital room, lit up by red lights, are plastered with news articles and pictures reporting on events such as the massacres in the Ukrainian towns of Bucha and Irpin.

On one of the walls, players can find a map of Ukraine that details reported attacks on the civilian population, while a Russian-language recording reading Helsingin Sanomat articles aloud plays in the background.

This was “information that is not available from Russian state propaganda sources”, Mukka said.

Since its release on Monday, the map has been downloaded more than 2,000 times, although the paper cannot currently track downloads geographically.

“This definitely underlines the fact that every attempt to obstruct the flow of information and blind the eyes of the public is doomed to fail in today’s world,” Mukka said.

He said an estimated 4 million Russians played the game. “These people may often be in the mobilisation or drafting age.”

“I think Russians also have the right to know independent and fact-based information, so that they can also make their own life decisions,” he added.

Source: Finnish newspaper hides Ukraine news reports for Russians in online game | Censorship | The Guardian

Microsoft is forcing Outlook and Teams to open links in Edge, ignore OS default browser settings

Microsoft Edge is a good browser, but for some reason Microsoft keeps trying to shove it down everyone’s throat, making it more difficult to use rivals like Chrome or Firefox. Microsoft has now started notifying IT admins that it will force Outlook and Teams to ignore the default web browser on Windows and open links in Microsoft Edge instead.

Reddit users have posted messages from the Microsoft 365 admin center that reveal how Microsoft is going to roll out this change. “Web links from Azure Active Directory (AAD) accounts and Microsoft (MSA) accounts in the Outlook for Windows app will open in Microsoft Edge in a single view showing the opened link side-by-side with the email it came from,” reads a message to IT admins from Microsoft.

While this won’t affect the default browser setting in Windows, it’s yet another part of Microsoft 365 and Windows that totally ignores your default browser choice for links. Microsoft already does this with the Widgets system in Windows 11 and even the search experience, where you’ll be forced into Edge if you click a link even if you have another browser set as default.

IT admins aren’t happy, with many complaining in various threads on Reddit, as spotted by Neowin. If Outlook wasn’t enough, Microsoft says “a similar experience will arrive in Teams” soon, with web links from chats opening in Microsoft Edge side-by-side with Teams chats.

[…]

The notifications to IT admins come just weeks after Microsoft promised significant changes to the way Windows manages which apps open certain files or links by default. At the time Microsoft said it believed “we have a responsibility to ensure user choices are respected” and that it’s “important that we lead by example with our own first party Microsoft products.” Forcing people into Microsoft Edge and ignoring default browsers is anything but respecting user choice, and it’s gross that Microsoft continues to abuse this.

Microsoft tested a similar change to the default Windows 10 Mail app in 2018, in an attempt to force people into Edge for email links. That never came to pass, thanks to a backlash from Windows 10 testers. A similar change in 2020 saw Microsoft try to force Chrome’s default search engine to Bing using the Office 365 installer, and IT admins weren’t happy then either.

[…]

Source: Microsoft is forcing Outlook and Teams to open links in Edge, and IT admins are angry – The Verge

Researchers See Through a Mouse’s Eyes by Decoding Brain Signals

[…] a team of researchers from the École Polytechnique Fédérale de Lausanne (EPFL) successfully developed a machine-learning algorithm that can decode a mouse’s brain signals and reproduce images of what it’s seeing.

[…]

The mice were shown a black-and-white movie clip from the 1960s of a man running to a car and then opening its trunk. While the mice were watching the clip, scientists measured and recorded their brain activity using two approaches: electrode probes inserted into the visual cortex region of their brains, as well as optical probes for mice that had been genetically engineered so that the neurons in their brains glow green when firing and transmitting information. That data was then used to train a new machine learning algorithm called CEBRA.

When the algorithm was then applied to the captured brain signals of a new mouse watching the black-and-white movie clip for the first time, CEBRA was able to correctly identify the specific frames the mouse was seeing as it watched. Because CEBRA was also trained on that clip, it was able to generate matching frames that were nearly perfect, albeit with the occasional telltale distortions of AI-generated imagery.
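
CEBRA’s actual method is a contrastive embedding model; purely to illustrate the frame-identification step described above, here is a toy nearest-neighbor decoder over synthetic data. Everything in it (the fake neural responses, the distance metric, the noise level) is our own illustrative assumption, not the paper’s approach:

```python
import random

random.seed(0)

# Synthetic stand-in data: 100 movie frames, each evoking a 20-dim neural response.
N_FRAMES, N_NEURONS = 100, 20
frame_templates = [
    [random.gauss(0.0, 1.0) for _ in range(N_NEURONS)] for _ in range(N_FRAMES)
]

def decode_frame(activity):
    """Identify the frame whose response template is closest (squared Euclidean)."""
    def dist(template):
        return sum((t - a) ** 2 for t, a in zip(template, activity))
    return min(range(N_FRAMES), key=lambda i: dist(frame_templates[i]))

# A "new mouse" watching frame 42: a noisy version of that frame's template.
observed = [t + random.gauss(0.0, 0.1) for t in frame_templates[42]]
```

With modest noise, the decoder recovers the frame index; the real work in CEBRA is learning an embedding space in which this kind of matching becomes possible across animals and recording methods.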

[…]

This research involved a very specific (and short) piece of footage that the machine learning algorithm was also familiar with. In its current form, CEBRA only takes into account the activity of about 1% of the neurons in a mouse’s brain, so there’s definitely room for its accuracy and capabilities to improve. The research also isn’t just about decoding what a brain sees: a study published in the journal Nature shows that CEBRA can also be used to “predict the movements of the arm in primates” and “reconstruct the positions of rats as they freely run around an arena.” It’s a potentially far more accurate way to peer into the brain and understand how neural activity correlates with what is being processed.

Source: Researchers See Through a Mouse’s Eyes by Decoding Brain Signals

Liquid Neural Networks use Neuron design to compute effectively using smaller models

[Ramin Hasani] and colleague [Mathias Lechner] have been working with a new type of Artificial Neural Network called Liquid Neural Networks, and presented some of the exciting results at a recent TEDxMIT.

Liquid neural networks are inspired by biological neurons and implement algorithms that remain adaptable even after training. [Hasani] demonstrates a machine vision system that steers a car to perform lane keeping using a liquid neural network. The system performs quite well using only 19 neurons, far fewer than in the large models we’ve come to expect. Furthermore, an attention map helps visualize that the system attends to particular aspects of the visual field, quite similar to a human driver’s behavior.

[Mathias Lechner] and [Ramin Hasani]

The typical scaling law of neural networks suggests that accuracy improves with larger models, which is to say, more neurons. Liquid neural networks may break this law, showing that scale is not the whole story. A smaller model can be computed more efficiently. A compact model can also improve accountability, since decision activity is more readily located within the network. Surprisingly, though, liquid neural networks can also improve generalization, robustness, and fairness.

A liquid neural network can implement synaptic weights using nonlinear probabilities instead of simple scalar values. The synaptic connections and response times can adapt based on sensory inputs to more flexibly react to perturbations in the natural environment.
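
As a rough illustration of that idea, here is a toy Euler-integration sketch of a single liquid-time-constant-style neuron, in which an input-dependent gate changes the effective time constant. The sigmoid gate and all constants are our own illustrative choices, not the published model:

```python
import math

def ltc_neuron_step(x, inp, dt=0.01, tau=1.0, A=1.0, w=2.0, b=-1.0):
    """One Euler step of a toy liquid-time-constant-style neuron.

    The gate f depends on the input, so the effective time constant
    1 / (1/tau + f) changes with what the neuron is currently seeing.
    """
    f = 1.0 / (1.0 + math.exp(-(w * inp + b)))   # input-dependent gate
    dx = -(1.0 / tau + f) * x + f * A            # LTC-style dynamics
    return x + dt * dx

def run(inputs, x0=0.0):
    """Integrate the neuron over a sequence of scalar inputs."""
    x, trajectory = x0, []
    for inp in inputs:
        x = ltc_neuron_step(x, inp)
        trajectory.append(x)
    return trajectory
```

Because the gate sits inside the decay term, the state stays bounded and relaxes toward an input-dependent equilibrium, which is one way to picture how response times adapt to sensory input.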

We should probably expect the operational gap between biological neural networks and artificial neural networks to continue to close and blur. We’ve previously covered wetware examples of building neural networks with actual neurons, as well as ever-advancing brain-computer interfaces.

Source: Liquid Neural Networks Do More With Less | Hackaday

You can find the paper here: Drones navigate unseen environments with liquid neural networks

How AI Bots Code: Comparing Bing, Claude+, Co-Pilot, GPT-4 and Bard

[…]

In this article, we will compare five of the most advanced AI bots: GPT-4, Bing, Claude+, Bard, and GitHub Copilot. We will examine how they work, their strengths and weaknesses, and how they compare to each other.

Testing the AI Bots for Coding

Before we dive into comparing these five AI bots, it’s essential to understand what an AI bot for coding is and how it works. An AI bot for coding is an artificial intelligence program that can automatically generate code for a specific task. These bots use natural language processing and machine learning algorithms to analyze human-written code and generate new code based on that analysis.

To start off, we are going to test the AIs on a hard Leetcode question; after all, we want them to be able to solve complex coding problems. We also wanted to test them on a less well-known question. For our experiment, we will be testing Leetcode 214: Shortest Palindrome.
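
For reference, here is a standard KMP-based Python solution to that problem, written by us for comparison and independent of any of the bots’ outputs. The trick is that the longest palindromic prefix of `s` equals the longest prefix of `s` that is also a suffix of its reverse, which the KMP prefix function computes directly:

```python
def shortest_palindrome(s: str) -> str:
    """Leetcode 214: prepend the fewest characters to make s a palindrome."""
    if not s:
        return s
    # '#' is a separator that cannot occur in s's overlap region.
    t = s + "#" + s[::-1]
    pi = [0] * len(t)                 # KMP prefix function of t
    for i in range(1, len(t)):
        k = pi[i - 1]
        while k and t[i] != t[k]:
            k = pi[k - 1]
        if t[i] == t[k]:
            k += 1
        pi[i] = k
    # pi[-1] is the length of the longest palindromic prefix of s.
    return s[pi[-1]:][::-1] + s
```

For example, `"abcd"` becomes `"dcbabcd"`. This runs in O(n) time, which is what Leetcode’s harder test cases effectively require.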

[…]

GPT-4 is highly versatile in generating code for various programming languages and applications. The caveats are that it takes much longer to get a response, and API usage is a lot more expensive, so costs could ramp up quickly. Overall it got the answer right and passed the test.

[…]

[Bing] The submission passed all the tests. It beat 47% of submissions on runtime and 37% on memory. This code looks a lot simpler than what GPT-4 generated. It beat GPT-4 on memory and used less code! Bing seems to have the most efficient code so far; however, it gave a very short explanation of how it solved the problem. Nonetheless, the best so far.

[…]

[Claude+] The code does not pass the submission test; only 1 of 121 test cases passed. Ouch! This one seemed promising, but it looks like Claude is not that well suited to programming.

[…]

[Bard] To start off, I had to manually insert the “self” argument in the function, since Bard didn’t include it. Bard’s code did not pass the submission test, passing only 2 of 121 test cases. An unfortunate result, but it’s safe to say that, for now, Bard isn’t much of a coding expert.

[…]

[GitHub Copilot] This passes all the tests. It scored better than 30% of submissions on runtime and 37% on memory.

It’s fun: you can see the coding examples (with and without comments) that each AI produced at the source link.

Source: How AI Bots Code: Comparing Bing, Claude+, Co-Pilot, GPT-4 and Bard | HackerNoon

OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use

Anyone can use ChatGPT for free, but if you want to use GPT-4, the latest language model, you have to either pay for ChatGPT Plus, pay for access to OpenAI’s API, or find another site that has incorporated GPT-4 into its own free chatbot. There are sites that use OpenAI’s models, such as Forefront and You.com, but what if you want to make your own bot and don’t want to pay for the API?

A GitHub project called GPT4free gives you free access to the GPT-4 and GPT-3.5 models by funneling queries through sites like You.com, Quora, and CoCalc, and giving you back the answers. The project is GitHub’s most popular new repo, gaining 14,000 stars this week.

Now, according to Xtekky, the European computer science student who runs the repo, OpenAI has sent a letter demanding that he take the whole thing down within five days or face a lawsuit.

I interviewed Xtekky via Telegram, and he said he doesn’t think OpenAI should be targeting him since he isn’t connecting directly to the company’s API, but is instead getting data from other sites that are paying for their own API licenses. If the owners of those sites have a problem with his scripts querying them, they should approach him directly, he posited.

[…]

On the backend, GPT4Free visits the various API URLs that sites like You.com (an AI-powered search engine that employs OpenAI’s GPT-3.5 model for its answers) use for their own queries. For example, the main GPT4Free script hits the URL https://you.com/api/streamingSearch, feeds it various parameters, and then takes the JSON it returns and formats it. The GPT4Free repo also has scripts that grab data from other sites such as Quora, Forefront, and TheB. Any enterprising developer could use these simple scripts to make their own bot.
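
The pattern the article describes can be sketched roughly as follows. The endpoint URL is the one named above, but the query parameter names and the response shape here are our own assumptions for illustration (the real endpoint expects additional headers and returns a streaming payload), so treat this as a shape sketch, not a working client:

```python
import json
from urllib.parse import urlencode

# Endpoint named in the article; the parameter names below are
# illustrative assumptions, not the site's documented interface.
STREAMING_SEARCH_URL = "https://you.com/api/streamingSearch"

def build_request_url(question: str) -> str:
    """Construct the GET URL a proxy script of this kind might hit."""
    params = {"q": question, "domain": "youchat"}  # hypothetical parameters
    return f"{STREAMING_SEARCH_URL}?{urlencode(params)}"

def extract_answer(raw_json: str) -> str:
    """Pull the answer text out of a (hypothetical) JSON response body."""
    payload = json.loads(raw_json)
    return payload.get("answer", "")
```

The whole trick is that simple: build the URL the site’s own frontend would build, then parse what comes back, which is why Xtekky argues any browser user could do the same thing manually with open tabs.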

“One could achieve the same [thing by] just opening tabs of the sites. I can open tabs of Phind, You, etc. on my browser and spam requests,” Xtekky said. “My repo just does it in a simpler way.”

All of the sites GPT4Free draws from pay OpenAI fees to use its large language models. So when you use the scripts, those sites end up footing the bill for your queries without you ever visiting them. If those sites rely on ad revenue to offset their API costs, they are losing money because of these queries.

Xtekky said that he is more than happy to take down scripts that use individual sites’ APIs upon request from the owners of those sites. He said that he has already taken down scripts that use phind.com, ora.sh and writesonic.com.

Perhaps more importantly, Xtekky noted, any of these sites could block external uses of their internal APIs with common security measures. One of many methods that sites like You.com could use is to block API traffic from any IPs that are not their own.
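
A minimal sketch of the IP-based restriction Xtekky suggests, using Python’s standard-library `ipaddress` module. The allowed network here is a placeholder documentation range, and a real deployment would more likely combine this with authentication tokens or signed requests rather than rely on raw IP checks alone:

```python
import ipaddress

# Hypothetical allowlist: the site's own frontend servers.
# 203.0.113.0/24 is a reserved documentation range used as a stand-in.
ALLOWED_NETS = [ipaddress.ip_network("203.0.113.0/24")]

def is_allowed(client_ip: str) -> bool:
    """Reject internal-API traffic from IPs outside the site's own ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETS)
```

A gateway would call `is_allowed()` on each request to the internal API and return a 403 otherwise; the point is simply that such a check is a few lines of code, which is why leaving these endpoints open is a choice rather than a limitation.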

Xtekky said that he has advised all the sites that wrote to him that they should secure their APIs, but none of them has done so. So, even if he takes the scripts down from his repo, any other developer could do the same thing.

[…]

Xtekky initially told me that he hadn’t decided whether to take the repo down or not. However, several hours after this story was first published, we chatted again and he told me that he plans to keep the repo up and to tell OpenAI that, if they want it taken down, they should file a formal request with GitHub instead of with him.

“I believe they contacted me before to pressurize me into deleting the repo myself,” he said. “But the right way should be an actual official DMCA, through GitHub.”

Even if the original repo is taken down, there’s a good chance that the code, and this method of accessing GPT-4 and GPT-3.5, will be published elsewhere by members of the community. Even if GPT4Free had never existed, anyone could find ways to use these sites’ APIs as long as they remain unsecured.

“Users are sharing and hosting this project everywhere,” he said. “Deletion of my repo will be insignificant.”

[…]

Source: OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use | Tom’s Hardware