The Linkielist

Linking ideas with the world

Location Data Firm Got GPS Data From Apps Even When People Opted Out

Huq, an established data vendor that obtains granular location information from ordinary apps installed on people’s phones and then sells that data, has been receiving GPS coordinates even when people explicitly opted out of such collection inside individual Android apps, researchers and Motherboard have found.

The news highlights a stark problem for smartphone users: that they can’t actually be sure if some apps are respecting their explicit preferences around data sharing. The data transfer also presents an issue for the location data companies themselves. Many claim to be collecting data with consent, and by extension, in line with privacy regulations. But Huq was seemingly not aware of the issue when contacted by Motherboard for comment, showing that location data firms harvesting and selling this data may not even know whether they are actually getting it with consent or not.

“This shows an urgent need for regulatory action,” Joel Reardon, assistant professor at the University of Calgary and the forensics lead and co-founder of AppCensus, a company that analyzes apps, and who first flagged some of the issues around Huq to Motherboard, said in an email. “I feel that there’s plenty wrong with the idea that—as long as you say it in your privacy policy—then it’s fine to do things like track millions of people’s every moment and sell it to private companies to do what they want with it. But how do we even start fixing problems like this when it’s going to happen regardless of whether you agree, regardless of any consent whatsoever.”

[…]

Huq does not publicly say which apps it has relationships with. Earlier this year Motherboard started to investigate Huq by compiling a list of apps that contained code related to the company. Some of the apps have been downloaded millions or tens of millions of times, including “SPEEDCHECK,” an internet speed testing app; “Simple weather & clock widget,” a basic weather app; and “Qibla Compass,” a Muslim prayer app.

Independently, Reardon and AppCensus also examined Huq and later shared some of their findings with Motherboard. Reardon said in an email that he downloaded one app called “Network Signal Info” and found that it still sent location and other data to Huq after he opted out of the app sharing data with third parties.

[…]

Source: Location Data Firm Got GPS Data From Apps Even When People Opted Out

5-Day Brain Stimulation Treatment Highly Effective Against Depression, Stanford Researchers Find

Stanford researchers think they’ve devised an effective and quick-acting way to treat difficult cases of depression, by improving on an already approved form of brain stimulation. In a new trial published this week, the researchers found that almost 80% of patients improved after going through treatment—a far higher rate than those who were given a sham placebo.

Brain stimulation has emerged as a promising avenue for depression, particularly depression that hasn’t responded to other treatments. The basic concept behind it is to use electrical impulses to balance out the erratic brain activity associated with neurological or psychiatric disorders. There are different forms of stimulation, which vary in intensity and how they interact with the body. Some require permanent implants in the brain, while others can be used noninvasively, like repetitive transcranial magnetic stimulation (rTMS). As the name suggests, rTMS relies on magnetic fields that are temporarily applied to the head.

[…]

The Stanford neuromodulation therapy (SNT) relies on higher-dose magnetic pulses delivered on a quicker, five-day schedule, meant to mimic about seven months of standard rTMS treatment. The treatment is also personalized to each patient, with MRI scans used beforehand to pick out the best possible locations in the brain to deliver these pulses.

[…]

Last year, Williams and his team published a small study of 21 patients who were given SNT, showing that 90% of people severely affected by their depression experienced remission—in other words, that they no longer met the criteria for an acute depressive episode. Moreover, people’s feelings of suicidal ideation went away as well. The study was open label, though, meaning that patients and doctors knew what treatment was being given. Confirming that any drug or treatment actually works requires more rigorous tests, such as a double-blinded and placebo-controlled experiment. And that’s what the team has done now, publishing the results of their new trial in the American Journal of Psychiatry.

[…]

This time, about 78% of patients given genuine SNT experienced remission, based on standard diagnostic tests, compared to about 13% of the sham group. There were no serious side effects, with the most common being a short-lasting headache. And when participants were asked to guess which treatment they took, neither group did better than chance, indicating that the blinding worked.

[…]

Source: 5-Day Brain Stimulation Treatment Highly Effective Against Depression, Stanford Researchers Find

Scientists discover new phase of water, known as “superionic ice,” inside planets

Scientists have discovered a new phase of water — adding to liquid, solid and gas — known as “superionic ice.” The “strange black” ice, as scientists called it, is normally created at the core of planets like Neptune and Uranus.

In a study published in Nature Physics, a team of scientists co-led by Vitali Prakapenka, a University of Chicago research professor, detailed the extreme conditions necessary to produce this kind of ice. It had only been glimpsed once before, when scientists sent a massive shockwave through a droplet of water, creating superionic ice that only existed for an instant.

In this experiment, the research team took a different approach. They pressed water between two diamonds, the hardest material on Earth, to reproduce the intense pressure that exists at the core of planets. Then they heated the compressed water by shooting a laser through the diamonds and probed it with the Advanced Photon Source’s high-brightness X-ray beams, according to the study.

“Imagine a cube, a lattice with oxygen atoms at the corners connected by hydrogen. When it transforms into this new superionic phase, the lattice expands, allowing the hydrogen atoms to migrate around while the oxygen atoms remain steady in their positions,” Prakapenka said in a press release. “It’s kind of like a solid oxygen lattice sitting in an ocean of floating hydrogen atoms.”

Using an X-ray to look at the results, the team found the ice became less dense and was described as black in color because it interacted differently with light.

“It’s a new state of matter, so it basically acts as a new material, and it may be different from what we thought,” Prakapenka said.

What surprised the scientists most was that the superionic ice formed at a much lower pressure than they had originally predicted. They had thought it would not be created until the water was compressed to over 50 gigapascals of pressure — the same amount of pressure inside rocket fuel as it combusts for lift-off — but it only took 20 gigapascals of pressure.

[…]

Superionic ice doesn’t exist only inside far-away planets — it’s also inside Earth, and it plays a role in maintaining our planet’s magnetic fields. Earth’s intense magnetism protects the planet’s surface from dangerous radiation and cosmic rays that come from outer space.

[…]

Source: Scientists discover new phase of water, known as “superionic ice,” inside planets – CBS News

What Else Do the Leaked ‘Facebook Papers’ Show? Angry face emojis have 5x the weight of a like thumb emoji… and more other stuff

The documents leaked to U.S. regulators by a Facebook whistleblower “reveal that the social media giant has privately and meticulously tracked real-world harms exacerbated by its platforms,” reports the Washington Post.

Yet it also reports that at the same time Facebook “ignored warnings from its employees about the risks of their design decisions and exposed vulnerable communities around the world to a cocktail of dangerous content.”

The whistleblower also argued that, due to Mark Zuckerberg’s “unique degree of control” over Facebook, he is ultimately personally responsible for what the Post describes as “a litany of societal harms caused by the company’s relentless pursuit of growth.” Zuckerberg testified last year before Congress that the company removes 94 percent of the hate speech it finds before a human reports it. But in internal documents, researchers estimated that the company was removing less than 5 percent of all hate speech on Facebook…

For all Facebook’s troubles in North America, its problems with hate speech and misinformation are dramatically worse in the developing world. Documents show that Facebook has meticulously studied its approach abroad, and is well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes. According to one 2020 summary, the vast majority of its efforts against misinformation — 84 percent — went toward the United States, the documents show, with just 16 percent going to the “Rest of World,” including India, France and Italy…

Facebook chooses maximum engagement over user safety. Zuckerberg has said the company does not design its products to persuade people to spend more time on them. But dozens of documents suggest the opposite. The company exhaustively studies potential policy changes for their effects on user engagement and other factors key to corporate profits.

Amid this push for user attention, Facebook abandoned or delayed initiatives to reduce misinformation and radicalization… Starting in 2017, Facebook’s algorithm gave emoji reactions like “angry” five times the weight as “likes,” boosting these posts in its users’ feeds. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business. The company’s data scientists eventually confirmed that “angry” reaction, along with “wow” and “haha,” occurred more frequently on “toxic” content and misinformation. Last year, when Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found.
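
As a rough illustration of how reaction weighting can tilt a feed-ranking algorithm toward provocative posts, here is a minimal sketch. The 1x “like” versus 5x emoji weights come from the reporting above; everything else (the fields, the scoring function, the numbers) is invented and is not Facebook’s actual ranking code.

```python
# Toy feed-ranking sketch: reaction-weighted engagement score.
# The 1x "like" vs 5x emoji-reaction weights reflect the reporting above;
# the rest (fields, formula, numbers) is purely illustrative.

REACTION_WEIGHTS = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}

def engagement_score(reactions: dict) -> int:
    """Sum reactions weighted by how much the ranker values each type."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count for kind, count in reactions.items())

calm_post  = {"like": 900, "angry": 10}
angry_post = {"like": 100, "angry": 300}

print(engagement_score(calm_post))   # 950
print(engagement_score(angry_post))  # 1600 -- the outrage-heavy post wins the ranking
```
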
The Post also contacted a Facebook spokeswoman for their response. The spokeswoman denied that Zuckerberg “makes decisions that cause harm” and dismissed the findings as being “based on selected documents that are mischaracterized and devoid of any context…”

Responding to the spread of specific pieces of misinformation on Facebook, the spokeswoman went as far as to acknowledge that at Facebook, “We have no commercial or moral incentive to do anything other than give the maximum number of people as much of a positive experience as possible.”

She added that the company is “constantly making difficult decisions.”

Source: What Else Do the Leaked ‘Facebook Papers’ Show? – Slashdot

‘A Mistake by YouTube Shows Its Power Over Media’ – and Kafka-esque arbitration rules

“Every hour, YouTube deletes nearly 2,000 channels,” reports the New York Times. “The deletions are meant to keep out spam, misinformation, financial scams, nudity, hate speech and other material that it says violates its policies.

“But the rules are opaque and sometimes arbitrarily enforced,” they write — and sometimes, YouTube does end up making mistakes. The gatekeeper role leads to criticism from multiple directions. Many on the right of the political spectrum in the United States and Europe claim that YouTube unfairly blocks them. Some civil society groups say YouTube should do more to stop the spread of illicit content and misinformation… Roughly 500 hours of video are uploaded to YouTube every minute globally in different languages. “It’s impossible to get our minds around what it means to try and govern that kind of volume of content,” said Evelyn Douek, senior research fellow at the Knight First Amendment Institute at Columbia University. “YouTube is a juggernaut, by some metrics as big or bigger than Facebook.”

In its email on Tuesday morning, YouTube said Novara Media [a left-leaning London news group] was guilty of “repeated violations” of YouTube’s community guidelines, without elaborating. Novara’s staff was left guessing what had caused the problem. YouTube typically has a three-strikes policy before deleting a channel. It had penalized Novara only once before… Novara’s last show released before the deletion was about sewage policy, which hardly seemed worthy of YouTube’s attention. One of the organization’s few previous interactions with YouTube was when the video service sent Novara a silver plaque for reaching 100,000 subscribers…

Staff members worried it had been a coordinated campaign by critics of their coverage to file complaints with YouTube, triggering its software to block their channel, a tactic sometimes used by right-wing groups to go after opponents…. An editor, Gary McQuiggin, filled out YouTube’s online appeal form. He then tried using YouTube’s online chat bot, speaking with a woman named “Rose,” who said, “I know this is important,” before the conversation crashed. Angry and frustrated, Novara posted a statement on Twitter and other social media services about the deletion. “We call on YouTube to immediately reinstate our account,” it said. The post drew attention in the British press and from members of Parliament.

Within a few hours, Novara’s channel had been restored. Later, YouTube said Novara had been mistakenly flagged as spam, without providing further detail.

“We work quickly to review all flagged content,” YouTube said in a statement, “but with millions of hours of video uploaded on YouTube every day, on occasion we make the wrong call.”

But Ed Procter, chief executive of the Independent Monitor for the Press, told the Times that it was at least the fifth time that a news outlet had material deleted by YouTube, Facebook or Twitter without warning.

Source: ‘A Mistake by YouTube Shows Its Power Over Media’ – Slashdot

So if you have friends in Parliament you can get YouTube to have a look at unbanning you, but if you only have a few hundred thousand followers you are fucked.

It’s a bit like Amazon, except more people depend on the Amazon marketplace for a living:

At Amazon, Some Brands Get More Protection From Fakes Than Others

Dirty dealing in the $175 billion Amazon Marketplace

Amazon’s Alexa Collects More of Your Data Than Any Other Smart Assistant

Our smart devices are listening. Whether it’s personally identifiable information, location data, voice recordings, or shopping habits, our smart assistants know far more than we realize.

[…]

All five services collect your name, phone number, device location, and IP address; the names and numbers of your contacts; your interaction history; and the apps you use. If you don’t like that information being stored, you probably shouldn’t use a voice assistant.

[…]

Keep in mind that no voice assistant provider is truly interested in protecting your privacy. For instance, Google Assistant and Cortana maintain a log of your location history and routes, Alexa and Bixby record your purchase history, and Siri tracks who is in your Apple Family.

[…]

If you’re looking to take control of your smart assistant, you can stop Alexa from sending your recordings to Amazon, turn off Google Assistant and Bixby, and manage Siri’s data collection habits.

Source: Amazon’s Alexa Collects More of Your Data Than Any Other Smart Assistant

Intel open-sources AI-powered tool to spot bugs in code

Intel today open-sourced ControlFlag, a tool that uses machine learning to detect problems in computer code — ideally to reduce the time required to debug apps and software. In tests, the company’s machine programming research team says that ControlFlag has found hundreds of defects in proprietary, “production-quality” software, demonstrating its usefulness.

[…]

ControlFlag, which works with any programming language containing control structures (i.e., blocks of code that specify the flow of control in a program), aims to cut down on debugging work by leveraging unsupervised learning. With unsupervised learning, an algorithm is subjected to “unknown” data for which no previously defined categories or labels exist. The machine learning system — ControlFlag, in this case — must teach itself to classify the data, processing the unlabeled data to learn from its inherent structure.

ControlFlag continually learns from unlabeled source code, “evolving” to make itself better as new data is introduced. While it can’t yet automatically mitigate the programming defects it finds, the tool provides suggestions for potential corrections to developers, according to Gottschlich.
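
ControlFlag’s models are far more sophisticated than this, but the core idea of learning what typical control structures look like and then flagging rare ones can be sketched in a few lines. The corpus, the pattern abstraction, and the “seen only once” threshold below are all invented for illustration and are not Intel’s algorithm.

```python
# Toy anomaly detector for C-style "if" conditions, in the spirit of
# unsupervised tools like ControlFlag (not its actual algorithm).
import re
from collections import Counter

corpus = [
    "if (x == 7)", "if (ptr != NULL)", "if (count == 0)",
    "if (x == y)", "if (flag != 0)", "if (n == limit)",
    "if (x = 7)",            # likely typo: assignment instead of comparison
]

def pattern(cond: str) -> str:
    """Abstract identifiers and literals away so only the structure remains."""
    return re.sub(r"[A-Za-z_]\w*|\d+", "ID", cond)

counts = Counter(pattern(c) for c in corpus)

for cond in corpus:
    seen = counts[pattern(cond)]
    if seen == 1:  # a structure this corpus has never seen anywhere else
        print(f"suspicious: {cond!r} (structure occurs {seen}x)")
```
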

[…]

AI-powered coding tools like ControlFlag, as well as platforms like Tabnine, Ponicode, Snyk, and DeepCode, have the potential to reduce costly interactions between developers, such as Q&A sessions and repetitive code review feedback. IBM and OpenAI are among the many companies investigating the potential of machine learning in the software development space. But studies have shown that AI has a ways to go before it can replace many of the manual tasks that human programmers perform on a regular basis.

Source: Intel open-sources AI-powered tool to spot bugs in code | VentureBeat

Internet Service Providers Collect, Sell Horrifying Amount of Sensitive Data, Government Study Concludes

The new FTC report studied the privacy practices of six unnamed broadband ISPs and their advertising arms, and found that the companies routinely collect an ocean of consumer location, browsing, and behavioral data. They then share this data with dodgy middlemen via elaborate business arrangements that often aren’t adequately disclosed to broadband consumers.

“Even though several of the ISPs promise not to sell consumers personal data, they allow it to be used, transferred, and monetized by others and hide disclosures about such practices in fine print of their privacy policies,” the FTC report said.

The FTC also found that while many ISPs provide consumers tools allowing them to opt out of granular data collection, those tools are cumbersome to use—when they work at all. 

[…]

The agency’s report also found that while ISPs promise to only keep consumer data for as long as needed for “business purposes,” the definition of what constitutes a “business purpose” is extremely broad and varies among broadband providers and wireless carriers.

The report repeatedly cites Motherboard reporting showing how wireless companies have historically sold sensitive consumer location data to dubious third parties, often without user consent. This data has subsequently been abused by everyone from bounty hunters and stalkers to law enforcement and those posing as law enforcement.

The FTC was quick to note that because ISPs have access to the entirety of the data that flows across the internet and your home network, they often have access to even more data than what’s typically collected by large technology companies, ad networks, and app makers.


That includes the behavior of internet of things devices connected to your network, your daily movements, your online browsing history, clickstream data (not only which sites you visit but how much time you linger there), email and search data, race and ethnicity data, DNS records, your cable TV viewing habits, and more.

In some instances ISPs have even developed tracking systems that embed each packet a user sends over the internet with an individual identifier, allowing monitoring of user behavior in granular detail. Wireless carrier Verizon was fined $1.3 million in 2016 for implementing such a system without informing consumers or letting them opt out.

“Unlike traditional ad networks whose tracking consumers can block through browser or mobile device settings, consumers cannot use these tools to stop tracking by these ISPs, which use ‘supercookie’ technology to persistently track users,” the FTC report said.
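
To make the “supercookie” mechanism concrete: a carrier middlebox rewrote plain-HTTP requests in transit, stamping each one with a persistent per-subscriber header (Verizon’s was called X-UIDH). The sketch below is a toy model of that rewrite, with an invented header name, not carrier code; it only works because the traffic is unencrypted, which is why HTTPS blunts this particular trick.

```python
# Toy model of header-injection tracking on unencrypted HTTP traffic.
# Real deployments did this in carrier middleboxes; this is illustration only.
import hashlib

def inject_tracking_header(request_headers: dict, subscriber_id: str) -> dict:
    """Return headers as a website would receive them after the middlebox rewrite."""
    tagged = dict(request_headers)
    # A stable, opaque token per subscriber: clearing cookies does nothing about it.
    tagged["X-Subscriber-Token"] = hashlib.sha256(subscriber_id.encode()).hexdigest()[:24]
    return tagged

original = {"Host": "example.com", "User-Agent": "Mozilla/5.0"}
print(inject_tracking_header(original, "account-0042"))
# Every plain-HTTP request from this line now carries the same identifier,
# letting ad networks follow the user across sites regardless of browser settings.
```
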

[…]

Source: Internet Service Providers Collect, Sell Horrifying Amount of Sensitive Data, Government Study Concludes

Researchers design antibodies that destroy old cells, slowing down aging

No one knows why some people age worse than others and develop diseases -such as Alzheimer’s, fibrosis, type 2 diabetes or some types of cancer- associated with this aging process. One explanation for this could be the degree of efficiency of each organism’s response to the damage sustained by its cells during its life, which eventually causes them to age. In relation to this, researchers at the Universitat Oberta de Catalunya (UOC) and the University of Leicester (United Kingdom) have developed a new method to remove old cells from tissues, thus slowing down the aging process.

Specifically, they have designed an antibody that acts as a smart bomb able to recognize specific proteins on the surface of these aged or senescent cells. It then attaches itself to them and releases a drug that removes them without affecting the rest, thus minimizing any potential side effects.

[…]

“We now have, for the first time, an antibody-based drug that can be used to help slow down aging in humans,” noted Salvador Macip, the leader of this research and a doctor and researcher at the UOC and the University of Leicester.

“We based this work on existing cancer therapies that target specific proteins present on the surface of cancer cells, and then applied them to senescent cells,” explained the expert.

All cells have a mechanism known as “cellular senescence” that halts the division of damaged cells and removes them to stop them from reproducing. This mechanism helps slow down the progress of cancer, for example, as well as helping model tissue at the embryo development stage.

However, in spite of being a very beneficial biological mechanism, it contributes to the development of diseases when the organism reaches old age. This seems to be because the immune system is no longer able to efficiently remove these senescent cells, which gradually accumulate in tissues and detrimentally affect their functioning.

[…]

The drug designed by Macip and his team is a second-generation senolytic with high specificity and remote-controlled delivery. They started from the results of a previous study that looked at the “surfaceome,” the proteins on the cell’s surface, to identify those proteins that are only present in senescent cells. “They’re not universal: some are more present than others on each type of aged cell,” said Macip.

In this new work, the researchers used a monoclonal antibody trained to recognize senescent cells and attach to them. “Just like our antibodies recognize germs and protect us from them, we’ve designed these antibodies to recognize old cells. In addition, we’ve given them a toxic load to destroy them, as if they were a remote-controlled missile,” said the researcher, who is the head of the University of Leicester’s Mechanisms of Cancer and Aging Lab.

Treatment could start to be given as soon as the first symptoms of the disease, such as Alzheimer’s, type 2 diabetes, Parkinson’s, arthritis, cataracts or some tumors, appear. In the long term, the researchers believe that it could even be used to achieve healthier aging in some circumstances.

Source: Researchers design antibodies that destroy old cells, slowing down aging

Study: Recycled Lithium Batteries as Good as Newly Mined

[…]

While the EV battery recycling industry is starting to take off, getting carmakers to use recycled materials remains a hard sell. “In general, people’s impression is that recycled material is not as good as virgin material,” says Yan Wang, a professor of mechanical engineering at Worcester Polytechnic Institute. “Battery companies still hesitate to use recycled material in their batteries.”

A new study by Wang and a team including researchers from the US Advanced Battery Consortium (USABC) and battery company A123 Systems shows that battery and carmakers needn’t worry. The results, published in the journal Joule, show that batteries with recycled cathodes can be as good as, or even better than, those using new state-of-the-art materials.

The team tested batteries with recycled NMC111 cathodes, the most common flavor of cathode containing a third each of nickel, manganese, and cobalt. The cathodes were made using a patented recycling technique that Battery Resourcers, a startup Wang co-founded, is now commercializing.

[…]

The researchers made 11 Ampere-hour industry-standard pouch cells loaded with materials at the same density as EV batteries. Engineers at A123 Systems did most of the testing, Wang says, using a protocol devised by the USABC to meet commercial viability goals for plug-in hybrid electric vehicles. He says the results prove that recycled cathode materials are a viable alternative to pristine materials.

[…]

“We are the only company that gives an output that is a cathode material,” he says. “Other companies make elements. So their value added is less.”

Their technology involves shredding batteries and removing the steel cases, aluminum and copper wires, plastics, and pouch materials for recycling. The remaining black mass is dissolved in solvents, and the graphite, carbon and impurities are filtered out or chemically separated. Using a patented chemical technique, the nickel, manganese and cobalt are then mixed in desired ratios to make cathode powders.

[…]

Source: Study: Recycled Lithium Batteries as Good as Newly Mined – IEEE Spectrum

Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments or companies. Apple basically installing spyware under a nice name.

In August, Apple declared that combating the spread of CSAM (child sexual abuse material) was more important than protecting millions of users who’ve never used their devices to store or share illegal material. While encryption would still protect users’ data and communications (in transit and at rest), Apple had given itself permission to inspect data residing on people’s devices before allowing it to be sent to others.

This is not a backdoor in a traditional sense. But it can be exploited just like an encryption backdoor if government agencies want access to devices’ contents or mandate companies like Apple do more to halt the spread of other content governments have declared troublesome or illegal.

Apple may have implemented its client-side scanning carefully after weighing the pros and cons of introducing a security flaw, but there’s simply no way to engage in this sort of scanning without creating a very large and slippery slope capable of accommodating plenty of unwanted (and unwarranted) government intercession.
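
Stripped of the cryptographic machinery, client-side scanning means checking a user’s files against a list someone else controls before those files leave the device. The sketch below shows only that trust structure; real systems such as Apple’s use perceptual or neural hashes with threshold and blinded matching rather than the plain SHA-256 set membership assumed here, and the blocklist entry and callback name are invented.

```python
# Minimal sketch of client-side matching against an opaque blocklist.
# Real CSS systems use perceptual hashes and blinded/threshold matching,
# not exact SHA-256 -- this only illustrates where the trust sits.
import hashlib
from pathlib import Path

# The device owner never learns what these digests correspond to.
BLOCKLIST = {
    "3f79bb7b435b05321651daefd374cd21b31d4c4b7e1b9a5f3d2c1e0a9b8c7d6e",  # example digest
}

def scan_before_upload(path: Path) -> bool:
    """Return True if the file may be uploaded, False if it is flagged."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest not in BLOCKLIST

# for photo in Path("~/Pictures").expanduser().glob("*.jpg"):
#     if not scan_before_upload(photo):
#         report_to_vendor(photo)  # hypothetical callback -- the contested step
```
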

Apple has put this program on hold for the time being, citing concerns raised by pretty much everyone who knows anything about client-side scanning and encryption. The conclusions that prompted Apple to step away from the precipice of this slope (at least momentarily) have been compiled in a report [PDF] on the negative side effects of client-side scanning, written by a large group of cybersecurity and encryption experts.

[…]

Only policy decisions prevent the scanning expanding from illegal abuse images to other material of interest to governments; and only the lack of a software update prevents the scanning expanding from static images to content stored in other formats, such as voice, text, or video.

And if people don’t think governments will demand more than Apple’s proactive CSAM efforts, they haven’t been paying attention. CSAM is only the beginning of the list of content governments would like to see tech companies target and control.

While the Five Eyes governments and Apple have been talking about child sex-abuse material (CSAM) —specifically images— in their push for CSS, the European Union has included terrorism and organized crime along with sex abuse. In the EU’s view, targeted content extends from still images through videos to text, as text can be used for both sexual solicitation and terrorist recruitment. We cannot talk merely of “illegal” content, because proposed UK laws would require the blocking online of speech that is legal but that some actors find upsetting.

Once capabilities are built, reasons will be found to make use of them. Once there are mechanisms to perform on-device censorship at scale, court orders may require blocking of nonconsensual intimate imagery, also known as revenge porn. Then copyright owners may bring suit to block allegedly infringing material.

That’s just the policy and law side. And that’s only a very brief overview of clearly foreseeable expansions of CSS to cover other content, which also brings with it concerns about it being used as a tool for government censorship. Apple has already made concessions to notoriously censorial governments like China’s in order to continue to sell products and services there.

[…]

CSS is at odds with the least-privilege principle. Even if it runs in middleware, its scope depends on multiple parties in the targeting chain, so it cannot be claimed to use least-privilege in terms of the scanning scope. If the CSS system is a component used by many apps, then this also violates the least-privilege principle in terms of scope. If it runs at the OS level, things are worse still, as it can completely compromise any user’s device, accessing all their data, performing live intercept, and even turning the device into a room bug.

CSS has difficulty meeting the open-design principle, particularly when the CSS is for CSAM, which has secrecy requirements for the targeted content. As a result, it is not possible to publicly establish what the system actually does, or to be sure that fixes done in response to attacks are comprehensive. Even a meaningful audit must trust that the targeted content is what it purports to be, and so cannot completely test the system and all its failure modes.

Finally, CSS breaks the psychological-acceptability principle by introducing a spy in the owner’s private digital space. A tool that they thought was theirs alone, an intimate device to guard and curate their private life, is suddenly doing surveillance on behalf of the police. At the very least, this takes the chilling effect of surveillance and brings it directly to the owner’s fingertips and very thoughts.

[…]

Despite this comprehensive report warning against the implementation of client-side scanning, there’s a chance Apple may still roll its version out. And once it does, the pressure will be on other companies to do at least as much as Apple is doing to combat CSAM.

Source: Report: Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments | Techdirt

Client-side scanning is like installing listening software on a device. Once anyone has access to install whatever they like, there is nothing stopping them from listening in to everything. Despite the technical-sounding name, CSS basically means the manufacturer installing spyware on your device.

‘Flight Simulator: GOTY Edition’ adds new aircraft and locations on November 18th

Microsoft is spicing up Flight Simulator with an expanded re-release, although this one may be more ambitious than some. It’s releasing Flight Simulator: Game of the Year Edition on November 18th with both a heaping of new content as well as some meaningful feature upgrades. To start, there are five new stand-out aircraft, including the F/A-18 Super Hornet — you won’t have to wait until the Top Gun expansion to buzz the tower in a fighter jet. You’ll also get to fly the VoloCity air taxi, PC-6 Porter short-takeoff aircraft, the bush flying-oriented NX Cub and the single-seat Aviat Pitts Special S1S.

The GOTY upgrade adds eight airports, including Marine Corps Air Station Miramar and Patrick Space Force Base. Eight cities will get photogrammetry detail upgrades, such as Helsinki, Nottingham and Utrecht. There are also new tutorials (such as bush flying and IFR) and Discovery Flights.

The update adds useful features, too. You’ll have early access to DirectX 12 features, an improved weather system and a developer mode replay system, among other improvements.

Most notably, you won’t have to pay for any of this as a veteran player— existing Flight Simulator owners will receive a free update on both Windows PCs and Xbox Series X/S. The paid GOTY release exists chiefly to entice first-timers. For everyone else, this is billed as a “thank you” upgrade that could keep them coming back.

Source: ‘Flight Simulator: GOTY Edition’ adds new aircraft and locations on November 18th | Engadget

Facial recognition scheme in place in some British schools – more to come

Facial recognition technology is being employed in more UK schools to allow pupils to pay for their meals, according to reports today.

In North Ayrshire Council, a Scottish authority encompassing the Isle of Arran, nine schools are set to begin processing meal payments for school lunches using facial scanning technology.

The authority and the company implementing the technology, CRB Cunninghams, claim the system will help reduce queues and is less likely to spread COVID-19 than card payments and fingerprint scanners, according to the Financial Times.

Speaking to the publication, David Swanston, the MD of supplier CRB Cunninghams, said the cameras verify the child’s identity against “encrypted faceprint templates”, and will be held on servers on-site at the 65 schools that have so far signed up.
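
Verifying against a “faceprint template” typically means computing an embedding vector from the live camera image and comparing it with a stored vector for the enrolled pupil. The sketch below shows that comparison step only; the vectors, the similarity metric and the threshold are assumptions for illustration, since the article does not describe the vendor’s actual implementation.

```python
# Toy 1:1 face verification: cosine similarity between embedding vectors.
# The embeddings, threshold, and storage format here are all invented.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # vendor-tuned in real systems; arbitrary here

enrolled_template = np.array([0.12, 0.80, 0.33, 0.41])  # stored "faceprint"
live_capture      = np.array([0.10, 0.79, 0.35, 0.40])  # embedding from the camera

if cosine_similarity(enrolled_template, live_capture) >= THRESHOLD:
    print("match: charge this pupil's meal account")
else:
    print("no match: fall back to card or PIN")
```
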

[…]

North Ayrshire council said 97 per cent of parents had given their consent for the new system, although some said they were unsure whether their children had been given enough information to make their decision.

Seemingly unaware of the controversy surrounding facial recognition, education solutions provider CRB Cunninghams announced its introduction of the technology in schools in June as the “next step in cashless catering.”

[…]

Privacy campaigners voiced concerns that moving the technology into schools merely for payment was needlessly normalising facial recognition.

“No child should have to go through border style identity checks just to get a school meal,” Silkie Carlo of the campaign group Big Brother Watch told The Reg.

“We are supposed to live in a democracy, not a security state. This is highly sensitive, personal data that children should be taught to protect, not to give away on a whim. This biometrics company has refused to disclose who else children’s personal information could be shared with and there are some red flags here for us.

“Facial recognition technology typically suffers from inaccuracy, particularly for females and people of colour, and we’re extremely concerned about how this invasive and discriminatory system will impact children.”

[…]

Those concerned about the security of schools systems now storing children’s biometric data will not be assured by the fact that educational establishments have become targets for cyber-attacks.

In March, the Harris Federation, a not-for-profit charity responsible for running 50 primary and secondary academies in London and Essex, became the latest UK education body to fall victim to ransomware. The institution said it was “at least” the fourth multi-academy trust targeted just that month alone. Meanwhile, South and City College Birmingham earlier this year told 13,000 students that all lectures would be delivered via the web because a ransomware attack had disabled its core IT systems.

[…]

Source: Facial recognition scheme in place in some British schools • The Register

The students probably gave their consent because if they didn’t, they wouldn’t get any lunch. The problem with biometrics is that they don’t change. So if someone steals yours, then it’s stolen forever. It’s not a password you can reset.

Hacker steals government ID database for Argentina’s entire population

A hacker has breached the Argentinian government’s IT network and stolen ID card details for the country’s entire population, data that is now being sold in private circles.

The hack, which took place last month, targeted RENAPER, which stands for Registro Nacional de las Personas, translated as National Registry of Persons.

The agency is a crucial cog inside the Argentinian Interior Ministry, where it is tasked with issuing national ID cards to all citizens, data that it also stores in digital format as a database accessible to other government agencies, acting as a backbone for most government queries for citizens’ personal information.

Lionel Messi and Sergio Aguero data leaked on Twitter

The first evidence that someone breached RENAPER surfaced earlier this month on Twitter when a newly registered account named @AnibalLeaks published ID card photos and personal details for 44 Argentinian celebrities.

This included details for the country’s president Alberto Fernández, multiple journalists and political figures, and even data for soccer superstars Lionel Messi and Sergio Aguero.

A day after the images and personal details were published on Twitter, the hacker also posted an ad on a well-known hacking forum, offering to look up the personal details of any Argentinian user.

Argentina-DB
Image: The Record

Faced with the media fallout following the Twitter leaks, the Argentinian government confirmed a security breach three days later.

In an October 13 press release, the Ministry of Interior said its security team discovered that a VPN account assigned to the Ministry of Health was used to query the RENAPER database for 19 photos “in the exact moment in which they were published on the social network Twitter.”

Officials added that “the [RENAPER] database did not suffer any data breach or leak,” and authorities are now investigating eight government employees over a possible role in the leak.

Hacker has a copy of the data, plans to sell and leak it

However, The Record contacted the individual who was renting access to the RENAPER database on hacking forums.

In a conversation earlier today, the hacker said they have a copy of the RENAPER data, contradicting the government’s official statement.

The individual proved their statement by providing the personal details, including the highly sensitive Trámite number, of an Argentinian citizen of our choosing.

[…]

Source: Hacker steals government ID database for Argentina’s entire population – The Record by Recorded Future

Yet again we see how centralised databases are such a good idea. And if countries are so terrible at protecting extremely sensitive data, how do you think weakening protections by allowing countries master key type access to encrypted data is going to make anything better for anyone?

Cybercrime Group Has Hacked Telecoms All Over the World since at least 2016

[…]A hacker gang, […] has been infiltrating telecoms throughout the world to steal phone records, text messages, and associated metadata directly from carrier users.

That’s according to a new report from cybersecurity firm CrowdStrike, which published a technical analysis of the mysterious group’s hacking campaign on Tuesday. The report, which goes into a significant amount of detail, shows that the hackers behind the campaign have managed to infiltrate 13 different global telecoms in the span of just two years.

Researchers say that the group, which has been active since 2016, uses highly sophisticated hacking techniques and customized malware to infiltrate and embed within networks. Reuters reports that this has included exfiltrating “calling records and text messages” directly from carriers. Earlier research on the group suggests it has also been known to target managed service providers as an entry point into specific industries—such as finance and consulting.[…]

Source: Cybercrime Group Has Hacked Telecoms All Over the World

Facebook fined GBP 50m by UK for not supplying correct info on giphy takeover

The UK’s Competition and Markets Authority (CMA) has smacked Facebook with a £50m ($68.7m) fine for “deliberately” not giving it the full picture about its ongoing $400m acquisition of gif-slinger Giphy.

The move – fingered by the CMA as a “major breach” – comes just weeks after the antisocial network dismissed the UK regulator’s initial findings as being based on “fundamental errors”, and just hours after the US Department of Justice and its Department of Labor announced separate agreements with the firm under which it will fork over $14.25m to settle allegations of discriminatory hiring practices.

Facebook first announced its intention to buy the image platform, which hosts a searchable database of short looping soundless animated GIFs – many of which are sourced from reality TV and films – in May last year. Giphy also hosts MP4 looped video clips (so users can “enjoy” audio), which it also unaccountably calls gifs. Pinterest, Reddit and Salesforce’s comms firm Slack have all integrated Giphy into their platforms so you can “react” to friends and colleagues. Facebook’s acquisition values the company at $400m.

[…]

Bamford said companies were not required to seek the CMA’s approval before they completed an acquisition but noted that “if they decide to go ahead with a merger, we can stop the companies from integrating further if we think consumers might be affected and an investigation is needed.”

He added: “We warned Facebook that its refusal to provide us with important information was a breach of the order but, even after losing its appeal in two separate courts, Facebook continued to disregard its legal obligations.

“This should serve as a warning to any company that thinks it is above the law.”

[…]

Source: Facebook fined by UK competition body • The Register

Why does Dutch supermarket Albert Heijn have cameras looking at you at the self-checkout?

The Party for the Animals (PvdD) wants clarity from outgoing Minister for Legal Protection Dekker about a camera on Albert Heijn’s self-scanner. It concerns the PS20 from manufacturer Zebra. According to this company, the camera on the self-scanner supports facial recognition to automatically identify customers. PvdD MPs Van Raan and Wassenberg want to know whether facial recognition is used in Albert Heijn stores in any way. The minister must also explain what legal basis Albert Heijn or other supermarket chains can rely on if they decide to use facial recognition. Finally, the PvdD MPs want to know what Minister Dekker can do to prevent supermarkets from using facial recognition now or in the future.

Source: PvdD wil opheldering over camera op zelfscanner van Albert Heijn – Emerce

Canon Sued for Disabling All-in-One Printer When Ink Runs Out

A customer fed up with the tyranny of home printers is suing Canon for disabling multiple functions on an all-in-one printer when it runs out of ink.

Consumer printer makers have long used the razor blade business model—so named after companies who sell razor handles for cheap, but the compatible replacement blades at much higher prices.

[…]

The advent of devices like smartphones and even social media has made sharing photos digitally much easier, which means consumers are printing photos less and less. That has had an effect on the profitability of home printers.

[…]

Leacraft, who is named as the plaintiff in a class-action complaint against Canon filed in a U.S. federal court in New York last week, found that their Canon Pixma MG6320 all-in-one printer would no longer scan or fax documents when it was out of ink, despite neither of those functions requiring any printing at all. According to Bleeping Computer, it’s an issue that dates back to at least 2016 when other customers reported the same problem to Canon through the company’s online forums, and were told by the company’s support people that all the ink cartridges must be installed and contain ink to use all of the printer’s features.

[…]

The complaint points out that Canon promotes its all-in-one printers as having multiple distinct features, including printing, copying, scanning, and sometimes even faxing, but without any warnings that those features are dependent on sufficient levels of ink being available.

[…]

Source: Canon Sued for Disabling All-in-One Printer When Ink Runs Out

At Amazon, Some Brands Get More Protection From Fakes Than Others

There are two classes of merchant on Amazon.com: those who get special protection from counterfeiters and those who don’t. From a report: The first category includes sellers of some big-name brands, such as Adidas, Apple and even Amazon itself. They benefit from digital fortifications that prevent unauthorized sellers from listing certain products — an iPhone, say, or eero router — for sale. Many lesser-known brands belong to the second group and have no such shield. Fred Ruckel, inventor of a popular cat toy called the Ripple Rug, is one of those sellers. A few months ago, knockoff artists began selling versions of his product, siphoning off tens of thousands of dollars in sales and forcing him to spend weeks trying to have the interlopers booted off the site.

Amazon’s marketplace has long been plagued with fakes, a scourge that has made household names like Nike leery of putting their products there. While most items can be uploaded freely to the site, Amazon by 2016 had begun requiring would-be sellers of a select group of products to get permission to list them. The company doesn’t publicize the program, but in the merchant community it has become known as “brand gating.” Of the millions of products sold on Amazon, perhaps thousands are afforded this kind of protection, people who advise sellers say. Most merchants, many of them small businesses, rely on Amazon’s algorithms to ferret out fakes before they appear — an automated process that dedicated scammers have managed to evade.

Source: At Amazon, Some Brands Get More Protection From Fakes Than Others – Slashdot

WhatsApp begins rolling out end-to-end encryption for chat backups

The wait is over. It’s now possible to encrypt your WhatsApp chat history on both Android and iOS, Facebook CEO Mark Zuckerberg announced on Thursday. The company plans to roll out the feature slowly to ensure it can deliver a consistent and reliable experience to all users.

However, once you can access the feature, it will allow you to secure your backups before they hit iCloud or Google Drive. At that point, neither WhatsApp nor your cloud service provider will be able to access the files. It’s also worth mentioning you won’t be able to recover your backups if you ever lose the 64-digit encryption key that secures your chat logs. That said, it’s also possible to secure your backups behind a password, in which case you can recover that if you ever lose it.
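
The general shape of such a scheme, encrypting the backup on the device with a key only the user holds so that neither the messaging provider nor the cloud host can read it, can be sketched as follows. This is not WhatsApp’s actual protocol; the key format, key derivation and cipher choices below are assumptions for illustration only.

```python
# Minimal sketch of client-side backup encryption with a user-held key.
# Not WhatsApp's real design -- it only illustrates the end-to-end property.
import secrets
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 64-digit recovery key the user must write down; lose it, lose the backup.
recovery_key = "".join(secrets.choice("0123456789") for _ in range(64))

# Derive a 256-bit AES key from the digits (a real design would use a salted KDF).
aes_key = hashlib.sha256(recovery_key.encode()).digest()

def encrypt_backup(plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(12)
    return nonce + AESGCM(aes_key).encrypt(nonce, plaintext, None)

blob = encrypt_backup(b'{"chats": [...]}')
# Only this opaque blob goes to iCloud / Google Drive; without the 64 digits,
# neither the cloud provider nor the messaging service can decrypt it.
```
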

While WhatsApp has allowed users to securely message each other since 2016, it only started testing encrypted backups earlier this year. With today’s announcement, the company said it has taken the final step toward providing a full end-to-end encrypted messaging experience.

It’s worth pointing out that end-to-end encryption doesn’t guarantee your privacy will be fully protected. According to a report The Information published in August, Facebook was looking into an AI that could analyze encrypted data without having to decrypt it so that it could serve ads based on that information. The head of WhatsApp denied the report, but it’s a reminder that there’s more to privacy than merely the existence of end-to-end encryption.

Source: WhatsApp begins rolling out end-to-end encryption for chat backups | Engadget

Moscow metro launches facial recognition payment system despite privacy concerns

More than 240 metro stations across Moscow now allow passengers to pay for a ride by looking at a camera. The Moscow metro has launched what authorities say is the first mass-scale deployment of a facial recognition payment system. According to The Guardian, passengers can access the payment option called FacePay by linking their photo, bank card and metro card to the system via the Mosmetro app. “Now all passengers will be able to pay for travel without taking out their phone, Troika or bank card,” Moscow mayor Sergey Sobyanin tweeted.

In the official Moscow website’s announcement, the city’s Department of Transport said all Face Pay information will be encrypted. The cameras at the designated turnstiles will read a passenger’s biometric key only, and authorities said information collected for the system will be stored in data centers that can only be accessed by interior ministry staff. Moscow’s Department of Information Technology has also assured users that photographs submitted to the system won’t be handed over to the cops.

Still, privacy advocates are concerned over the growing use of facial recognition in the city. Back in 2017, officials added facial recognition tech to the city’s 170,000 security cameras as part of its efforts to ID criminals on the street. Activists filed a case against Moscow’s Department of Technology a few years later in hopes of convincing the courts to ban the use of the technology. However, a court in Moscow sided with the city, deciding that its use of facial recognition does not violate the privacy of citizens. Reuters reported earlier this year, though, that those cameras were also used to identify protesters who attended rallies.

Stanislav Shakirov, the founder of Roskomsvoboda, a group that aims to protect Russians’ digital rights, said in a statement:

“We are moving closer to authoritarian countries like China that have mastered facial technology. The Moscow metro is a government institution and all the data can end up in the hands of the security services.”

Meanwhile, the European Parliament called on lawmakers in the EU earlier this month to ban automated facial recognition in public spaces. It cited evidence that facial recognition AI can still misidentify PoCs, members of the LGBTI+ community, seniors and women at higher rates. In the US, local governments are banning the use of the technology in public spaces, including statewide bans by Massachusetts and Maine. Four Democratic lawmakers also proposed a bill to ban the federal government from using facial recognition.

Source: Moscow metro launches facial recognition payment system despite privacy concerns | Engadget

Of course one of the huge problems with biometrics is that you can’t change them. Once you are compromised, you can’t go and change the password.

New crew docks at China’s first permanent space station

Chinese astronauts on Saturday began their six-month mission on China’s first permanent space station after their spacecraft successfully docked with it.

The astronauts, two men and a woman, were seen floating around the module before speaking via a live-streamed video.

[…]

The space travelers’ Shenzhou-13 spacecraft was launched by a Long March-2F rocket at 12:23 a.m. Saturday and docked with the Tianhe core module of the space station at 6:56 a.m.

The three astronauts entered the station’s core module at about 10 a.m., the China Manned Space Agency said.

They are the second crew to move into China’s Tiangong space station, which was launched last April. The first crew stayed three months.

[…]

The crew will do three spacewalks to install equipment in preparation for expanding the station, assess living conditions in the Tianhe module, and conduct experiments in space medicine and other fields.

China’s military-run space program plans to send multiple crews to the station over the next two years to make it fully functional.

When completed with the addition of two more sections—named Mengtian and Wentian—the station will weigh about 66 tons, much smaller than the International Space Station, which launched its first module in 1998 and weighs around 450 tons.

[…]

Source: New crew docks at China’s first permanent space station

Missouri governor demands prosecution for data breach report – in HTML source code of state website

A Missouri politician has been relentlessly mocked on Twitter after demanding the prosecution of a journalist who found and responsibly reported a vulnerability in a state website.

Mike Parson, governor of Missouri, described reporters for local newspaper the St Louis Post Dispatch (SLPD) as “hackers” after they discovered a web app for the state’s Department of Elementary and Secondary Education was leaking teachers’ private information.

Around 100,000 social security numbers were exposed when the web app was loaded in a user’s browser. The public-facing app was intended to be used by local schools to check teachers’ professional registration status. So that users could tell different teachers of the same name apart, it would accept the last four digits of a teacher’s social security number as a valid search string.

It appears that in the background, the app was retrieving the entire social security number and exposing it to the end user.

The SLPD discovered this by viewing a search results page’s source code. “View source” has been a common feature of web browsers for years, typically available by right-clicking anywhere on a webpage and selecting it from a menu.
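
To underline how little “hacking” was involved: the sensitive values were sitting in the HTML the server sent to every visitor’s browser, and “View source” simply displays that HTML. The snippet below uses invented markup as a stand-in for what such a response might look like; it is not the actual Missouri page.

```python
# Illustration: sensitive data embedded in a server response is visible to
# anyone who presses "View source". Invented markup, not the real DESE page.
import re

html_response = """
<div class="teacher-result" data-ssn="123-45-6789">
  <span class="name">Jane Doe</span>
  <span class="ssn-last4">6789</span>   <!-- only this was meant to be shown -->
</div>
"""

for ssn in re.findall(r'data-ssn="([\d-]+)"', html_response):
    print("exposed in page source:", ssn)
```
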

SLPD reporters told the Missouri Department of Education about the flaw and held off publicising it so officials could fix it – but that wasn’t good enough for the governor.

“The state is committed to bring to justice anyone who hacked our system and anyone who aided and abetted them to do so,” Parson said, according to the Missouri Independent news website. He justified his bizarre outburst by saying the SLPD was “attempting to embarrass the state and sell headlines for their news outlet.”

[…]

Source: Missouri governor demands prosecution for data breach report • The Register

Tesla’s Bringing Car Insurance to Texas W/ New ‘Safety Score’ by eating and selling your location data

After two years of offering car insurance to drivers across California, Tesla’s officially bringing a similar offering to clientele in its new home state of Texas. As Electrek first reported, the big difference between the two is how drivers’ premiums are calculated: in California, the prices were largely determined by statistical evaluations. In Texas, your insurance costs will be calculated in real-time, based on your driving behavior.

Tesla says it grades this behavior using the “Safety Score” feature—the in-house metric designed by the company in order to estimate a driver’s chance of future collision. These scores were recently rolled out in order to screen drivers that were interested in testing out Tesla’s “Full Self Driving” software, which, like the Safety Score itself, is currently in beta. And while the self-driving software release date is, um, kind of up in the air for now, Tesla drivers in the Lone Star State can use their safety score to apply for quotes on Tesla’s website as of today.

As Tesla points out in its own documents, relying on a single score makes the company a bit of an outlier in the car insurance market. Most traditional insurers round up a driver’s costs based on a number of factors that are wholly unrelated to their actual driving: depending on the state, this can include age, gender, occupation, and credit score, all playing a part in defining how much a person’s insurance might cost.

Tesla, on the other hand, relies on a single score, which the company says gets tallied up based on five different factors: the number of forward-collision warnings you get every 1,000 miles, the number of times you “hard brake,” how often you take too-fast turns, how closely you drive behind other drivers, and how often you take your hands off the wheel when Autopilot is engaged.
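
As a rough illustration of how per-mile telemetry factors like these can be rolled into a premium-setting score, here is a toy weighted model. The weights, the scaling and the premium mapping are all invented and are not Tesla’s Safety Score formula.

```python
# Toy driving-behavior score built from telemetry factors like the five listed above.
# Weights, scaling, and the premium mapping are invented -- not Tesla's formula.

def safety_score(fcw_per_1k_miles, hard_brakes_per_1k, aggressive_turns_per_1k,
                 unsafe_following_pct, autopilot_hands_off_events_per_1k):
    penalty = (4.0 * fcw_per_1k_miles
               + 2.0 * hard_brakes_per_1k
               + 2.0 * aggressive_turns_per_1k
               + 0.5 * unsafe_following_pct
               + 3.0 * autopilot_hands_off_events_per_1k)
    return max(0.0, 100.0 - penalty)

def monthly_premium(score, base=150.0):
    """Cheaper for high scores, pricier for low ones (illustrative mapping only)."""
    return round(base * (1.0 + (100.0 - score) / 100.0), 2)

s = safety_score(1.2, 3.0, 2.5, 8.0, 0.5)
print(s, monthly_premium(s))   # roughly 78.7 and a premium around $182
```
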

[…]

Source: Tesla’s Bringing Car Insurance to Texas W/ New ‘Safety Score’

The idea sounds reasonable – but giving Tesla my location data and allowing them to process and sell that doesn’t.

Researchers show Facebook’s ad tools can target a single specific user

A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it’s possible to use Facebook’s targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook’s platform assigns them.

The paper — entitled “Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data” — describes a “data-driven model” that defines a metric showing the probability a Facebook user can be uniquely identified based on interests attached to them by the ad platform.

The researchers demonstrate that they were able to use Facebook’s Ads manager tool to target a number of ads in such a way that each ad only reached a single, intended Facebook user.
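
The paper’s core question, how many interests it takes before a combination points at exactly one of Facebook’s roughly three billion accounts, lends itself to a quick back-of-envelope model. The sketch below assumes independent interests and uses invented prevalence figures; it is not the paper’s model or data.

```python
# Back-of-envelope uniqueness model for interest-based nanotargeting.
# Assumes independent interests and made-up prevalences -- not the paper's model.
import math

N_USERS = 3_000_000_000
interest_prevalence = [0.30, 0.12, 0.05, 0.02, 0.01, 0.004, 0.002]  # invented

expected_others = N_USERS - 1
for k, p in enumerate(interest_prevalence, start=1):
    expected_others *= p
    # Poisson approximation: probability no other user matches all k interests
    p_unique = math.exp(-expected_others)
    print(f"{k} interests: ~{expected_others:,.0f} other matches, "
          f"P(unique) = {p_unique:.3f}")
```
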

[…]

Source: Researchers show Facebook’s ad tools can target a single user | TechCrunch