Google Will Make It a Bit Easier to Turn Off Smart Features That Track You, Slightly Harder for Regulators to Break Up Google

Soon, Google will present you with a clear choice to disable smart features, like Google Assistant reminders to pay your bills and predictive text in Gmail. Whether you like the Gmail mind-reading function that autofills “all the best” and “reaching out,” or have long dreaded the arrival of the machine staring back from the void: it’s your world, Google’s just living in it. According to Google.

We’ve always been able to disable these functions if we bothered hunting through account settings. But “in the coming weeks” Google will show a new blanket setting to “turn off smart features,” which disables features like Smart Compose and Smart Reply in apps like Gmail; the second half of the same prompt controls whether additional Google products (Maps or Assistant, for example) are allowed to be personalized based on data from Gmail, Meet, and Chat.

Google writes in its blog post about the new-ish settings that humans are not looking at your emails to enable smart features, and that Google ads are “not based on your personal data in Gmail,” something CEO Sundar Pichai has likewise said time and again. Google claims to have stopped that practice in 2017, although the following year the Wall Street Journal reported that third-party app developers had freely perused inboxes with little oversight. (When asked whether this is still a problem, a Google spokesperson pointed us to Google’s 2018 effort to tighten security.)

The spokesperson emphasized that the company only uses email contents for security purposes like filtering spam and phishing attempts.

These personalization changes aren’t so much about tightening security as they are another informed-consent defense Google can use to repel the regulatory siege currently being waged against it by lawmakers. In the same vein, Google has expanded incognito mode to Maps and added auto-deletion of data in Location History, Web & App Activity, and YouTube history (though only after a period of a few months).

Inquiries in the U.S. and EU have found that Google’s privacy settings have historically presented the appearance of privacy, rather than privacy itself. After a 2018 AP article exposed the extent of Google’s location data harvesting, an investigation found that turning location off in Android was no guarantee that Google wouldn’t collect location data (though Google has denied this). Plaintiffs in a $5 billion class-action lawsuit filed this summer alleged that “incognito mode” in Chrome didn’t prevent Google from capturing and sharing their browsing history. And last year, French regulators fined Google nearly $57 million for violating the General Data Protection Regulation (GDPR) by allegedly burying privacy controls beneath five or six layers of settings. (When asked, the spokesperson said Google has no additional comment on these cases.)

So this is nice, but Google’s announcement also reads like a letter to regulators: “This new setting is designed to reduce the work of understanding and managing [a choice over how data is processed], in view of what we’ve learned from user experience research and regulators’ emphasis on comprehensible, actionable user choices over data.”

Source: Google Will Make It Easier to Turn Off Smart Features

Apple hits back at European activist complaints against unauthorised tracking identifier – says it doesn’t use it… but 3rd parties do

The group, led by campaigner Max Schrems, filed complaints with data protection watchdogs in Germany and Spain alleging that the tracking tool illegally enabled the $2 trillion U.S. tech giant to store users’ data without their consent.

Apple directly rebutted the claims filed by Noyb, the digital rights group founded by Schrems, saying they were “factually inaccurate and we look forward to making that clear to privacy regulators should they examine the complaint”.

Schrems is a prominent figure in Europe’s digital rights movement that has resisted intrusive data-gathering by Silicon Valley’s tech platforms. He has fought two cases against Facebook, winning landmark judgments that forced the social network to change how it handles user data.

Noyb’s complaints were brought against Apple’s use of a tracking code, known as the Identifier for Advertisers (IDFA), that is automatically generated on every iPhone when it is set up.

The code, stored on the device, makes it possible to track a user’s online behaviour and consumption preferences – vital in allowing companies to send targeted adverts.

“Apple places codes that are comparable to a cookie in its phones without any consent by the user. This is a clear breach of European Union privacy laws,” Noyb lawyer Stefano Rossetti said.

Rossetti referred to the EU’s e-Privacy Directive, which requires a user’s consent before such information is stored on or accessed from a device.

Apple said in response that it “does not access or use the IDFA on a user’s device for any purpose”.

It said its aim was to protect the privacy of its users and that the latest release of its iOS 14 operating system gave users greater control over whether apps could link with third parties for the purposes of targeted advertising.

Source: Apple hits back at European activist complaints against tracking tool | Reuters

The complaint against Apple is that the IDFA is set at all without consent from the user. Whether Apple itself accesses it is beside the point; the point is that unspecified 3rd parties (advertisers, hackers, governments, etc.) can.

How the U.S. Military Buys Location Data from Ordinary Apps

The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

[…]

In March, tech publication Protocol first reported that U.S. law enforcement agencies such as Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) were using Locate X. Motherboard then obtained an internal Secret Service document confirming the agency’s use of the technology. Some government agencies, including CBP and the Internal Revenue Service (IRS), have also purchased access to location data from another vendor called Venntel.

“In my opinion, it is practically certain that foreign entities will try to leverage (and are almost certainly actively exploiting) similar sources of private platform user data. I think it would be naïve to assume otherwise,” Mark Tallman, assistant professor at the Department of Emergency Management and Homeland Security at the Massachusetts Maritime Academy, told Motherboard in an email.

THE SUPPLY CHAIN

Some companies obtain app location data through bidstream data, which is information gathered from the real-time bidding that occurs when advertisers pay to insert their adverts into people’s browsing sessions. Firms also often acquire the data from software development kits (SDKs).
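To make “bidstream data” a little more concrete: every time an ad slot inside an app is auctioned, the app’s ad SDK broadcasts a bid request describing the device, and every bidder on the exchange receives it whether or not they win the auction. The sketch below is a heavily simplified request, loosely modeled on the OpenRTB format; every value in it is invented for illustration.

```python
# A heavily simplified ad bid request, loosely modeled on the OpenRTB format.
# Every value here is invented for illustration; the point is that merely
# participating in the auction hands out a location plus an advertising ID.
bid_request = {
    "id": "auction-1234",
    "app": {"bundle": "com.example.weatherapp"},            # which app the slot sits in
    "device": {
        "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",      # advertising identifier
        "os": "iOS",
        "geo": {"lat": 48.8566, "lon": 2.3522, "type": 1},  # type 1 = GPS-derived
    },
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],    # the ad slot for sale
}

print(bid_request["device"]["geo"])  # any bidder on the exchange sees this
```

A broker sitting on the buy side only has to log these requests to build a location history keyed to the advertising identifier, which is, in broad strokes, how bidstream collection is understood to work.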

[…]

In a recent interview with CNN, X-Mode CEO Joshua Anton said the company tracks 25 million devices inside the United States every month, and 40 million elsewhere, including in the European Union, Latin America, and the Asia-Pacific region. X-Mode previously told Motherboard that its SDK is embedded in around 400 apps.

In October the Australian Competition & Consumer Commission published a report about data transfers by smartphone apps. A section of that report included the endpoint, the URL some apps use to send location data back to X-Mode. Developers of the Guardian app, which is designed to protect users from the transfer of location data, also published the endpoint. Motherboard then used that endpoint to discover which specific apps were sending location data to the broker.

Motherboard used network analysis software to observe both the Android and iOS versions of the Muslim Pro app sending granular location data to the X-Mode endpoint multiple times. Will Strafach, an iOS researcher and founder of Guardian, said he also saw the iOS version of Muslim Pro sending location data to X-Mode.

The data transfer also included the name of the wifi network the phone was currently connected to, a timestamp, and information about the phone such as its model, according to Motherboard’s tests.
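For readers who want to reproduce this kind of check, the general method is to route a phone’s traffic through an intercepting proxy and watch for requests to the endpoint in question. The sketch below is a minimal mitmproxy addon along those lines; the watched hostname is a placeholder, not the endpoint the ACCC report or the Guardian developers published.

```python
# flag_endpoint.py -- run with: mitmdump -s flag_endpoint.py
# Minimal mitmproxy addon that logs any request a proxied phone sends to a
# watched host, so you can see which apps phone home to it.
import logging

from mitmproxy import http

WATCHED_HOSTS = {"sdk.example-databroker.com"}  # placeholder, not the real endpoint


def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host in WATCHED_HOSTS:
        logging.warning(
            "possible location upload: %s %s (%d bytes)",
            flow.request.method,
            flow.request.pretty_url,
            len(flow.request.content or b""),
        )
```

To see the HTTPS payloads themselves (the wifi network name, timestamp, and device model mentioned above), the phone also has to be pointed at the proxy and made to trust its certificate authority.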

[…]

 

Source: How the U.S. Military Buys Location Data from Ordinary Apps

Bumble Left Daters’ Location Data Up For Grabs For Over Six Months

Bumble, the dating app behemoth that’s reportedly headed for a major IPO as soon as next year, apparently took over half a year to deal with major security flaws that left sensitive information on millions of its users vulnerable.

That’s according to new research posted over the weekend by cybersecurity firm Independent Security Evaluators (ISE) detailing how a bad actor—even one that was banned from Bumble—could exploit a vulnerability in the app’s underlying code to pull the rough location data for any Bumbler within their city, as well as additional profile data like photos and religious views. Despite being informed about this vulnerability in mid-March, the company didn’t patch the issues until November 12—roughly six and a half months later.

Pre-patch, anyone with a Bumble account could query the app’s API in order to figure out roughly how many miles away any other user in their city happened to be. As the blog’s author, Sanjana Sarda, explained, if a certain creepy someone really wanted to figure out the location of a given Bumble user, it wouldn’t be too hard to set up a handful of accounts, figure out the user’s basic distance from each one, and use that collection of data to triangulate a Bumbler’s precise location.
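The geometry Sarda describes is plain trilateration: three circles, one centered on each throwaway account, intersect at the target. A rough sketch in Python, with made-up coordinates and distances and a flat-earth projection that is only reasonable over city-scale ranges, might look like this.

```python
import math

MILES_PER_DEG_LAT = 69.0  # rough conversion, good enough for a city-scale sketch

def to_xy(lat, lon, lat0, lon0):
    """Project lat/lon onto a flat plane (in miles) around a reference point."""
    x = (lon - lon0) * MILES_PER_DEG_LAT * math.cos(math.radians(lat0))
    y = (lat - lat0) * MILES_PER_DEG_LAT
    return x, y

def trilaterate(obs, lat0, lon0):
    """obs: three (lat, lon, distance_in_miles) tuples, one per fake account.
    Returns an estimated (lat, lon) for the target. Accounts must not be collinear."""
    (x1, y1), (x2, y2), (x3, y3) = [to_xy(la, lo, lat0, lon0) for la, lo, _ in obs]
    r1, r2, r3 = (d for _, _, d in obs)
    # Subtracting the circle equations pairwise leaves a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    lat = lat0 + y / MILES_PER_DEG_LAT
    lon = lon0 + x / (MILES_PER_DEG_LAT * math.cos(math.radians(lat0)))
    return lat, lon

# Three fake accounts scattered around a city, each recording the distance the
# API reported to the same target profile (all numbers invented).
observations = [
    (30.2672, -97.7431, 2.0),
    (30.3072, -97.7031, 3.1),
    (30.2272, -97.7031, 1.4),
]
print(trilaterate(observations, 30.2672, -97.7431))
```

Because the reported distances are rounded, each circle is really a fuzzy band, but moving the fake accounts and repeating the measurement narrows the estimate quickly; the arithmetic stays the same.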

Bumble isn’t the first company to accidentally leave this sort of data freely available. Last year, cybersecurity sleuths were able to glean the precise locations of people using LGBT-centric dating apps like Grindr and Romeo and collate them into a user location map. And those location-data leaks are on top of the deliberate data sharing these sorts of dating apps typically already engage in with a bevy of third-party partners. You would think that an app purporting to be a feminist haven like Bumble might extend its idea of user safety to its data practices.

While some of the issues described by Sarda have been resolved, the belated patch apparently didn’t tackle one of the other major API-based issues described in the blog, which allowed ISE to get unlimited swipes (or “votes” in Bumble parlance), along with access to other premium features like the ability to unswipe or to see who might have swiped right on them. Typically, accessing these features costs a given Bumbler roughly $10 per week.

Source: Bumble Left Daters’ Location Data Up For Grabs For Over Six Months

GitHub Restores YouTube Downloader Following DMCA Takedown, starts to protect developers from DMCA misuse

Last month, GitHub removed a popular tool that is used to download videos from websites like YouTube after it received a DMCA takedown notice from the Recording Industry Association of America. For a moment, it seemed that GitHub might throw developers under the bus in the same fashion that Twitch has recently treated its streamers. But on Monday, GitHub went on the offensive, reinstating the offending tool and saying it would take a more aggressive line on protecting developers’ projects.

Youtube-dl is a command-line program that could, hypothetically, be used to make unauthorized copies of copyrighted material. This potential for abuse prompted the RIAA to send GitHub a scary takedown notice because that’s what the RIAA does all day. The software development platform complied with the notice and unleashed a user outcry over the loss of one of the most popular repositories on the site. Many developers started re-uploading the code to GitHub in protest. After taking some time to review the case, GitHub now says that youtube-dl is all good.

In a statement, GitHub’s Director of Platform Policy Abby Vollmer wrote that there are two reasons that it was able to reverse the decision. The first reason is that the RIAA cited one repo that used the youtube-dl source code and contained references to a few copyrighted songs on YouTube. This was only part of a unit test that the code performs. It listens to a few seconds of the song to verify that everything is working properly but it doesn’t download or distribute any material. Regardless, GitHub worked with the developer to patch out the references and stay on the safe side.

As for the primary youtube-dl source code, lawyers at the Electronic Frontier Foundation decided to represent the developers and presented an argument that resolved GitHub’s concerns that the code circumvents technical measures protecting copyrighted material in violation of Section 1201 of the Digital Millennium Copyright Act. The EFF explained that youtube-dl doesn’t decrypt anything or break through any anti-copying measures. From a technical standpoint, it isn’t much different from a web browser receiving information as intended, and there are plenty of fair use applications for making a copy of materials.

Among the “many legitimate purposes” for using youtube-dl, GitHub listed: “changing playback speeds for accessibility, preserving evidence in the fight for human rights, aiding journalists in fact-checking, and downloading Creative Commons-licensed or public domain videos.” The EFF cited some of the same practical uses and had a few unique additions to its list of benefits, saying that it could be used by “educators to save videos for classroom use, by YouTubers to save backup copies of their own uploaded videos, and by users worldwide to watch videos on hardware that can’t run a standard web browser, or to watch videos in their full resolution over slow or unreliable Internet connections.”
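For what it’s worth, youtube-dl is also usable as an ordinary Python library, which is part of why so many other projects depend on it. A minimal sketch of the “download a Creative Commons-licensed video” use case, with a placeholder URL, looks like this.

```python
# Minimal use of youtube-dl's embedded Python API to save a single video.
# The URL is a placeholder; substitute a Creative Commons-licensed or
# public domain video you are entitled to copy.
import youtube_dl

options = {
    "format": "best",                # best single-file format available
    "outtmpl": "%(title)s.%(ext)s",  # name the file after the video title
}

with youtube_dl.YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=EXAMPLE_ID"])
```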

It’s nice to see GitHub evaluating the argument and moving forward without waiting for a legal process to play out, but the company went further in announcing a new eight-step process for evaluating claims related to Section 1201 that will err on the side of developers. GitHub is also establishing a million-dollar legal fund to provide assistance to open source developers fighting off unwarranted takedown notices. Mea culpa, mea culpa!

Finally, the company said that it would work to improve the law around DMCA notices and it will be “advocating specifically on the anti-circumvention provisions of the DMCA to promote developers’ freedom to build socially beneficial tools like youtube-dl.”

Along with today’s announcement, GitHub CEO Nat Friedman tweeted, “Section 1201 of the DMCA is broken and needs to be fixed. Developers should have the freedom to tinker.”

Source: GitHub Restores YouTube Downloader Following DMCA Takedown

It’s nice to see a large company come down on the right side of copyright for a change.

Worn-out NAND flash blamed for Tesla vehicle gremlins, such as rearview cam failures and silenced audio alerts

Worn-out NAND memory chips can cause a whole host of problems with some Tesla cars, ranging from the failure of the rearview camera to an absence of turn signal chimes and other audio alerts, a watchdog warned this month.

Some 159,000 Tesla Model S and Model X vehicles built between 2012 and 2018 are at risk, we’re told. These all use an infotainment system powered by Nvidia’s Tegra 3 system-on-chips that include 8GB of eMMC NAND storage, which is typically found in phones and cheap laptops. The trouble is that these flash chips are wearing out, having hit their program-erase cycle limits, and are unable to reliably store data, causing glitches in operation. The storage controllers can no longer find good working NAND blocks to use, and thus fail.

According to a probe [PDF] by investigators for Uncle Sam’s National Highway Traffic Safety Administration (NHTSA), at least 30 per cent of the infotainment systems made in “certain build months” are failing due to the eMMC flash being worn out, typically after “three to four years in service.”
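That three-to-four-year figure is roughly what naive flash-endurance arithmetic predicts. In the sketch below only the 8GB capacity comes from the article; the endurance rating, write amplification, and daily logging volume are assumptions for illustration, since neither Tesla nor the NHTSA probe publishes them.

```python
# Back-of-the-envelope eMMC wear estimate. Only CAPACITY_GB comes from the
# article; the other inputs are illustrative assumptions.
CAPACITY_GB = 8            # eMMC size on the Tegra 3 media control unit
PE_CYCLES = 3_000          # assumed program/erase endurance for MLC eMMC
WRITE_AMPLIFICATION = 2.0  # assumed controller overhead for small log writes
DAILY_WRITES_GB = 10       # assumed logging volume per day

total_endurance_gb = CAPACITY_GB * PE_CYCLES                # ~24,000 GB of raw writes
effective_daily_gb = DAILY_WRITES_GB * WRITE_AMPLIFICATION
years = total_endurance_gb / (effective_daily_gb * 365)
print(f"estimated life: {years:.1f} years")                 # ~3.3 years with these inputs
```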

According to the safety administration, this storage breakdown can “result in loss of rearview/backup camera, loss of HVAC (defogging) setting controls (if the HVAC status was OFF status prior to failure.) There is also an impact on the advanced driver assistance support (ADAS), Autopilot system, and turn signal functionality due to the possible loss of audible chimes, driver sensing, and alerts associated with these vehicle functions.”

This is based on 16,000 complaints and infotainment hardware replacement requests submitted by Tesla owners to the automaker.

[…]

Source: Worn-out NAND flash blamed for Tesla vehicle gremlins, such as rearview cam failures and silenced audio alerts • The Register

Nice one, Musk

NSA Spied On Denmark As It Chose Its Future Fighter Aircraft: Report – also FR, NL, DE, NO, SE

Reports in the Danish media allege that the United States spied on the country’s government and its defense industry, as well as other European defense contractors, in an attempt to gain information on its fighter acquisition program. The revelations, published online by DR, Denmark’s public-service broadcaster, concern the run-up to the fighter competition that was eventually won by the U.S.-made Lockheed Martin F-35 stealth fighter.

The report cites anonymous sources suggesting that the U.S. National Security Agency (NSA) targeted Denmark’s Ministry of Finance, the Ministry of Foreign Affairs, and the defense firm Terma, which also contributes to the F-35 Joint Strike Fighter program.

Allegedly, the NSA sought to conduct espionage using an existing intelligence-sharing agreement between the two countries. Under this agreement, the NSA is said to be able to tap fiber-optic communication cables passing through Denmark, with the intercepted traffic stored by the Danish Defense Intelligence Service, or Forsvarets Efterretningstjeneste (FE). Huge amounts of data sourced from the Danish communication cables are stored in an FE data center, built with U.S. assistance, at Sandagergård on the Danish island of Amager, to which the NSA also has access.

This kind of sharing of confidential data is not that unusual within the intelligence community, in which the NSA is known to trade high-level information with similar agencies within the Five Eyes alliance (Australia, Canada, New Zealand, the United Kingdom, and the United States), as well as other close allies, such as Germany and Japan, for example.

It would be hoped, however, that these relationships would not be used by the NSA to secretly gather information on the countries with which it has agreements, which is exactly what is alleged to have taken place in Denmark.

A source told DR that between 2015 and 2016 the NSA wanted to gather information on the Danish defense company Terma in a “targeted search” ahead of Denmark’s decision on a new fighter jet to replace its current fleet of F-16s. This is the competition that the F-35 won in June 2016.

[Image: A Danish F-16 painted in the same colors as the upcoming Danish F-35, over the capital, Copenhagen, in October 2020. Photo: Flyvevåbnets Fototjeneste]

According to DR, the NSA used its Xkeyscore system, which trawls through and analyzes global internet data, to seek information on Terma. An unnamed source said that search criteria had included individual email addresses and phone numbers of company employees.

Officially described as part of the NSA’s “lawful foreign signals intelligence collection system,” Xkeyscore is understood to be able to obtain email correspondence, browser history, chat conversations, and call logs.

In this case, the sources also contend that the NSA used its access to Danish communication cables and FE databases to search for communications related to two other companies, Eurofighter GmbH and Saab, who were respectively offering the Typhoon and Gripen multi-role fighters for the Danish F-16 replacement program. While the Gripen was withdrawn from the Danish competition in 2014, the Typhoon remained in the running until the end, alongside the F-35 and the Boeing F/A-18E/F Super Hornet.

[…]

The whistleblower reports are said to have warned the FE leadership about possible illegalities in an intelligence collaboration between Denmark and the United States to drain Danish internet cables of information that the intelligence services could use in their work. Furthermore, the reports allegedly warned that the NSA was also targeting a number of Denmark’s “closest neighbors,” including France, Germany, the Netherlands, Norway, and Sweden and that some of the espionage conducted by the NSA was judged to be “against Danish interests and goals.”

[…]

Regardless of how the FE and the government react to the latest allegations, if they are substantiated, then the terms of the current U.S.-Danish intelligence-sharing agreement may be judged to be in need of at least a major review. If there is any substance to these allegations, then it’s possible other countries that have made controversial choices to select the F-35 may come under new scrutiny, as well.

Source: NSA Spied On Denmark As It Chose Its Future Fighter Aircraft: Report

Army Hires Company To Develop Cyber Defenses For Its Strykers After They Were Hacked

On Nov. 16, 2020, Virginia-based cybersecurity firm Shift5, Inc. announced that it had received a $2.6 million contract from the Army’s Rapid Capabilities and Critical Technologies Office (RCCTO) to “provide unified cybersecurity prototype kits designed to help protect the operational technology of the Army’s Stryker combat vehicle platform.” The company says it first pitched its plan for these kits at RCCTO’s first-ever Innovation Day event in September 2019.

[…]

“Adversaries demonstrated the ability to degrade select capabilities of the ICV-D when operating in a contested cyber environment,” according to an annual report from the Pentagon’s Office of the Director of Operational Test and Evaluation, or DOT&E, covering activities during the 2018 Fiscal Year. “In most cases, the exploited vulnerabilities pre-date the integration of the lethality upgrades.”

The “lethality upgrades” referred to here center on the integration of a turret armed with a 30mm automatic cannon onto the Infantry Carrier Vehicle (ICV) variant of the Stryker, resulting in the Dragoon version. The indication here is that the cyber vulnerabilities were present in systems also found on unmodified ICVs, suggesting that the issues affect, or at least affected, other Stryker variants as well.

Source: Army Hires Company To Develop Cyber Defenses For Its Strykers After They Were Hacked