Sonos stands accused of seeking to obtain “excessive” amounts of personal data without valid consent in a complaint filed with the UK’s data watchdog.
The complaint, lodged by tech lawyer George Gardiner in a personal capacity, challenges the Sonos privacy policy’s compliance with the General Data Protection Regulation and the UK’s implementation of that law.
It argues that Sonos had not obtained valid consent from users who were asked to agree to a new privacy policy and had failed to meet privacy-by-design requirements.
The company changed its terms in summer 2017 to allow it to collect more data from its users – ostensibly because it was launching voice services. Sonos said that anyone who didn’t accept the fresh Ts&Cs would no longer be able to download future software updates.
Sonos denied at the time that this was effectively bricking the system, but whichever way you cut it, the move would deprecate the kit of users that didn’t accept the terms. The app controlling the system would also eventually become non-functional.
Gardiner pointed out, however, that security risks and an interest in properly maintaining an expensive system meant there was little practical alternative other than to update the software.
This resulted in a mandatory acceptance of the terms of the privacy policy, rendering any semblance of consent void.
“I have no option but to consent to its privacy policy otherwise I will have over £3,000 worth of useless devices,” he said in a complaint sent to the ICO and shared with The Register.
Users setting up accounts are told: “By clicking on ‘Submit’ you agree to Sonos’ Terms and Conditions and Privacy Policy.” This all-or-nothing approach is contrary to data protection law, he argued.
Sonos collects personal data in the form of name, email address, IP addresses and “information provided by cookies or similar technology”.
The system also collects data on room names assigned by users, the controller device, the operating system of the device a person uses and content source.
Sonos said that collecting and processing this data – a slurp that users cannot opt out of – is necessary for the “ongoing functionality and performance of the product and its ability to interact with various services”.
But Gardiner questioned whether it was really necessary for Sonos to collect this much data, noting that his system worked without it prior to August 2017. He added that he does not own a product that requires voice recognition.
I am in exactly the same position – suddenly I had to accept an invasive change of privacy policy, and earlier in March I also had to log in with a Sonos account to get the kit working at all (it wouldn’t update without logging in, and the app only showed the login and update page). This is not what I signed up for when I bought the (expensive!) products.
We’ve been trying to explain for the past few months just how absolutely insane the new EU Terrorist Content Regulation will be for the internet. Among many other bad provisions, the big one is that it would require content removal within one hour as long as any “competent authority” within the EU sends a notice of content being designated as “terrorist” content. The law is set for a vote in the EU Parliament just next week.
And as if they were attempting to show just how absolutely insane the law would be for the internet, multiple European agencies (we can debate if they’re “competent”) decided to send over 500 totally bogus takedown demands to the Internet Archive last week, claiming it was hosting terrorist propaganda content.
In the past week, the Internet Archive has received a series of email notices from Europol’s European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as “terrorist propaganda”. At least one of these mistaken URLs was also identified as terrorist content in a separate take down notice from the French government’s L’Office Central de Lutte contre la Criminalité liée aux Technologies de l’Information et de la Communication (OCLCTIC).
And, as the Archive explains, there’s simply no way (1) that the site could have complied with the Terrorist Content Regulation had it been law last week when the notices arrived, or (2) that it should have blocked all that obviously non-terrorist content.
The Internet Archive has a few staff members that process takedown notices from law enforcement who operate in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night – between midnight and 3am Pacific – and all of the reports were sent outside of the business hours of the Internet Archive.
The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review them after the fact.
It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU’s lists include some of the most visited pages on archive.org and materials that obviously have high scholarly and research value.
Those are the requests from Europol, which unfortunately likely qualifies as a “competent” authority under the law. The Archive also points out a request from both Europol and the French computer crimes unit flagging a page of commentary on the Quran as terrorist content. The French agency told the Archive it needed to take down that content within 24 hours or the Archive might be blocked in France.
Seven people, described as having worked in Amazon’s voice review program, told Bloomberg that they sometimes listen to as many as 1,000 recordings per shift, and that the recordings are associated with the customer’s first name, their device’s serial number, and an account number. Among other clips, these employees and contractors said they’ve reviewed recordings of what seemed to be a woman singing in the shower, a child screaming, and a sexual assault. Sometimes, when recordings were difficult to understand — or when they were amusing — team members shared them in an internal chat room, according to Bloomberg.
In an emailed statement to BuzzFeed News, an Amazon spokesperson wrote that “an extremely small sample of Alexa voice recordings” is annotated, and reviewing the audio “helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.”
[…]
Amazon’s privacy policy says that Alexa’s software provides a variety of data to the company (including your use of Alexa, your Alexa Interactions, and other Alexa-enabled products), but doesn’t explicitly state how employees themselves interact with the data.
Apple and Google, which make two other popular voice-enabled assistants, also employ humans who review audio commands spoken to their devices; both companies say that they anonymize the recordings and don’t associate them with customers’ accounts. Apple’s Siri sends a limited subset of encrypted, anonymous recordings to graders, who label the quality of Siri’s responses. The process is outlined on page 69 of the company’s security white paper. Google also saves and reviews anonymized audio snippets captured by Google Home or Assistant, and distorts the audio.
On an FAQ page, Amazon states that Alexa is not recording all your conversations. Amazon’s Echo smart speakers and the dozens of other Alexa-enabled devices are designed to capture and process audio, but only when a “wake word” — such as “Alexa,” “Amazon,” “Computer,” or “Echo” — is uttered. However, Alexa devices do occasionally capture audio inadvertently and send that audio to Amazon servers or respond to it with triggered actions. In May 2018, an Echo unintentionally sent audio recordings of a woman’s private conversation to one of her husband’s employees.
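As a rough sketch of that gating behaviour – purely illustrative, not Amazon’s implementation, with a hypothetical detector function – the device keeps audio local until a wake word is matched, which is also why a misheard wake word can send unintended audio to the cloud:

```python
# Illustrative sketch of wake-word gating as described above (not Amazon's code):
# audio is discarded on the device unless a wake word scores above a threshold,
# after which the snippet is streamed to the cloud. A false-positive match is
# how unintended audio can end up being uploaded.

WAKE_WORDS = ("alexa", "amazon", "computer", "echo")

def maybe_stream(frame, detector, send_to_cloud, threshold=0.8):
    """detector(frame, word) -> confidence that the audio frame contains the wake word."""
    for word in WAKE_WORDS:
        if detector(frame, word) >= threshold:
            send_to_cloud(frame)   # only now does audio leave the device
            return True
    return False                   # otherwise the frame stays local and is discarded
```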
While the ethics around data collection and consumer privacy have been questioned for years, it wasn’t until Facebook’s Cambridge Analytica scandal that people began to realize how frequently their personal data is shared, transferred, and monetized without their permission.
Cambridge Analytica was by no means an isolated case. Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.
That’s why we at Digital Content Next, the trade association of online publishers I lead, wrote this Washington Post op-ed, “It isn’t just about Facebook, it’s about Google, too” when Facebook first faced Capitol Hill. It’s also why the descriptor “surveillance advertising” is increasingly being used to describe Google and Facebook’s advertising businesses, which use personal data to tailor and micro-target ads.
[…]
The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.
Do you expect Google to collect data about a person’s activities on Google platforms (e.g. Android and Chrome) and apps (e.g. Search, YouTube, Maps, Waze)?
Yes: 48%
No: 52%
Do you expect Google to track a person’s browsing across the web in order to make ads more targeted?
Yes: 43%
No: 57%
Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history.
Do you expect Google to collect data about a person’s locations when a person is not using a Google platform or app?
Yes: 34%
No: 66%
Do you expect Google to track a person’s usage of non-Google apps in order to make ads more targeted?
Yes: 36%
No: 64%
Do you expect Google to buy personal information from data companies and merge it with a person’s online usage in order to make ads more targeted?
Yes: 33%
No: 67%
There was only one question where a small majority of respondents felt that Google was acting according to their expectations: whether Google merges data from search queries with other data it collects on its own services. Respondents also said, though only by a small majority, that they don’t expect Google to connect that data back to the user’s personal account. Google began doing both of these things in 2016, after previously promising it wouldn’t.
Do you expect Google to collect and merge data about a person’s search activities with activities on its other applications?
Yes: 57%
No: 43%
Do you expect Google to connect a variety of user data from Google apps, non-Google apps, and across the web with that user’s personal Google account?
Yes: 48%
No: 52%
Google’s personal data collection practices affect the more than 2 billion people who use devices running their Android operating software and hundreds of millions more iPhone users who rely on Google for browsing, maps, or search. Most of them expect Google to collect some data about them in exchange for use of services. However, as our research shows, a significant majority of consumers do not expect Google to track their activities across their lives, their locations, on other sites, and on other platforms. And as the AP discovered, Google continues to do some of this even after consumers explicitly turn off tracking.
The pledge by one of the world’s biggest automakers to share its closely guarded patents, the second time it has opened up a technology, is aimed at driving industry uptake of hybrids and fending off the challenge of all-battery electric vehicles (EVs).
Toyota said it would grant licenses on nearly 24,000 patents on technologies used in its Prius, the world’s first mass-produced “green” car, and offer to supply competitors with components including motors, power converters and batteries used in its lower-emissions vehicles.
“We want to look beyond producing finished vehicles,” Toyota Executive Vice President Shigeki Terashi told reporters.
“We want to contribute to an increase in take up (of electric cars) by offering not just our technology but our existing parts and systems to other vehicle makers.”
The Nikkei Asian Review first reported Toyota’s plans to give royalty-free access to hybrid-vehicle patents.
Terashi said that the access excluded patents on its lithium-ion battery technology.
[…]
Toyota is also betting on hydrogen fuel cell vehicles (FCVs) as the ultimate zero-emissions vehicle, and as a result, has lagged many of its rivals in marketing all-battery EVs.
In 2015, it said it would allow access to its FCV-related patents through 2020.
Google is trying out a new “Pilot Program” that puts a row of advertisements on the Android TV home screen. XDA Developers was the first to report on the new phenomenon, saying, “We’re currently seeing reports that it has shown up in Sony smart TVs, the Mi Box 3 from Xiaomi, NVIDIA Shield TV, and others.”
The advertising is a “Sponsored Channel” part of the “Android TV Core Services” app that ships with all Android TV devices. A “Channel” in Android TV parlance means an entire row of thumbnails in the UI will be dedicated to “sponsored” content. Google provided XDA Developers with a statement saying that yes, this is on purpose, but for now it’s a “pilot program.”
Android TV is committed to optimizing and personalizing the entertainment experience at home. As we explore new opportunities to engage the user community, we’re running a pilot program to surface sponsored content on the Android TV home screen.
Sony has a tersely worded support page detailing the “Sponsored channel,” too. There’s no mention there of it being a pilot program. Sony’s page, titled “A sponsored channel has suddenly appeared on my TV Home menu,” says, “This change is included in the latest Android TV Launcher app (Home app) update. The purpose is to help you discover new apps and contents for your TV.”
Sony goes on to say, “This channel is managed by Google” and “the Sponsored channel cannot be customized.” Sony basically could replace the entire page with a “Deal with it” sunglasses gif, and it would send the same message.
Buying a product knowing it has ads in it is one thing, but users on Reddit and elsewhere are understandably angry about ads suddenly being patched into their devices – especially in cases when these devices are multi-thousand-dollar 4K Sony televisions. There is an option to disable the ads if you dig into the settings, but users are reporting that the ads aren’t staying disabled. For now, uninstalling updates for the “Android TV Core Services” app is the best way to remove the ads.
Remember, for now this is a “pilot program.” So please share your valuable feedback with Google in the comments.
Juozas Kaziukenas’ article “Amazon-Owned Brands Far From Successful” is based on a report he put together called “Amazon Private Label Brands“. The report is oddly disjointed, tossing statistics in and out, changing its metrics at random and finally arriving at a conclusion totally at variance with the content of the article. It’s impossible to see where the sales statistics come from, so they can’t be verified, and reviews – an unrelated metric – are used as a proxy for sales success wherever actual sales figures aren’t mentioned. Yet major news outlets, such as Bloomberg (Most Amazon Brands Are Duds, Not Disrupters, Study Finds), Business Insider (Most Amazon private labels aren’t flying off the shelves yet, but the company is taking huge steps to change that) and many more have apparently taken the conclusion of the article at face value, seemingly without reading the article itself, and are publishing this piece as some sort of evidence that Amazon’s monopoly position is not a problem.
In his analysis, he starts out saying that the top 10 most successful private label brands contribute 81% of total sales, at a value of $7.5 billion in 2018. He then arbitrarily removes 7 of these brands and puts total sales by private label brands at under $1 billion. For any retailer, this is a huge turnover. Oddly enough, the next figure presented is that total retail sales generated online by Amazon are $122.9 billion. A quick off-the-cuff guesstimate puts the top 10 Amazon private label brands at around 6% of total online retail. Considering Amazon has 23,142 own-brand products, you would assume the total Amazon slice of the pie would be quite a bit larger than 6%.
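To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch using only the two figures quoted above; the $7.5 billion and $122.9 billion numbers are taken from the report as cited, and everything else is just division:

```python
# Back-of-the-envelope check of the share discussed above, using only the
# figures cited from the report (2018, in billions of US dollars).
top10_private_label_sales = 7.5       # top 10 Amazon private label brands
amazon_online_retail_total = 122.9    # Amazon's total online retail sales

share = top10_private_label_sales / amazon_online_retail_total
print(f"Top 10 private label brands ≈ {share:.1%} of Amazon online retail")
# -> Top 10 private label brands ≈ 6.1% of Amazon online retail
```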
Interestingly, Marketplace Pulse has a statistics page where Amazon international marketplace sales are shown to be a staggering $15.55 billion in Q3 2018 alone, with North American sales pegged at $34.35 billion in the same quarter. Focussing on the top 10 brands seems, again, to be wilfully missing a huge amount of online retail revenue on marketplaces owned by Amazon.
Search is then stated to be the primary driver of purchases and some time is spent looking at click-through rates. How he got these figures is up in the air, but could it be that they were provided by Amazon? Is it possible that Amazon is, in fact, funding this analysis? While Mr Kaziukenas does at some point mention the related-products feature, and briefly demonstrates its importance for product visibility, search results for specific terms are the metric he goes for here.
The study then quickly and embarrassingly shows that in the lower end of the price spectrum, price is a driving factor. This will return in the study when it is shown that products like batteries are indeed stealing customers from other manufacturers.
Product reviews are used as a rating factor for product success in the study. Reviews are an unrelated metric, and the article notes that where batteries and cables are concerned, Amazon owns the market share even with a below-average rating. Unfortunately, turnover – or any financial metric – is no longer used to measure product success once the study has passed its opening paragraphs.
A lot of time is spent on a few randomly selected products which are neither cheaper nor better than the competition. He manages, quite unsurprisingly, to demonstrate that more expensive, lower-quality Amazon products don’t do quite as well as cheaper, better-quality non-Amazon alternatives. A 6-foot HDMI cable is used as an example to prove that cheaper Amazon products do better than the competition: “AmazonBasics 6 feet HDMI cable sells for $6.99 and is the number one best-seller HDMI cable on Amazon” (again, how he knows what the number one best-seller is, is a mystery to me).
Continuing on, the study shows that Amazon does copy products, and the contradictory statements start flying fast and hard. First the quote is given: “In July, a similar stand appeared at about half the price. The brand: AmazonBasics. Since then, sales of the Rain Design original have slipped.” This is followed by the statement: “Today Rain Design’s laptop stand sells for $39.99 and seems to be outselling Amazon’s $19.99 copy.” I assume that the “seems to be outselling” part of this statement is based entirely on review status and not on any actual sales data. Next the study claims that this product copying is “rare” and goes on to state: “There is no basis to assume that copying products is part of the Amazon strategy.” This doesn’t ring very true next to the two examples on display – and surely many more examples can easily be found. Mr Kaziukenas states: “The story of Rain Design’s laptop stand is scary but doesn’t happen often.” Again, I would like to see where the metrics being used here come from, and the definition of “often”. It’s stated as though he has actual data on this but chooses not to share it. I somehow doubt that Amazon would be happy to provide him with that data.
Now the study continues to say that having data on the competition is not useful, but specifies this as a vague “ability to utilize that data for brand building”, and then states that because Amazon isn’t the first choice in the upper price market, or established brand space, it isn’t utilising this data very well. He then goes on to state that where brand is not important (the cheap product space, e.g. batteries) they are the number one seller. Let us not forget that this “failed” brand building of products in the space beyond the top three (as arbitrarily chosen by the study at the beginning) is netting sales of around $6.5 billion!
Now comes a pretty bizarre part, where an argument is put forward that if you search by specifying a brand name before the generic product name, Amazon products are not given an advantage, despite being shown in the related items. Yet if you put in a generic product name, Amazon products come forward and fill the screen, unless someone has sponsored the search term – as demonstrated by a page full of cheaper Amazon HDMI cables. This is somehow used as an argument that there is no advantage in “organic search results”, an arbitrarily and very narrowly chosen term which bears no relation to the part of the article in which it is clearly shown, at every turn, that Amazon uses its advantage to push its products. Left entirely by the wayside is the fact that different people are shown different search results, depending on a huge multitude of factors. What Mr Kaziukenas sees as results will not be the same as what other shoppers on the platform see, yet he presents his search results as the one single truth.
The conclusion of the piece states that Amazon’s private brand business (i.e. brands not labelled with the word “Amazon”) doesn’t do very well. The generic goods business (i.e. goods where potential customers have no reason to look specifically for a brand name) is cast aside. Somehow the final thought is that Amazon therefore doesn’t want to be in the physical products business. The sheer scale of the sales numbers presented in the article, however, belies this statement. Amazon is making billions of dollars in the physical goods segment and is using its position to push out competitors – to say nothing of the magic arbitration system for goods and fraud on the marketplace, or the conflict of interest in being both a marketplace and a salesman in that marketplace: but that’s another story, covered by other articles.
8/4/19 EDIT:
If it feels like your Amazon search results have been overwhelmed with promotions for their private-label brands, like Amazon Basics, Mama Bear or Daily Ritual, that may be changing. As lawmakers pay more attention to the most powerful tech companies, Amazon has begun quietly removing some of the more obvious promotions, including banner ads, for its private-label products, reports CNBC, which spoke to Amazon sellers and consultants.
Amazon’s aggressive marketing of its own private brands, with ads that often appear in search results above listings for competing items from third-party sellers, has raised antitrust concerns.
While Amazon benefits from higher margins, cost savings from a more efficient supply chain and new data, third-party sellers often suffer. For example, they may have to cut prices to stay competitive, and even lower prices may not be enough to attract customers away from Amazon’s promotions for its own items, which show up in many search results.
Of course the US can look in, under the CLOUD Act, because Google is an American company. The files were moved without consent from the patients by Medical Research Data Management, a commercial company, because (they say) the hospitals have given permission. And the hospitals don’t need to ask for patient permission, because patients gave it when they accepted the electronic patient filing system.
Another concern is the pseudo-anonymisation of the data. For a company like Google, it won’t be particularly hard to match the data to real people.
According to a letter obtained by Variety, the chief of the DOJ’s Antitrust Division, Makan Delrahim, wrote to AMPAS CEO Dawn Hudson on March 21 to express concerns that new rules would be written “in a way that tends to suppress competition.”
“In the event that the Academy — an association that includes multiple competitors in its membership — establishes certain eligibility requirements for the Oscars that eliminate competition without procompetitive justification, such conduct may raise antitrust concerns,” Delrahim wrote.
The letter came in response to reports that Steven Spielberg, an Academy board member, was planning to push for rules changes to Oscars eligibility, restricting movies that debut on Netflix and other streaming services around the same time that they show in theaters. Netflix made a big splash at the Oscars this year, as the movie “Roma” won best director, best foreign language film and best cinematography.
[…]
Spielberg’s concerns over the eligibility of movies on streaming platforms have triggered intense debate in the industry. Netflix responded on Twitter early last month with the statement, “We love cinema. Here are some things we also love. Access for people who can’t always afford, or live in towns without, theaters. Letting everyone, everywhere enjoy releases at the same time. Giving filmmakers more ways to share art. These things are not mutually exclusive.”
Spielberg told ITV News last year that Netflix and other streaming platforms have boosted the quality of television, but “once you commit to a television format, you’re a TV movie. … If it’s a good show—deserve an Emmy, but not an Oscar.”
A group of American hackers who once worked for U.S. intelligence agencies helped the United Arab Emirates spy on a BBC host, the chairman of Al Jazeera and other prominent Arab media figures during a tense 2017 confrontation pitting the UAE and its allies against the Gulf state of Qatar.
The American operatives worked for Project Raven, a secret Emirati intelligence program that spied on dissidents, militants and political opponents of the UAE monarchy. A Reuters investigation in January revealed Project Raven’s existence and inner workings, including the fact that it surveilled a British activist and several unnamed U.S. journalists.
The Raven operatives — who included at least nine former employees of the U.S. National Security Agency and the U.S. military — found themselves thrust into the thick of a high-stakes dispute among America’s Gulf allies. The Americans’ role in the UAE-Qatar imbroglio highlights how former U.S. intelligence officials have become key players in the cyber wars of other nations, with little oversight from Washington.
[…]
Dana Shell Smith, the former U.S. ambassador to Qatar, said she found it alarming that American intelligence veterans were able to work for another government in targeting an American ally. She said Washington should better supervise U.S. government-trained hackers after they leave the intelligence community.
“Folks with these skill sets should not be able to knowingly or unknowingly undermine U.S. interests or contradict U.S. values,” Smith told Reuters.
Wait, so once you are trained for something by the US government, have you basically entered into indentured servitude? You may only work for whoever the US decides you may work for, ever after? Or… what, they assassinate you?
WASHINGTON — The Drug Enforcement Administration secretly collected data in bulk about Americans’ purchases of money-counting machines — and took steps to hide the effort from defendants and courts — before quietly shuttering the program in 2013 amid the uproar over the disclosures by the National Security Agency contractor Edward Snowden, an inspector general report found.
Seeking leads about who might be a drug trafficker, the D.E.A. started in 2008 to issue blanket administrative subpoenas to vendors to learn who was buying money counters. The subpoenas involved no court oversight and were not pegged to any particular investigation. The agency collected tens of thousands of records showing the names and addresses of people who bought the devices.
The public version of the report, which portrayed the program as legally questionable, blacked out the device whose purchase the D.E.A. had tracked. But in a slip-up, the report contained one uncensored reference in a section about how D.E.A. policy called for withholding from official case files the fact that agents first learned the names of suspects from its database of its money-counter purchases.
[…]
The report cited field offices’ complaints that the program had wasted time with a high volume of low-quality leads, resulting in agents scrutinizing people “without any connection to illicit activity.” But the D.E.A. eventually refined its analysis to produce fewer but higher-quality leads, and the D.E.A. said it had led to arrests and seizures of drugs, guns, cars and illicit cash.
The idea for the nationwide program originated in a D.E.A. operation in Chicago, when a subpoena for three months of purchase records from a local store led to two arrests and “significant seizures of drugs and related proceeds,” it said.
But Sarah St. Vincent, a Human Rights Watch researcher who flagged the slip-up on Twitter, argued that it was an abuse to suck Americans’ names into a database that would be analyzed to identify criminal suspects, based solely upon their purchase of a lawful product.
[…]
In the spring of 2013, the report said, the D.E.A. submitted its database to a joint operations hub where law enforcement agencies working together on organized crime and drug enforcement could mine it. But F.B.I. agents questioned whether the data had been lawfully acquired, and the bureau banned its officials from gaining access to it.
The F.B.I. agents “explained that running all of these names, which had been collected without foundation, through a massive government database and producing comprehensive intelligence products on any ‘hits,’ which included detailed information on family members and pictures, ‘didn’t sit right,’” the report said.
Academic and scientific research needs to be accessible to all. The world’s most pressing problems like clean water or food security deserve to have as many people as possible solving their complexities. Yet our current academic research system has no interest in harnessing our collective intelligence. Scientific progress is currently thwarted by one thing: paywalls.
Paywalls, which restrict access to content without a paid subscription, represent a common practice used by academic publishers to block access to scientific research for those who have not paid. This keeps £19.6bn flowing from higher education and science into for-profit publisher bank accounts. My recent documentary, Paywall: The Business of Scholarship, uncovered that the largest academic publisher, Elsevier, regularly has a profit margin of 35-40%, which is greater than Google’s. With financial capacity comes power, lobbyists, and the ability to manipulate markets for strategic advantages – things that underfunded universities and libraries in poorer countries do not have.
Furthermore, university librarians are regularly required to sign non-disclosure agreements on their contract-pricing specifics with the largest for-profit publishers. Each contract is tailored specifically to that university based upon a variety of factors: history, endowment, current enrolment. This thwarts any collective discussion around price structures, and gives publishers all the power.
This is why open access to research matters – and there have been several encouraging steps in the right direction. Plan S, which requires that scientific publications funded by public grants must be published in open access journals or platforms by 2020, is gaining momentum among academics across the globe. It’s been recently backed by Italy’s Compagnia di San Paolo, which receives €150m annually to spend on research, as well as the African Academy of Science and the National Science and Technology Council (NSTC) of Zambia. Plan S has also been endorsed by the Chinese government.
Equally, although the US has lagged behind Europe in taking a stand on encouraging open access to research, this is changing. The University of California system has just announced that it will be ending its longstanding subscription to Elsevier. The state of California also recently passed AB 2192, a law that requires anything funded by the state to be made open access within one year of publication. In January, the US President, Donald Trump, signed into law the Open, Public, Electronic and Necessary (OPEN) Government Data Act, which mandates that US federal agencies publish all non-sensitive government data under an open format. This could cause a ripple effect in other countries and organisations.
But there is a role for individual academics to play in promoting open access, too. All academics need to be familiar with their options and to stop signing over copyright unnecessarily. Authors should be aware they can make a copy of their draft manuscript accessible in some form in addition to the finalised manuscript submitted to publishers. There are helpful resources, such as Authors Alliance which helps researchers manage their rights, and Sherpa/RoMEO, which navigates permissions of individual publishers and author rights. In many cases, researchers can also make their historical catalogue of articles available to the public.
Without an academic collective voice demanding open access to their research, the movement will never completely take off. It’s a case of either giving broad society access to scientific advances or allowing these breakthroughs to stay locked away for financial gain. For the majority of academics, the choice should be easy.
Many other cars download and store data from users, particularly information from paired cellphones, such as contact information. The practice is widespread enough that the US Federal Trade Commission has issued advisories to drivers warning them about pairing devices to rental cars, and urging them to learn how to wipe their cars’ systems clean before returning a rental or selling a car they owned.
But the researchers’ findings highlight how Tesla is full of contradictions on privacy and cybersecurity. On one hand, Tesla holds car-generated data closely, and has fought customers in court to avoid giving up vehicle data. Owners must purchase $995 cables and download a software kit from Tesla to get limited information out of their cars’ “event data recorders”, should they need it for legal, insurance or other reasons.
At the same time, crashed Teslas that are sent to salvage can yield unencrypted and personally revealing data to anyone who takes possession of the car’s computer and knows how to extract it.
[…]
In general, cars have become rolling computers that slurp up personal data from users’ mobile devices to enable “infotainment” features or services. Additional data generated by the car enables and trains advanced driver-assistance systems. Major auto-makers that compete with Tesla’s Autopilot include GM’s Cadillac Super Cruise, Nissan Infiniti’s ProPilot Assist and Volvo’s Pilot Assist system.
But GreenTheOnly and Theo noted that in Teslas, dashboard cameras and selfie cameras can record while the car is parked, even in your garage, and there is no way for an owner to know when they may be doing so. The cameras enable desirable features like “sentry mode.” They also enable wipers to “see” raindrops and switch on automatically, for example.
GreenTheOnly explained, “Tesla is not super transparent about what and when they are recording, and storing on internal systems. You can opt out of all data collection. But then you lose [over-the-air software updates] and a bunch of other functionality. So, understandably, nobody does that, and I also begrudgingly accepted it.”
Theo and GreenTheOnly also said Model 3, Model S and Model X vehicles try to upload autopilot and other data to Tesla in the event of a crash. The cars have the capability to upload other data, but the researchers don’t know if and under what circumstances they attempt to do so.
[…]
The company is one of a handful of large corporations to openly court cybersecurity professionals to its networks, urging those who find flaws in Tesla systems to report them in an orderly process — one that gives the company time to fix the problem before it is disclosed. Tesla routinely pays out five-figure sums to individuals who find and successfully report these flaws.
[…]
However, according to two former Tesla service employees who requested anonymity, when owners try to analyze or modify their own vehicles’ systems, the company may flag them as hackers, alerting Tesla to their skills. Tesla then ensures that these flagged people are not among the first to get new software updates.
UK cops’ sharing of data with the Home Office will be probed by oversight bodies following a super-complaint from civil rights groups, it was confirmed today.
At the heart of the issue is the way that victims’ and witnesses’ data collected by the police are shared with central government immigration teams.
Liberty and Southall Black Sisters last year lodged a super-complaint against the “systemic and potentially unlawful” practices, which allowed criminals to “weaponise” their victims’ immigration status.
An investigation by the rights groups found that victims and witnesses were “frequently reported to immigration enforcement after reporting very serious crimes to the police”.
This, Liberty said, risked deterring people – even those who do not have uncertain immigration statuses – from reporting crime, especially as the victims or witnesses “can be coerced into not reporting” crimes.
[…]
“The only acceptable solution is the formal creation of a ‘firewall’ – a cast-iron promise that personal information collected about victims and witnesses by public services like the police will not be shared with the Home Office for immigration enforcement purposes.”
Liberty proposed this “firewall” idea in its December report into public sector data sharing, arguing that this was the only way to mitigate against the negative impacts of the government’s hostile-environment policies.
The group has repeatedly emphasised these impacts go beyond undocumented migrants, but also affect migrants with regular status “who live in a climate of uncertainty and fear” as well as frontline workers in affected professions.
This was exemplified in last year’s battle to scrap a deal that saw non-clinical patient records shared with the Home Office, as GPs voiced concerns that it would break doctor-patient confidentiality and could stop migrants from seeking medical treatment.
Speed limiting technology looks set to become mandatory for all vehicles sold in Europe from 2022, after new rules were provisionally agreed by the EU.
The Department for Transport said the system would also apply in the UK, despite Brexit.
Campaigners welcomed the move, saying it would save thousands of lives.
Road safety charity Brake called it a “landmark day”, but the AA said “a little speed” helped with overtaking or joining motorways.
Safety measures approved by the European Commission included intelligent speed assistance (ISA), advanced emergency braking and lane-keeping technology.
The EU says the plan could help avoid 140,000 serious injuries by 2038 and aims ultimately to cut road deaths to zero by 2050.
EU Commissioner Elzbieta Bienkowska said: “Every year, 25,000 people lose their lives on our roads. The vast majority of these accidents are caused by human error.
“With the new advanced safety features that will become mandatory, we can have the same kind of impact as when safety belts were first introduced.”
What is speed limiting technology and how does it work?
Under the ISA system, cars receive information via GPS and a digital map, telling the vehicle what the speed limit is.
This can be combined with a video camera capable of recognising road signs.
The system can be overridden temporarily. If a car is overtaking a lorry on a motorway and enters a lower speed-limit area, the driver can push down hard on the accelerator to complete the manoeuvre.
A full on/off switch for the system is also envisaged, but this would lapse every time the vehicle is restarted.
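As a rough illustration of how these pieces fit together, here is a minimal sketch of the decision logic described above. It is an assumption-laden toy model, not any manufacturer’s implementation: the class and function names, the “lowest reading wins” rule and the kickdown override are illustrative only.

```python
# Toy model of intelligent speed assistance (ISA) as described above: the car
# learns the limit from a digital map (via GPS) and optionally a sign-reading
# camera, eases off the throttle at the limit, and lets the driver override
# temporarily by pressing hard on the accelerator (kickdown). The full on/off
# switch defaults back to "on" at every restart.

class ISASystem:
    def __init__(self):
        self.enabled = True                  # full on/off switch...

    def restart(self):
        self.enabled = True                  # ...which lapses at every restart

    def effective_limit(self, map_limit_kmh, camera_limit_kmh=None):
        """Combine map and camera readings; assume the lower value wins."""
        readings = [v for v in (map_limit_kmh, camera_limit_kmh) if v is not None]
        return min(readings) if readings else None

    def throttle(self, requested, speed_kmh, limit_kmh, kickdown=False):
        """Cap throttle demand at the limit unless overridden by kickdown."""
        if not self.enabled or limit_kmh is None or kickdown:
            return requested
        return requested if speed_kmh < limit_kmh else 0.0


isa = ISASystem()
limit = isa.effective_limit(map_limit_kmh=80, camera_limit_kmh=60)                 # -> 60
print(isa.throttle(requested=0.5, speed_kmh=62, limit_kmh=limit))                  # 0.0 (eased off)
print(isa.throttle(requested=0.9, speed_kmh=62, limit_kmh=limit, kickdown=True))   # 0.9 (override)
```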
How soon will it become available?
It’s already coming into use. Ford, Mercedes-Benz, Peugeot-Citroen, Renault and Volvo already have models available with some of the ISA technology fitted.
However, there is concern over whether current technology is sufficiently advanced for the system to work effectively.
In particular, many cars already have a forward-facing camera, but there is a question mark over whether the sign-recognition technology is up to scratch.
Other approved safety features for European cars, vans, trucks and buses include technology which provides a warning of driver drowsiness and distraction, such as when using a smartphone while driving, and a data recorder in case of an accident.
[Video caption: Theo Leggett: “The car brought us to a controlled halt”]
What does it all mean in practice?
Theo Leggett, business correspondent
The idea that cars will be fitted with speed limiters – or to put it more accurately, “intelligent speed assistance” – is likely to upset a lot of drivers. Many of us are happy to break limits when it suits us and don’t like the idea of Big Brother stepping in.
However, the new system as it’s currently envisaged will not force drivers to slow down. It is there to encourage them to do so, and to make them aware of what the limit is, but it can be overridden. Much like the cruise control in many current cars will hold a particular speed, or prevent you exceeding it, until you stamp on the accelerator.
So it’ll still be a free-for-all for speeding motorists then? Not quite. Under the new rules, cars will also be fitted with compulsory data recorders, or “black boxes”.
So if you have an accident, the police and your insurance company will know whether you’ve been going too fast. If you’ve been keeping your foot down and routinely ignoring the car’s warnings, they may take a very dim view of your actions.
In fact, it’s this “spy on board” which may ultimately have a bigger impact on driver behaviour than any kind of speed limiter. It’s easy to get away with reckless driving when there’s only a handful of traffic cops around to stop you. Much harder when there’s a spy in the cab recording your every move.
On Tuesday, after years of negotiation and lobbying, and outcry and protests by activists online, members of the EU parliament voted to adopt the Directive on copyright in the Digital Single Market, [PDF] – a collection of rules that ostensibly aim “to ensure that the longstanding rights and obligations of copyright law also apply to the internet,” as the European Parliament puts it.
By “internet,” EU officials are talking mainly about Facebook and Google, though not exclusively. Everyone using the internet in Europe and every company doing business there will be affected in some way, though no one is quite sure how. And therein lies the problem.
“When this first came up, even the original language was so difficult to imagine being successfully implemented, that it was hard to believe anyone would even try to pass it into law,” said Danny O’Brien, international director of the Electronic Frontier Foundation (EFF) in a phone interview with The Register. “Now after it has gone through the mincing machine of the negotiation, it’s even more incoherent.”
What’s in a name?
Among the rules adopted, two have received the lion’s share of attention: Article 15 and Article 17, which used to be called Article 11 and Article 13 until someone had the clever idea to renumber them.
Article 15 (née 11) will require news aggregators like Google News that want to display content from news providers to obtain a license for anything more than “very short extracts.” Google, predictably, has opposed the plan.
Article 15 has been derided as a “link tax” that will damage small publishers and news-related startups.
That’s not true, the European Parliament insists, noting that hyperlinking has explicitly been exempted in the directive.
As for paying up, Google and other content aggregators may choose to shun publishers that demand payment or bestow a competitive advantage (e.g. ranking) on publishers offering favorable licensing terms. Given how publishers in Europe have regretted the loss of visitor traffic that follows from Google excommunication, they may prefer low- or no-cost licensing to obscurity.
Article 17 (née 13) allows websites to be sued for copyright violations by their users, something websites in the US can avoid thanks to Section 230 of the Communications Decency Act.
Article 17, it’s been said, will require internet companies to adopt upload filters to prevent copyright liability arising from users. Essentially, filters may be needed to stop folks submitting copyrighted work to social networks, forums, online platforms, and other sites. That’s a possibility, but not a certainty.
“The draft directive however does not specify or list what tools, human resources or infrastructure may be needed to prevent unremunerated material appearing on the site,” the European Commission explains.
“There is therefore no requirement for upload filters. However, if large platforms do not come up with any innovative solutions, they may end up opting for filters.”
Researchers in Canada, the U.S., and Australia teamed up for the study, published Wednesday in the BMJ. They tested 24 popular health-related apps used by patients and doctors in those three countries on an Android smartphone (the Google Pixel 1). Among the more popular apps were medical reference site Medscape, symptom-checker Ada, and the drug guide Drugs.com. Some of the apps reminded users when to take their prescriptions, while others provided information on drugs or symptoms of illness.
They then created four fake profiles that used each of the apps as intended. To establish a baseline of where network traffic related to user data was relayed during the use of the app, they used each app 14 times with the same profile information. Then, prior to the 15th use, they made a subtle change to this user information. On this final use, they looked for differences in network traffic, which would indicate that user data obtained by the app was being shared with third parties, and where exactly it was going to.
Overall, they found 79 percent of apps, including the three listed above, shared at least some user data outside of the app itself. While some of the unique entities that had access to the data used it to improve the app’s functions, like maintaining the cloud where data could be uploaded by users or handling error reports, others were likely using it to create tailored advertisements for other companies. When looking at these third parties, the researchers also found that many marketed their ability to bundle together user data and share it with fourth-party companies even further removed from the health industry, such as credit reporting agencies. And while this data is said to be made completely anonymous and de-identified, the authors found that certain companies were given enough data to easily piece together the identity of users if they wanted to.
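A minimal sketch of the comparison the researchers describe might look like the following. The data structures and names here are hypothetical, not the study’s actual tooling: the idea is simply to record which hosts an app talks to over the 14 baseline runs, then see which hosts receive the one profile field that was changed before the final run.

```python
# Hypothetical sketch of the traffic-diffing idea described above. Each "run"
# is a list of (destination_host, payload) pairs captured while using the app.

def contacted_hosts(run):
    return {host for host, _ in run}

def hosts_receiving_value(run, changed_value):
    """Hosts whose captured payloads contain the newly changed profile field."""
    return {host for host, payload in run if changed_value in payload}

def analyse(baseline_runs, final_run, changed_value):
    baseline = set().union(*(contacted_hosts(r) for r in baseline_runs))
    return {
        "received_changed_value": sorted(hosts_receiving_value(final_run, changed_value)),
        "new_destinations": sorted(contacted_hosts(final_run) - baseline),
    }

# Example with made-up traffic: the changed field ("atorvastatin") shows up in
# a request to an ad-tech host as well as the app developer's own cloud.
baseline_runs = [[("api.exampleapp.com", "drug=ibuprofen")]] * 14
final_run = [("api.exampleapp.com", "drug=atorvastatin"),
             ("tracker.adnetwork.example", "drug=atorvastatin&user=profile4")]
print(analyse(baseline_runs, final_run, "atorvastatin"))
```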
About 1,600 people have been secretly filmed in motel rooms in South Korea, with the footage live-streamed online for paying customers to watch, police said Wednesday.
Two men have been arrested and another pair investigated in connection with the scandal, which involved 42 rooms in 30 accommodations in 10 cities around the country. Police said there was no indication the businesses were complicit in the scheme.
In South Korea, small hotels of the type involved in this case are generally referred to as motels or inns.
Cameras were hidden inside digital TV boxes, wall sockets and hairdryer holders and the footage was streamed online, the Cyber Investigation Department at the National Police Agency said in a statement.
[Photo caption: Cameras found by police hidden inside a hotel wall outlet (left) and hairdryer stand (right).]
The site had more than 4,000 members, 97 of whom paid a $44.95 monthly fee to access extra features, such as the ability to replay certain live streams. Between November 2018 and this month, police said, the service brought in upward of $6,000.
“There was a similar case in the past where illegal cameras were (secretly installed) and were consistently and secretly watched, but this is the first time the police caught where videos were broadcast live on the internet,” police said.
Apple CEO Tim Cook has been more than clear that services like the iOS App Store are an essential part of the company’s future as consumers hang onto devices for longer and longer periods between upgrades. When Spotify filed an antitrust lawsuit against Apple this week, it fired a direct shot at the tech giant’s strategy. Now, Apple has issued its rebuttal to Spotify’s accusations.
Spotify has had its gripes with the App Store on and off for many years. Apple charges apps a fee for “digital goods and services that are purchased inside the app.” In the case of a subscription service like Spotify’s ad-free premium package, that fee is 30 percent for the first year and 15 percent for each additional year. Most apps that charge for digital services just deal with it and cough up the fee. Because iOS is a walled garden, it’s not possible to offer an alternative place to download an app with purchases that avoid Apple’s fees.
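To see concretely how that fee schedule bites a subscription business, here is a quick illustrative calculation. The $9.99 monthly price is a hypothetical figure chosen only for the arithmetic, not a claim about Spotify’s actual pricing:

```python
# Illustrative arithmetic only: a hypothetical $9.99/month subscription sold
# as an in-app purchase under the fee schedule described above.
price = 9.99
fee_year_one = 0.30     # 30% of in-app revenue in the subscriber's first year
fee_after = 0.15        # 15% in each subsequent year

net_year_one = price * (1 - fee_year_one)   # ≈ $6.99/month kept by the developer
net_after = price * (1 - fee_after)         # ≈ $8.49/month thereafter

# To keep ~$9.99/month net in year one, the in-app price would have to rise to:
grossed_up = price / (1 - fee_year_one)     # ≈ $14.27/month
print(round(net_year_one, 2), round(net_after, 2), round(grossed_up, 2))
```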
If a company is big enough to take the risk, however, it’s possible to get users to enter their payments through a web browser and then link their accounts to the app without handing over fees to Apple. That’s the approach that Spotify and Netflix have decided to take.
But Spotify is tired of giving users an inconvenient method for signing up and paying for its premium service. The company announced this week that it has filed an antitrust lawsuit with the European Commission, accusing Apple of anti-competitive behavior. In response to Spotify CEO Daniel Ek’s blog post explaining his positions, Apple published its rebuttal on Thursday.
The Apple post spends a lot of time explaining its philosophy regarding the app store and goes on at length about empowering developers and creating a platform from scratch—window dressing arguments, in other words. When it came to specifics, Apple straight up denied a few of Spotify’s claims.
For one thing, Spotify claims that because it doesn’t use Apple’s payment system it is routinely penalized with technical and experiential limitations. Ek explained that “over time, this has included locking Spotify and other competitors out of Apple services such as Siri, HomePod, and Apple Watch.” Apple said that it has actively encouraged Spotify to expand its reach on Siri and AirPlay 2 and were told that the company was “working on it.” As for the Apple Watch, it said the claim was “especially surprising” because the Spotify Watch app is currently the number one app in the Watch Music category. Apple spelled out its position in clear terms, saying, “Spotify is free to build apps for—and compete on—our products and platforms, and we hope they do.”
Apple went on to quibble with some other claims that Spotify made, but it failed to address a couple of points. Ek complained that “numerous other apps on the App Store, like Uber or Deliveroo,” don’t have to pay “the Apple tax.” On that point, Apple’s policy is that it only charges for “digital goods and services that are purchased inside the app,” not services that are offered outside in the real world. Whether or not it should apply its fees to everyone regardless of their source of revenue is a topic that’s up for debate.
But as VentureBeat noted, the most glaring omission from Apple’s blog post is that it doesn’t mention Apple Music at all. The crux of Spotify’s argument is that it is directly competing with Apple’s music streaming service but the 30 percent fee requires it to inflate its prices. Since Apple doesn’t have to pay any fees to itself, Spotify believes it has an unfair competitive advantage.
Apple did not immediately respond to our request for comment on this story, but a spokesperson for Spotify sent us the following statement:
Every monopolist will suggest they have done nothing wrong and will argue that they have the best interests of competitors and consumers at heart. In that way, Apple’s response to our complaint before the European Commission is not new and is entirely in line with our expectations.
We filed our complaint because Apple’s actions hurt competition and consumers, and are in clear violation of the law. This is evident in Apple’s belief that Spotify’s users on iOS are Apple customers and not Spotify customers, which goes to the very heart of the issue with Apple. We respect the process the European Commission must now undertake to conduct its review. Please visit www.TimetoPlayFair.com for the facts of our case.
The thing is, Apple is fighting this war on a few fronts. In the coming months, the Supreme Court is expected to rule on a similar case that argues that in the absence of an alternative app store on iOS, the 30 percent fee amounts to a hidden tax on consumers because developers have to bake the fee into their pricing. It appears that Apple wants to keep its arguments focused on the store as a whole rather than directly engaging with points about its own apps.
Aside from the fact that this is probably Spotify’s best angle on the case, Apple may want to avoid the Apple Music argument because it’s also facing calls from Senator Elizabeth Warren to “break up” the App Store. Though Apple has been a minor focus of Warren’s tech policy proposals, she believes that the company shouldn’t be allowed to put its own products in its exclusive store because it can hobble competitors through the kinds of practices that Spotify is describing. “Either they run the platform or they play in the store,” Warren told The Verge. “They don’t get to do both at the same time.”
In the past, I’ve argued that the benefits of Apple’s approach to the App Store outweigh the downsides. I still think that’s true and if you don’t like the Apple way, then you can go use the many other devices available on the market. But I have to admit that Spotify’s specific case has understandable merit. And it is possible that the European Commission’s hard-nosed attitude towards antitrust could work in Spotify’s favor. Though the cases are slightly different, regulators in Europe did rule that Google’s inclusion of the Chrome browser pre-installed on Android devices gave it an unfair advantage.
EU plans to ban the sale of user-moddable radio frequency devices – like phones and routers – have provoked widespread condemnation from across the political bloc.
The controversy centres on Article 3(3)(i) of the EU Radio Equipment Directive, which was passed into law back in 2014.
However, an EU working group is now about to define precisely which devices will be subject to the directive – and academics, researchers, individual “makers” and software companies are worried that their activities and business models will be outlawed.
Article 3(3)(i) states that RF gear sold in the EU must support “certain features in order to ensure that software can only be loaded into the radio equipment where the compliance of the combination of the radio equipment and software has been demonstrated”.
If the law is implemented in its most potentially harmful form, no third-party firmware could be installed onto something like a home router, for example.
Hauke Mehrtens of the Free Software Foundation Europe (FSFE) told The Register: “If the EU forces Wi-Fi router manufacturers to prevent their customers from installing their own software onto their devices this will cause great harm to the OpenWrt project, wireless community networks, innovative startups, computer network researchers and European citizens. This would increase the electronic waste, make it impossible for the user to fix security vulnerabilities by himself or the help of the community and block research which could improve the internet in the EU.”
One photojournalist said she was pulled into secondary inspections three times and asked questions about who she saw and photographed in Tijuana shelters. Another photojournalist said she spent 13 hours detained by Mexican authorities when she tried to cross the border into Mexico City. Eventually, she was denied entry into Mexico and sent back to the U.S.
These American photojournalists and attorneys said they suspected the U.S. government was monitoring them closely but until now, they couldn’t prove it.
Now, documents leaked to NBC 7 Investigates show their fears weren’t baseless. In fact, their own government had listed their names in a secret database of targets, where agents collected information on them. Some had alerts placed on their passports, keeping at least three photojournalists and an attorney from entering Mexico to work.
The documents were provided to NBC 7 by a Homeland Security source on the condition of anonymity, given the sensitive nature of what they were divulging.
The source said the documents or screenshots show a SharePoint application that was used by agents from Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), the U.S. Border Patrol, Homeland Security Investigations, and some agents from the San Diego sector of the Federal Bureau of Investigation (FBI).
The intelligence gathering efforts were done under the umbrella of “Operation Secure Line,” the operation designated to monitor the migrant caravan, according to the source.
The documents list people who officials think should be targeted for screening at the border.
The individuals listed include ten journalists, seven of whom are U.S. citizens, a U.S. attorney, and 47 people from the U.S. and other countries labeled as organizers, instigators, or with their roles marked “unknown.” The target list includes advocates from organizations like Border Angels and Pueblo Sin Fronteras.
NBC 7 Investigates is blurring the names and photos of individuals who haven’t given us permission to publish their information.
[…]
In addition to flagging the individuals for secondary screenings, the Homeland Security source told NBC 7 that the agents also created dossiers on each person listed.
“We are a criminal investigation agency, we’re not an intelligence agency,” the Homeland Security source told NBC 7 Investigates. “We can’t create dossiers on people and they’re creating dossiers. This is an abuse of the Border Search Authority.”
One dossier, shared with NBC 7, was on Nicole Ramos, the Refugee Director and attorney for Al Otro Lado, a law center for migrants and refugees in Tijuana, Mexico. The dossier included personal details on Ramos, including specific details about the car she drives, her mother’s name, and her work and travel history.
After NBC 7 shared the documents with Ramos, she said Al Otro Lado is seeking more information on why she and other attorneys at the law center have been targeted by border officials.
“The document appears to prove what we have assumed for some time, which is that we are on a law enforcement list designed to retaliate against human rights defenders who work with asylum seekers and who are critical of CBP practices that violate the rights of asylum seekers,” Ramos told NBC 7 by email.
In addition to the dossier on Ramos, a list of other dossier files created was shared with NBC 7. Two of the dossier files were labeled with the names of journalists but no further details were available. Those journalists were also listed as targets for secondary screenings.
Customs and Border Protection has the authority to pull anyone into secondary screenings, but the documents show the agency is increasingly targeting journalists, attorneys, and immigration advocates. Former counterterrorism officials say the agency should not be targeting individuals based on their profession.
This time, Facebook has been caught red-handed using people’s cellphone numbers, provided exclusively for two-factor authentication, for targeted advertising and search – after it previously insinuated it wouldn’t do that.
Folks handing over their mobile numbers to protect their accounts from takeovers and hijackings thought the contact detail would be used for just that: security. Instead, Facebook is using the numbers to link netizens to other people, and target them with online ads.
For example, if someone you know – let’s call her Sarah – has given her number to Facebook for two-factor authentication purposes, and you allow the Facebook app to access your smartphone’s contacts book, and it sees Sarah’s number in there, it will offer to connect you two up, even though Sarah thought her number was being used for security only, and not for search. This is not a particularly healthy scenario, for instance, if you and Sarah are no longer, or never were, friends in real life, and yet Facebook wants to wire you up anyway.
Following online outcry over the weekend, a Facebook spokesperson told us today: “We appreciate the feedback we’ve received about these settings, and will take it into account.”
Shazam, the song identification app Apple bought for $400M, recently released an update to its iOS app that got rid of all 3rd party SDKs the app was using except for one.
The SDKs that were removed include ad networks, analytics trackers, and even open-source utilities. Why, you ask? Because all of those SDKs leak usage data to 3rd parties one way or another, something Apple really really dislikes.
Here are all the SDKs that were uninstalled in the latest update:
AdMob
Bolts
DoubleClick
FB Ads
FB Analytics
FB Login
InMobi
IAS
Moat
MoPub
Right now, the app has only one 3rd party SDK installed: HockeyApp, Microsoft’s version of TestFlight. It’s unclear why it’s still there, but we don’t expect it to stick around for long.
Looking across Apple’s entire app portfolio, it’s very uncommon to see 3rd party SDKs at all. Exceptions exist; one is Apple’s Support app, which has the Adobe Analytics SDK installed.
Things Are Different on Android
Since Shazam is also available for Android, we expected to see the same behavior: a mass uninstall of 3rd party SDKs. At first glance that seems to be the case, but not exactly.
Here are all the SDKs that were uninstalled in the last update:
AdColony
AdMob
Amazon Ads
Ads
FB Analytics
Gimbal
Google IMA
MoPub
Here are all the SDKs that are still installed in Shazam for Android:
Bolts
FB Analytics
Butter Knife
Crashlytics
Fabric
Firebase
Google Maps
OKHttp
Otto
On Android, Apple seems to be ok with leaking usage data to both Facebook through the Facebook Login SDK and Google through Fabric and Google Maps, indicating Apple hasn’t built out its internal set of tools for Android.
It’s also worth noting that HockeyApp was removed from Shazam for Android more than a year ago.
Facebook receives highly personal information from apps that track your health and help you find a new home, testing by The Wall Street Journal found. Facebook can receive this data from certain apps even if the user does not have a Facebook account, according to the Journal.
Facebook has already been in hot water concerning issues of consent and user data.
Most recently, a TechCrunch report revealed in January that Facebook paid users as young as teenagers to install an app that would allow the company to collect all phone and web activity. Following the report, Apple revoked some developer privileges from Facebook, saying Facebook violated its terms by distributing the app through a program meant only for employees to test apps prior to release.
The new report said Facebook is able to receive data from a variety of apps. Of more than 70 popular apps tested by the Journal, at least 11 sent potentially sensitive information to Facebook.
The apps included the period-tracking app Flo Period & Ovulation Tracker, which reportedly shared with Facebook when users were having their periods or when they indicated they were trying to get pregnant. Real estate app Realtor reportedly sent Facebook the listing information viewed by users, and the top heart-rate app on Apple’s iOS, Instant Heart Rate: HR Monitor, sent users’ heart rates to the company, the Journal’s testing found.
The apps reportedly send the data using Facebook’s software-development kit, or SDK, which helps developers integrate certain features into their apps. Facebook’s SDK includes an analytics service that helps app developers understand their users’ trends. The Journal said developers who sent sensitive information to Facebook used “custom app events” to send data like ovulation times and homes that users had marked as favorites on some apps.
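To make the mechanism concrete, here is a minimal sketch of the pattern the Journal describes: an app logging a “custom event” to a bundled analytics SDK, which then forwards it, along with device identifiers, to the SDK vendor’s servers. The AnalyticsSDK class, endpoint, and event names below are invented for illustration; this is not Facebook’s actual SDK or its real API.

```python
# Hypothetical illustration of a "custom app event" call. Everything here
# (class name, endpoint, event names) is a stand-in, not Facebook's SDK.

class AnalyticsSDK:
    """Stand-in for a third-party analytics SDK bundled into an app."""

    def __init__(self, app_id: str, endpoint: str):
        self.app_id = app_id
        self.endpoint = endpoint  # events are forwarded to the SDK vendor's servers

    def log_event(self, name: str, parameters: dict) -> None:
        # A real SDK would batch this payload and POST it to the vendor,
        # typically alongside advertising/device identifiers.
        payload = {"app_id": self.app_id, "event": name, "params": parameters}
        print(f"would send to {self.endpoint}: {payload}")


# What a period-tracking app's custom event might look like under this pattern:
sdk = AnalyticsSDK(app_id="1234567890", endpoint="https://analytics.example/events")
sdk.log_event("cycle_day_logged", {"trying_to_conceive": True})
```

The point is that the event name and parameters are chosen entirely by the app developer, which is how details as specific as ovulation timing or favorited home listings can end up in an analytics stream.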
A Facebook spokesperson told CNBC, “Sharing information across apps on your iPhone or Android device is how mobile advertising works and is industry standard practice. The issue is how apps use information for online advertising. We require app developers to be clear with their users about the information they are sharing with us, and we prohibit app developers from sending us sensitive data. We also take steps to detect and remove data that should not be shared with us.”
Earlier this month, security researcher Victor Gevers found and disclosed an exposed database live-tracking the locations of about 2.6 million residents of Xinjiang, China, offering a window into what a digital surveillance state looks like in the 21st century.
Xinjiang is China’s largest province, and home to China’s Uighurs, a Turkic minority group. Here, the Chinese government has implemented a testbed police state where an estimated 1 million individuals from these minority groups have been arbitrarily detained. Among the detainees are academics, writers, engineers, and relatives of Uighurs in exile. Many Uighurs abroad worry for their missing family members, who they haven’t heard from for several months and, in some cases, over a year.
Although relatively little news gets out of Xinjiang to the rest of the world, we’ve known for over a year that China has been testing facial-recognition tracking and alert systems across Xinjiang and mandating the collection of biometric data—including DNA samples, voice samples, fingerprints, and iris scans—from all residents between the ages of 12 and 65. Reports from the province in 2016 indicated that Xinjiang residents can be questioned over the use of mobile and Internet tools; just having WhatsApp or Skype installed on your phone is classified as “subversive behavior.” Since 2017, the authorities have instructed all Xinjiang mobile phone users to install a spyware app in order to “prevent [them] from accessing terrorist information.”
The prevailing evidence of mass detention centers and newly-erected surveillance systems shows that China has been pouring billions of dollars into physical and digital means of pervasive surveillance in Xinjiang and other regions. But it’s often unclear to what extent these projects operate as real, functional high-tech surveillance, and how much they are primarily intended as a sort of “security theater”: a public display of oppression and control to intimidate and silence dissent.
Now, this security leak shows just how extensively China is tracking its Xinjiang residents: how parts of that system work, and what parts don’t. It demonstrates that the surveillance is real, even as it raises questions about the competence of its operators.
A Brief Window into China’s Digital Police State
Earlier this month, Gevers discovered an insecure MongoDB database filled with records tracking the location and personal information of 2.6 million people located in the Xinjiang Uyghur Autonomous Region. The records include individuals’ national ID number, ethnicity, nationality, phone number, date of birth, home address, employer, and photos.
Over a period of 24 hours, 6.7 million individual GPS coordinates were streamed to and collected by the database, linking individuals to various public camera streams and identification checkpoints associated with location tags such as “hotel,” “mosque,” and “police station.” The GPS coordinates were all located within Xinjiang.
This database is owned by the company SenseNets, a private AI company advertising facial recognition and crowd analysis technologies.
A couple of days later, Gevers reported a second open database tracking the movement of millions of cars and pedestrians. Violations like jaywalking, speeding, and running a red light are detected and trigger the camera to take a photo and ping a WeChat API, presumably to tie the event to an identity.
Database Exposed to Anyone with an Internet Connection for Half a Year
China may have a working surveillance program in Xinjiang, but it’s a shockingly insecure security state. Anyone with an Internet connection had access to this massive honeypot of information.
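To illustrate what “access” means here: a MongoDB instance left on the public Internet with no authentication can be read by anyone who knows its address, using nothing more than a standard client library. The sketch below uses pymongo against a placeholder documentation address (203.0.113.10), not the actual SenseNets server, and assumes the common misconfiguration of an unauthenticated, publicly bound instance.

```python
# Minimal sketch: reading an unauthenticated, publicly exposed MongoDB instance.
# The host below is a placeholder (RFC 5737 documentation address), not the real server.
from pymongo import MongoClient

client = MongoClient("mongodb://203.0.113.10:27017/", serverSelectionTimeoutMS=5000)

# With no authentication configured, listing databases requires no credentials at all.
for db_name in client.list_database_names():
    print(db_name)

# Nor does reading documents out of a collection (names here are hypothetical).
first_doc = client["some_database"]["some_collection"].find_one()
print(first_doc)
```

This is also why automated scanners and ransomware actors find such instances within days: the whole exchange is a handful of unauthenticated requests.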
Gevers also found evidence that these servers were previously accessed by other known global entities such as a Bitcoin ransomware actor, who had left behind entries in the database. To top it off, this server was also vulnerable to several known exploits.
In addition to this particular surveillance database, a Chinese cybersecurity firm revealed that at least 468 MongoDB servers had been exposed to the public Internet after Gevers and other security researchers started reporting them. Among these instances: databases containing detailed information about remote access consoles owned by China General Nuclear Power Group, and GPS coordinates of bike rentals.
A Model Surveillance State for China
China, like many other state actors, may simply be willing to tolerate sloppy engineering if its private contractors can reasonably claim to be delivering the goods. Last year, the government spent an extra $3 billion on security-related construction in Xinjiang, and the New York Times reported that China’s police planned to spend an additional $30 billion on surveillance in the future. Even poorly-executed surveillance is massively expensive, and Beijing is no doubt telling the people of Xinjiang that these investments are being made in the name of their own security. But the truth, revealed only through security failures and careful security research, tells a different story: China’s leaders seem to care little for the privacy, or the freedom, of millions of its citizens.