The Linkielist

Linking ideas with the world

Creepy Chinese AI shames CEO for jaywalking on public displays throughout city – but detected the CEO on an ad on a bus

Dong Mingzhu, chairwoman of Gree Electric Appliances, China’s biggest maker of air conditioners, found her face splashed on a huge screen erected along a street in the port city of Ningbo that displays images of people caught jaywalking by surveillance cameras.

The artificial intelligence-backed surveillance system, however, had erred: on Wednesday it captured Dong’s image from an advertisement on the side of a moving bus.

The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on the microblog Sina Weibo on Wednesday that they had deleted the snapshot. They also said the surveillance system would be completely upgraded to cut incidents of false recognition in future.

[…]

Since last year, many cities across China have cracked down on jaywalking by investing in facial recognition systems and advanced AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens.

First-tier cities like Beijing and Shanghai were among the first to employ those systems to help regulate traffic and identify drivers who violate road rules, while Shenzhen traffic police began displaying photos of jaywalkers on large screens at major intersections from April last year.

Source: Facial recognition snares China’s air con queen Dong Mingzhu for jaywalking, but it’s not what it seems | South China Morning Post

Be Warned: Customer Service Agents Can See What You’re Typing in Real Time on their website forms

Next time you’re chatting with a customer service agent online, be warned that the person on the other side of your conversation might see what you’re typing in real time. A reader sent us the following transcript from a conversation he had with a mattress company after the agent responded to a message he hadn’t sent yet.

Something similar recently happened to HmmDaily’s Tom Scocca. He got a detailed answer from an agent one second after he hit send.

Googling led Scocca to a live chat service that offers a feature it calls “real-time typing view” to allow agents to have their “answers prepared before the customer submits his questions.” Another live chat service, which lists McDonald’s, Ikea, and PayPal as its customers, calls the same feature “message sneak peek,” saying it will allow you to “see what the visitor is typing in before they send it over.” Salesforce Live Agent also offers “sneak peek.”

On the upside, you get fast answers. On the downside, your thought process is being unknowingly observed. For the creators, this is technological magic, a deception that will result, they hope, in amazement and satisfaction. But once revealed by an agent who responds too quickly or one who responds before the question is asked, the trick falls apart, and what is left behind feels distinctly creepy, like a rabbit pulled from a hat with a broken neck. “Why give [customers] a fake ‘Send message’ button while secretly transmitting their messages all along?” asks Scocca.

This particular magic trick happens thanks to JavaScript operating in your browser and detecting what’s happening on a particular site in real time. It’s also how companies capture information you’ve entered into web forms before you’ve hit submit. Companies could lessen the creepiness by telling people their typing is seen in real time, or could eliminate the send button altogether (though that would undoubtedly confuse people, as if elevators’ useless “close door” buttons or the placebo buttons at crosswalks disappeared overnight).
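The mechanics are simple enough to model in a few lines. A toy sketch of the idea (the class names and behavior here are illustrative, not any vendor’s actual API; real widgets do this with a JavaScript input listener streaming keystrokes to the agent over a websocket):

```python
# A toy model of a "real-time typing view" chat widget.  Every
# keystroke is transmitted to the agent's console immediately,
# which makes the visitor-facing "Send" button largely cosmetic.

class AgentConsole:
    """What the support agent sees."""
    def __init__(self):
        self.draft = ""       # visitor's unsent text, updated live
        self.messages = []    # messages the visitor actually sent

class ChatWidget:
    """What the visitor interacts with."""
    def __init__(self, console: AgentConsole):
        self.console = console
        self.buffer = ""

    def keystroke(self, ch: str) -> None:
        # Fired on every input event -- *before* the visitor hits Send.
        self.buffer += ch
        self.console.draft = self.buffer   # streamed to the agent live

    def send(self) -> None:
        # The agent already has the text; "Send" merely finalizes it.
        self.console.messages.append(self.buffer)
        self.buffer = ""
        self.console.draft = ""

console = AgentConsole()
widget = ChatWidget(console)
for ch in "Do you price match?":
    widget.keystroke(ch)

# The agent can read the question before it is ever "sent".
assert console.draft == "Do you price match?"
assert console.messages == []

widget.send()
assert console.messages == ["Do you price match?"]
```

The trick Scocca describes falls directly out of this design: the agent’s answer can be ready the instant, or even before, the visitor clicks Send.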

Lest you think unexpected monitoring is limited to your digital interactions, know that you should be paranoid during telephone chats too. As the New York Times reported over a decade ago, during those calls where you are reassured of “being recorded for quality assurance purposes,” your conversation while on hold is recorded. So even if there is music playing, monitors may later listen to you fight with your spouse, sing a song, or swear about the agent you’re talking to.

Source: Be Warned: Customer Service Agents Can See What You’re Typing in Real Time

US told to quit sharing data with human rights-violating surveillance regime. Which one, you ask? That’d be the UK

UK authorities should not be granted access to data held by American companies because British laws don’t meet human rights obligations, nine nonprofits have said.

In a letter to the US Department of Justice, organisations including Human Rights Watch and the Electronic Frontier Foundation set out their concerns about the UK’s surveillance and data retention regimes.

They argue that the nation doesn’t adhere to human rights obligations and commitments, and therefore it should not be allowed to request data from US companies under the CLOUD Act, which Congress slipped into the Omnibus Spending Bill earlier this year.

The law allows the US government to sign formal, bilateral agreements with other countries setting standards for cross-border investigative requests for digital evidence related to serious crime and terrorism.

It requires that these countries “adhere to applicable international human rights obligations and commitments or demonstrate respect for international universal human rights”. The civil rights groups say the UK fails to make the grade.

As such, they urged the US administration not to sign an executive order allowing the UK to request access to data, communications content and associated metadata, noting that the CLOUD Act “implicitly acknowledges” some of the info gathered might relate to US folk.

Critics are concerned this could then be shared with US law enforcement, thus violating the Fourth Amendment, which requires a warrant to be served for the collection of such data.

Setting out the areas in which the UK falls short, the letter pointed to pending laws on counter-terrorism, saying that, as drafted, they would “excessively restrict freedom of expression by criminalizing clicking on certain types of online content”.

Source: US told to quit sharing data with human rights-violating surveillance regime. Which one, you ask? That’d be the UK • The Register

When the Internet Archive Forgets

On the internet, there are certain institutions we have come to rely on daily to keep truth from becoming nebulous or elastic. Not necessarily in the way that something stupid like Verrit aspired to, but at least in confirming that you aren’t losing your mind, that an old post or article you remember reading did, in fact, actually exist. It can be as fleeting as using Google Cache to grab a quickly deleted tweet, but it can also be as involved as doing a deep dive of a now-dead site’s archive via the Wayback Machine. But what happens when an archive becomes less reliable, and arguably has legitimate reasons to bow to pressure and remove controversial archived material?

A few weeks ago, while recording my podcast, the topic turned to the old blog written by The Ultimate Warrior, the late bodybuilder turned chiropractic student turned pro wrestler turned ranting conservative political speaker under his legal name of, yes, “Warrior.” As described by Deadspin’s Barry Petchesky in the aftermath of Warrior’s 2014 passing, he was “an insane dick,” spouting off in blogs and campus speeches about people with disabilities, gay people, New Orleans residents, and many others. But when I went looking for a specific blog post, I saw that the blogs were not just removed, the site itself was no longer in the Internet Archive, replaced by the error message: “This URL has been excluded from the Wayback Machine.”

Apparently, Warrior’s site had been de-archived for months, not long after Rob Rousseau pored over it for a Vice Sports article on the hypocrisy of WWE using Warrior’s image for their Breast Cancer Awareness Month campaign. The campaign was all about getting women to “Unleash Your Warrior,” complete with an Ultimate Warrior motif, but since Warrior’s blogs included wishing death on a cancer survivor, this wasn’t a good look. Rousseau was struck by how the archive was removed “almost immediately after my piece went up, like within that week,” he told Gizmodo.

Rousseau suspected that WWE was somehow behind it, but a WWE spokesman told Gizmodo that they were not involved. Steve Wilton, the business manager for Ultimate Creations also denied involvement. A spokesman for the Internet Archive, though, told Gizmodo that the archive was removed because of a DMCA takedown request from the company’s business manager (Wilton’s job for years) on October 29, 2017, two days after the Vice article was published. (He has not replied to a follow-up email about the takedown request.)

Over the last few years, there has been a change in how the Wayback Machine is viewed, one inspired by the general political mood. What had long been a useful tool when you came across broken links online is now, more than ever before, seen as an arbiter of the truth and a bulwark against erasing history.

Archive sites trusted to show the digital trail and origin of content are not just must-use tools for journalists; they’re effective for just about anyone trying to track down vanishing web pages. With that in mind, the fact that the Internet Archive doesn’t really fight takedown requests becomes a problem. And takedown requests aren’t the only recourse: when a site admin elects to block the Wayback crawler using a robots.txt file, the crawling doesn’t just stop. Instead, the Wayback Machine’s entire history of the site is removed from public view.
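That retroactive behavior is easy to model. A minimal sketch (the snapshot store, URLs, and crawler name are hypothetical; this is not the Archive’s actual code) of how a single disallow rule in today’s robots.txt can hide every previously archived page:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical snapshot store: url -> list of archived timestamps
SNAPSHOTS = {
    "http://example.com/blog/post-1": ["2014-03-01", "2016-07-12"],
    "http://example.com/about": ["2015-05-05"],
}

def visible_snapshots(robots_txt: str, crawler: str = "ia_archiver"):
    """Return only the snapshots a robots.txt-respecting archive would
    still show.  The point: when the *current* robots.txt disallows the
    crawler, the site's entire archived history disappears from view --
    the exclusion is retroactive, not just a stop on future crawls."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {
        url: stamps
        for url, stamps in SNAPSHOTS.items()
        if rp.can_fetch(crawler, url)
    }

# A site owner adds a blanket disallow today...
blocking = "User-agent: ia_archiver\nDisallow: /"
# ...and every previously archived page vanishes from public view.
assert visible_snapshots(blocking) == {}

# With no disallow rule, the full history remains visible.
assert visible_snapshots("User-agent: *\nAllow: /") == SNAPSHOTS
```

Note that nothing is deleted in this model; the snapshots still exist but are simply withheld, which matches how the Wayback Machine has described its robots.txt handling.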

In other words, if you deal in a certain bottom-dwelling brand of controversial content and want to avoid accountability, there are at least two different, standardized ways of erasing it from the most reliable third-party web archive on the public internet.

For the Internet Archive, the robots.txt strategy, like its quick compliance with takedown notices challenging its seemingly fair-use archival copies of old websites, in practice does little more than mitigate risk while going against the spirit of the protocol. And if someone were to sue over non-compliance with a DMCA takedown request, even with a ready-made, valid defense in the Archive’s pocket, copyright litigation is still incredibly expensive. It doesn’t matter that the use is not really a violation by any metric. If a rightsholder makes the effort, you still have to defend the lawsuit.

“The fair use defense in this context has never been litigated,” noted Annemarie Bridy, a law professor at the University of Idaho and an Affiliate Scholar at the Center for Internet and Society at Stanford Law School. “Internet Archive is a non-profit, so the exposure to statutory damages that they face is huge, and the risk that they run is pretty great … given the scope of what they do; that they’re basically archiving everything that is on the public web, their exposure is phenomenal. So you can understand why their impulse might be to act cautiously even if that creates serious tension with their core mission, which is to create an accurate historical archive of everything that has been there and to prevent people from wiping out evidence of their history.”

While the Internet Archive did not respond to specific questions about its robots.txt policy, its proactive response to takedown requests, or if any potential fair use defenses have been tested by them in court, a spokesperson did send this statement along:

Several months after the Wayback Machine was launched in late 2001, we participated with a group of outside archivists, librarians, and attorneys in the drafting of a set of recommendations for managing removal requests (the Oakland Archive Policy) that the Internet Archive more or less adopted as guidelines over the first decade or so of the Wayback Machine.

Earlier this year, we convened with a similar group to review those guidelines and explore the potential value of an updated version. We are still pondering many issues and hope that before too long we might be able to present some updated information on our site to better help the public understand how we approach take down requests. You can find some of our thoughts about robots.txt at http://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/.

At the end of the day, we strive to strike a balance between the concerns that site owners and rights holders sometimes bring to us with the broader public interest in free access for everyone to a history of the Internet that is as comprehensive as possible.

All of that said, the Internet Archive has always held itself out to be a library; in theory, shouldn’t that matter?

“Under current copyright law, although there are special provisions that give certain rights to libraries, there is no definition of a library,” explained Brandon Butler, the Director of Information Policy for the University of Virginia Library. “And that’s a thing that rights holders have always fretted over, and they’ve always fretted over entities like the Internet Archive, which aren’t 200-year-old public libraries, or university-affiliated libraries. They often raise the specter that there will be faux libraries, that they’d call themselves libraries but it’s really just a haven for piracy. That specter of the sort of sham library really hasn’t arisen.” The lone exception that Butler could think of was when American Buddha, a non-profit, online library of Buddhist texts, found itself sued by Penguin over a few items that Penguin asserted copyright over. “The court didn’t really care that this place called itself a library; it didn’t really shield them from any infringement allegations.” That said, as Butler notes, while being a library wouldn’t necessarily protect the Internet Archive as much as it might hope, “the right to make copies for preservation,” as he puts it, is definitely a point in their favor.

That said, “libraries typically don’t get sued; it’s bad PR,” Butler says. So it’s not like there’s a ton of modern legal precedent about libraries in the digital age, barring some outliers like the various Google Books cases.

As Bridy notes, in the United States, copyright is “a commercial right.” It’s not about reputational harm, it’s about protecting the value of a work and, more specifically, the ability to continuously make money off of it. “The reason we give it is we want artists and creative people to have an incentive to publish and market their work,” she said. “Using copyright as a way of trying to control privacy or reputation … it can be used that way, but you might argue that’s copyright misuse, you might argue it falls outside of the ambit of why we have copyright.”

We take a lot of things for granted, especially as we rely on technology more and more. “The internet is forever” may be a common refrain in the media, and the underlying wisdom about being careful may be sound, but it is also not something that should be taken literally. People delete posts. Websites and entire platforms disappear for business and other reasons. Rich, famous, and powerful bad actors have no qualms about intimidating small non-profit organizations. It’s nice to have safeguards, but there are limits to permanence on the internet, and where there are limits, there are loopholes.

Source: When the Internet Archive Forgets

Another thing seriously broken with copyright

In China, your car could be talking to the government

When Shan Junhua bought his white Tesla Model X, he knew it was a fast, beautiful car. What he didn’t know is that Tesla constantly sends information about the precise location of his car to the Chinese government.

Tesla is not alone. China has called upon all electric vehicle manufacturers in China to make the same kind of reports — potentially adding to the rich kit of surveillance tools available to the Chinese government as President Xi Jinping steps up the use of technology to track Chinese citizens.

“I didn’t know this,” said Shan. “Tesla could have it, but why do they transmit it to the government? Because this is about privacy.”

More than 200 manufacturers, including Tesla, Volkswagen, BMW, Daimler, Ford, General Motors, Nissan, Mitsubishi and U.S.-listed electric vehicle start-up NIO, transmit position information and dozens of other data points to government-backed monitoring centers, The Associated Press has found. Generally, it happens without car owners’ knowledge.

The automakers say they are merely complying with local laws, which apply only to alternative energy vehicles. Chinese officials say the data is used for analytics to improve public safety, facilitate industrial development and infrastructure planning, and to prevent fraud in subsidy programs.

But other countries that are major markets for electric vehicles — the United States, Japan, across Europe — do not collect this kind of real-time data.

And critics say the information collected in China is beyond what is needed to meet the country’s stated goals. It could be used not only to undermine foreign carmakers’ competitive position, but also for surveillance — particularly in China, where there are few protections on personal privacy. Under the leadership of Xi Jinping, China has unleashed a war on dissent, marshalling big data and artificial intelligence to create a more perfect kind of policing, capable of predicting and eliminating perceived threats to the stability of the ruling Communist Party.

There is also concern about the precedent these rules set for sharing data from next-generation connected cars, which may soon transmit even more personal information.

Source: In China, your car could be talking to the government

Companies ‘can sack workers for refusing to use fingerprint scanners’ in Australia

Businesses using fingerprint scanners to monitor their workforce can legally sack employees who refuse to hand over biometric information on privacy grounds, the Fair Work Commission has ruled.

The ruling, which will be appealed, was made in the case of Jeremy Lee, a Queensland sawmill worker who refused to comply with a new fingerprint scanning policy introduced at his work in Imbil, north of the Sunshine Coast, late last year.

Fingerprint scanning was used to monitor the clock-on and clock-off times of about 150 sawmill workers at two sites and was preferred to swipe cards because it prevented workers from fraudulently signing in on behalf of their colleagues to mask absences.

The company, Superior Woods, had no privacy policy covering workers and failed to comply with a requirement to properly notify individuals about how and why their data was being collected and used. The biometric data was stored on servers located off-site, in space leased from a third party.

Lee argued the business had never sought its workers’ consent to use fingerprint scanning, and feared his biometric data would be accessed by unknown groups and individuals.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private,” Lee wrote to his employer last November.

“Information technology companies gather as much information/data on people as they can.

“Whether they admit to it or not. (See Edward Snowden) Such information is used as currency between corporations.”

Lee was neither antagonistic nor belligerent in his refusals, according to evidence before the commission. He simply declined to have his fingerprints scanned and continued using a physical sign-in booklet to record his attendance.

He had not missed a shift in more than three years.

The employer warned him about his stance repeatedly, and claimed the fingerprint scanner did not actually record a fingerprint, but rather “a set of data measurements which is processed via an algorithm”. The employer told Lee there was no way the data could be “converted or used as a finger print”, and that it would only be used to link his payroll number to his clock-on and clock-off times. It said the fingerprint scanners were also needed for workplace safety, to accurately identify which workers were on site in the event of an accident.

Lee was given a final warning in January, and responded that he valued his job a “great deal” and wanted to find an alternative way to record his attendance.

“I would love to continue to work for Superior Wood as it is a good, reliable place to work,” he wrote to his employer. “However, I do not consent to my biometric data being taken. The reason for writing this letter is to impress upon you that I am in earnest and hope there is a way we can negotiate a satisfactory outcome.”

Lee was sacked in February, and lodged an unfair dismissal claim in the Fair Work Commission.

Source: Companies ‘can sack workers for refusing to use fingerprint scanners’ | World news | The Guardian

You only have one set of fingerprints – and that’s the problem with biometrics: they can’t be changed, so you really, really don’t want them stolen from you.

Google wants to spy on you and then report on you

suggesting, automatically implementing, or both suggesting and automatically implementing, one or more household policies to be implemented within a household environment. The household policies include one or more input criteria that is derivable from at least one smart device within the household environment, the one or more input criteria relating to a characteristic of the household environment, a characteristic of one or more occupants of the household, or both. The household policies also include one or more outputs to be provided based upon the one or more input criteria.

https://patents.justia.com/patent/10114351

Source: Patent Images

E.g. page 16, figure 25 – monitor TV watching patterns and report on you

page 20, figure 33 – detect time brushing teeth and report on you

Do you trust Google to be your parent?!

Google patents a way for your smart devices to spy on you, serve you ads, even if your privacy settings says no

In some embodiments, the private network may include at least one first device that captures information about its surrounding environment, such as data about the people and/or objects in the environment. The first device may receive a set of potential content sent from a server external to the private network. The first device may select at least one piece of content to present from the set of potential content based in part on the people/object data and/or a score assigned by the server to each piece of content. The private network may also include at least one second device that receives the captured people/object data sent from the first device. The second device may also receive a set of potential content sent from the server external to the private network. The second device may select at least one piece of content to present from the set of potential content based in part on the people/object data sent from the first device and/or a score assigned by the server to each piece of content. Using the private network to communicate the people/object data between devices may preserve the privacy of the user since the data is not sent to the external server. Further, using the obtained people/object data to select content enables more personalized content to be chosen.

[…]

 

  • Further, although not shown in this particular way, in some embodiments, the client device 134 may collect people/object data 136 using one or more sensors, as discussed above. Also, as previously discussed, the raw people/object data 136 may be processed by the sensing device 138, the client device 134, and/or a processing device 140 depending on the implementation. The people/object data 136 may include the data described above regarding FIG. 7 that may aid in recognizing objects, people, and/or patterns, as well as determining user preferences, mood, and so forth.
  • [0144]
    After the client device 134 is in possession of the people/object data 136, the client device 134 may use the classifier 144 to score each piece of content 132. In some embodiments, the classifier 144 may combine at least the people/object data 136, the scores provided by the server 67 for the content 132, or both, to determine a final score for each piece of content 132 (process block 216), which will be discussed in more detail below.
  • [0145]
    The client device 134 may select at least one piece of content 132 to display based on the scores (process block 218). That is, the client device 134 may select the content 132 with the highest score as determined by the classifier 144 to display. However, in some embodiments, where none of the content 132 generate a score above a threshold amount, no content 132 may be selected. In those embodiments, the client device 134 may not present any content 132. However, when at least one item of content 132 scores above the threshold amount and is selected, then the client device 134 may communicate the selected content 132 to a user of the client device 134 (process block 220) and track user interaction with the content 132 (process block 222). It should be noted that when more than one item of content 132 score above the threshold amount, then the item of content 132 with the highest score may be selected. The client device 134 may use the tracked user interaction and conversions to continuously train the classifier 144 to ensure that the classifier 144 stays up to date with the latest user preferences.
  • [0146]
    It should be noted that, in some embodiments, the processing device 140 may receive the content 132 from the server 67 instead of, or in addition to, the client device 134. In embodiments where the processing device 140 receives the content 132, the processing device 140 may perform the classification of the content 132 using a classifier 144 similar to the client device 134 and the processing device 140 may select the content 132 with the highest score as determined by the classifier 144. Once selected, the processing device 140 may send the selected content 132 to the client device 134, which may communicate the selected content 132 to a user.
  • […]
  • The process 230 may include training one or more models of the classifier 144 with people/object data 136, locale 146, demographics 148, search history 150, scores from the server 67, labels 145, and so forth. As previously discussed, the classifier 144 may include a support vector machine (SVM) that uses supervised learning models to classify the content 132 into one of two groups (e.g., binary classification) based on recognized patterns using the people/object data 136, locale 146, demographics 148, search history 150, scores from the server 67, and the labels 145 for the two groups of “show” or “don’t show.”
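Stripped of patent language, the selection logic in process blocks 216–220 is a score-and-threshold loop. A toy sketch (the threshold value and the simple averaging rule are illustrative assumptions; the patent does not specify how “classifier 144” combines the server’s scores with the on-device data):

```python
# Score each candidate piece of content, pick the highest scorer
# above a threshold, or show nothing at all -- the behavior the
# patent describes for its client-side classifier.

THRESHOLD = 0.5  # assumed value, for illustration only

def final_score(server_score: float, local_score: float) -> float:
    """Combine the server's score with the on-device classifier's
    score (here: a plain average, as a stand-in)."""
    return (server_score + local_score) / 2

def select_content(candidates):
    """candidates: list of (content_id, server_score, local_score)."""
    scored = [(cid, final_score(s, l)) for cid, s, l in candidates]
    best = max(scored, key=lambda pair: pair[1], default=None)
    if best is None or best[1] <= THRESHOLD:
        return None          # nothing clears the bar: show no content
    return best[0]

# Two ads clear the threshold; the highest combined score wins.
assert select_content([("ad-a", 0.9, 0.7), ("ad-b", 0.6, 0.8)]) == "ad-a"
# Nothing scores high enough, so nothing is shown.
assert select_content([("ad-a", 0.2, 0.3)]) is None
```

The patent then closes the loop by feeding tracked user interactions back into training, so the classifier keeps adapting to what you respond to.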

 

Source: US20160260135A1 – Privacy-aware personalized content for the smart home – Google Patents

 

They have thought up around 140 ways that this can be used…

Dutch Gov sees Office 365 spying on you, sending your texts to US servers without recourse or knowledge

The Dutch government’s report shows that the telemetry function of all Office 365 and Office ProPlus applications forwards, among other things, email subject lines and words/sentences written using the spelling checker or translation function to systems in the United States.

This goes so far that, if a user presses the backspace key several times in a row, the telemetry function collects and forwards both the sentence before the correction and the one after it. Users are not informed of this and have no way to stop this data collection or to inspect the collected data.

The Dutch national government carried out this investigation in cooperation with Privacy Company. “Microsoft may not store this temporary, functional data unless retention is strictly necessary, for example for security purposes,” writes Sjoera Nas of Privacy Company in a blog post.

Source: Je wordt bespied door Office 365-applicaties – Webwereld

Facebook files patent to find out more about you by looking at the background items in your pictures and pictures you are tagged in

An online system predicts household features of a user, e.g., household size and demographic composition, based on image data of the user, e.g., profile photos, photos posted by the user and photos posted by other users socially connected with the user, and textual data in the user’s profile that suggests relationships among individuals shown in the image data of the user. The online system applies one or more models trained using deep learning techniques to generate the predictions. For example, a trained image analysis model identifies each individual depicted in the photos of the user; a trained text analysis model derives household member relationship information from the user’s profile data and tags associated with the photos. The online system uses the predictions to build more information about the user and his/her household in the online system, and provide improved and targeted content delivery to the user and the user’s household.

Source: United States Patent Application: 0180332140
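The pipeline the abstract describes can be sketched in miniature. A toy version (the keyword matchers below stand in for the patent’s trained deep-learning models; all names and data are made up):

```python
# Toy sketch of the patent's pipeline: an image model identifies
# individuals across a user's photos, a text model mines profile
# text for relationship words, and the two are combined into a
# household-composition prediction.

RELATION_WORDS = {"wife", "husband", "son", "daughter", "mom", "dad"}

def identify_people(photo_tags):
    """Stand-in for the trained image-analysis model: collect the
    distinct individuals tagged across the user's photos."""
    return {name for tags in photo_tags for name in tags}

def derive_relationships(profile_texts):
    """Stand-in for the trained text-analysis model: pull
    relationship terms out of profile text and photo captions."""
    words = " ".join(profile_texts).lower().split()
    return {w for w in words if w in RELATION_WORDS}

def predict_household(photo_tags, profile_texts):
    people = identify_people(photo_tags)
    relations = derive_relationships(profile_texts)
    return {
        "predicted_size": len(people),
        "relations_seen": sorted(relations),
    }

prediction = predict_household(
    photo_tags=[{"alice", "bob"}, {"alice", "carol"}],
    profile_texts=["My wife Carol and our son Bob at the lake"],
)
assert prediction["predicted_size"] == 3
assert prediction["relations_seen"] == ["son", "wife"]
```

The unsettling part is the input: photos posted by *other* users who tag you also feed the prediction, so opting out of posting yourself doesn’t opt you out of being modeled.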

Microsoft slips ads into Windows 10 Mail client – then U-turns so hard, it warps fabric of reality – Windows is an OS, not a service!

Microsoft was, and maybe still is, considering injecting targeted adverts into the Windows 10 Mail app.

The ads would appear at the top of inboxes of folks using the client without a paid-for Office 365 subscription, and the advertising would be tailored to their interests. Revenues from the banners were hoped to help keep Microsoft afloat, which banked just $16bn in profit in its latest financial year.

According to Aggiornamenti Lumia on Friday, folks using Windows Insider fast-track builds of Mail and Calendar, specifically version 11605.11029.20059.0, may have seen the ads in among their messages, depending on their location. Users in Brazil, Canada, Australia, and India were chosen as guinea pigs for this experiment.

A now-deleted FAQ on the Office.com website about the “feature” explained the advertising space would be sold off to help Microsoft “provide, support, and improve some of our products,” just like Gmail and Yahoo! Mail display ads.

Also, the advertising is targeted, by monitoring what you get up to with apps and web browsing, and using demographic information you disclose:

Windows generates a unique advertising ID for each user on a device. When the advertising ID is enabled, both Microsoft apps and third-party apps can access and use the advertising ID in much the same way that websites can access and use a unique identifier stored in a cookie. Mail uses this ID to provide more relevant advertising to you.

You have full control of Windows and Mail having access to this information and can turn off interest-based advertising at any time. If you turn off interest-based advertising, you will still see ads but they will no longer be as relevant to your interests.

Microsoft does not use your personal information, like the content of your email, calendar, or contacts, to target you for ads. We do not use the content in your mailbox or in the Mail app.

You can also close an ad banner by clicking on its trash can icon, or get rid of them completely by coughing up cash:

You can permanently remove ads by buying an Office 365 Home or Office 365 Personal subscription.

Here’s where reality is thrown into a spin, literally. Microsoft PR supremo Frank Shaw said a few hours ago, after the ads were spotted:

This was an experimental feature that was never intended to be tested broadly and it is being turned off.

Never intended to be tested broadly, and shut down immediately, yet until it was spotted it had an official FAQ on Office.com, which was also hastily nuked from orbit, and was rolled out in highly populated nations. Talk about being caught with your hand in the cookie jar.

Source: Microsoft slips ads into Windows 10 Mail client – then U-turns so hard, it warps fabric of reality • The Register

Couple Who Ran Retro ROM Site (With Games You Can’t Buy Any More) to Pay Nintendo $12 Million

Nintendo has won a lawsuit seeking to take two large retro-game ROM sites offline on charges of copyright infringement. The judgement, made public today, rules in Nintendo’s favour and states that the owners of the sites LoveROMS.com and LoveRETRO.co will have to pay a total settlement of $12 million to Nintendo. The complaint was originally filed by the company in an Arizona federal court in July, and has since led to a swift wave of self-censorship by popular retro and emulator ROM sites, which fear they may be sued by Nintendo as well.

LoveROMS.com and LoveRETRO.co were the joint property of the couple Jacob and Cristian Mathias, before Nintendo sued them for what it called “brazen and mass-scale infringement of Nintendo’s intellectual property rights.” The suit never went to court; instead, the couple sought to settle after accepting the charge of direct and indirect copyright infringement. TorrentFreak reports that a permanent injunction, prohibiting them from using, sharing, or distributing Nintendo ROMs or other materials in the future, has been included in the settlement. Additionally, all games, game files, and emulators previously on the sites and in their custody must be handed over to the Japanese game developer, along with a $12.23 million settlement figure. It is unlikely, as TorrentFreak has reported, that the couple will be obligated to pay the full figure; a smaller settlement has likely been negotiated in private.

Instead, the purpose of the enormous settlement amount is to act as a warning or deterrent to other ROM and emulator sites surviving on the internet. And it’s working.

Motherboard previously reported on the way in which Nintendo’s legal crusade against retro ROM and emulator sites is swiftly eroding a large chunk of retro gaming. The impact of this campaign on video games as a whole is potentially catastrophic. Not all games have been preserved adequately by game publishers and developers. Some are locked down to specific regions and haven’t ever been widely accessible.

The accessibility of video games and the gaming industry has always been defined and limited by economic boundaries. There are a multitude of reasons why retro games can’t be easily or reliably accessed by prospective players, and by wiping out ROM sites Nintendo is erasing huge chunks of gaming history. Limiting the accessibility of old retro titles to this extent will undoubtedly affect the future of video games, with classic titles that shaped modern games and gaming development being kept under lock and key by the monolithic hand of powerful game developers.

Since the filing of the suit in July, EmuParadise, a haven for retro games and emulator titles, has shut down. Many other sites have followed suit.

Source: Couple Who Ran ROM Site to Pay Nintendo $12 Million – Motherboard

Wow, that’s a sure-fire way to piss off your fans, Nintendo!

How to Quit Google Completely

Despite all the convenience and quality of Google’s sprawling ecosystem, some users are fed up with the fishy privacy policies the company has recently implemented in Gmail, Chrome, and other services. To its credit, Google has made good changes in response to user feedback, but that doesn’t diminish the company’s looming shadow over the internet at large. If you’re ready to ditch Google, or even just reduce its presence in your digital life, this guide is here to help.

Since Google owns some of the best and most-used apps, websites, and internet services, making a clean break is difficult—but not impossible. We’re going to take a look at how to leave the most popular Google services behind, and how to keep Google from tracking your data. We’ve also spent serious time researching and testing great alternatives to Google’s offerings, so you can leave the Big G without having to buy new devices or swear fealty to another major corporation.

Source: How to Quit Google Completely

Chinese ‘gait recognition’ tech IDs people by how they walk

Chinese authorities have begun deploying a new surveillance tool: “gait recognition” software that uses people’s body shapes and how they walk to identify them, even when their faces are hidden from cameras.

Already used by police on the streets of Beijing and Shanghai, “gait recognition” is part of a push across China to develop artificial-intelligence and data-driven surveillance that is raising concern about how far the technology will go.

Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters (165 feet) away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work.

“You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”

Watrix announced last month that it had raised 100 million yuan ($14.5 million) to accelerate the development and sale of its gait recognition technology, according to Chinese media reports.

Chinese police are using facial recognition to identify people in crowds and nab jaywalkers, and are developing an integrated national system of surveillance camera data. Not everyone is comfortable with gait recognition’s use.

Source: Chinese ‘gait recognition’ tech IDs people by how they walk

U.S. Indicts Chinese Hacker-Spies in Conspiracy to Steal Aerospace Secrets

The U.S. Justice Department has charged two Chinese intelligence officers, six hackers, and two aerospace company insiders in a sweeping conspiracy to steal confidential aerospace technology from U.S. and French companies.

For more than five years, two Chinese Ministry of State Security (MSS) spies are said to have run a team of hackers focused on the theft of designs for a turbofan engine used in U.S. and European commercial airliners, according to an unsealed indictment dated October 25. In a statement, the DOJ said a Chinese state-owned aerospace company was simultaneously working to develop a comparable engine.

“The threat posed by Chinese government-sponsored hacking activity is real and relentless,” FBI Special Agent in Charge John Brown of San Diego said in a statement. “Today, the Federal Bureau of Investigation, with the assistance of our private sector, international and U.S. government partners, is sending a strong message to the Chinese government and other foreign governments involved in hacking activities.”

The MSS officers involved were identified as Zha Rong, a division director in the Jiangsu Province regional department (JSSD), and Chai Meng, a JSSD section chief.

At the direction of the MSS officers, the hackers allegedly infiltrated a number of U.S. aerospace companies, including California-based Capstone Turbine, among others in Arizona, Massachusetts, and Oregon, the DOJ said. The officers are also said to have recruited at least two Chinese employees of a French aerospace manufacturer—insiders who allegedly aided the conspiracy by, among other criminal acts, installing the remote access trojan Sakula onto company computers.

Source: U.S. Indicts Chinese Hacker-Spies in Conspiracy to Steal Aerospace Secrets

Now Apps Can Track You Even After You Uninstall Them

If it seems as though the app you deleted last week is suddenly popping up everywhere, it may not be mere coincidence. Companies that cater to app makers have found ways to game both iOS and Android, enabling them to figure out which users have uninstalled a given piece of software lately—and making it easy to pelt the departed with ads aimed at winning them back.

Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap are among the companies that offer uninstall trackers, usually as part of a broader set of developer tools. Their customers include T-Mobile US, Spotify Technology, and Yelp. (And Bloomberg Businessweek parent Bloomberg LP, which uses Localytics.) Critics say they’re a fresh reason to reassess online privacy rights and limit what companies can do with user data. “Most tech companies are not giving people nuanced privacy choices, if they give them choices at all,” says Jeremy Gillula, tech policy director at the Electronic Frontier Foundation, a privacy advocate.

Some providers say these tracking tools are meant to measure user reaction to app updates and other changes. Jude McColgan, chief executive officer of Boston’s Localytics, says he hasn’t seen clients use the technology to target former users with ads. Ehren Maedge, vice president for marketing and sales at MoEngage Inc. in San Francisco, says it’s up to the app makers not to do so. “The dialogue is between our customers and their end users,” he says. “If they violate users’ trust, it’s not going to go well for them.” Adjust, AppsFlyer, and CleverTap didn’t respond to requests for comment, nor did T-Mobile, Spotify, or Yelp.

Uninstall tracking exploits a core element of Apple Inc.’s and Google’s mobile operating systems: push notifications. Developers have always been able to use so-called silent push notifications to ping installed apps at regular intervals without alerting the user—to refresh an inbox or social media feed while the app is running in the background, for example. But if the app doesn’t ping the developer back, it is logged as uninstalled, and the uninstall trackers record that change against the device’s unique advertising ID. That ID makes it easy to identify just who’s holding the phone and to advertise the app to them wherever they go.
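The polling mechanism described above can be sketched in a few lines of server-side logic. This is a hypothetical illustration of the technique only; the class, thresholds, and field names are invented, not any vendor’s actual API:

```python
# Hypothetical sketch of uninstall detection via silent pushes.
# All names and thresholds here are invented for illustration.
PING_INTERVAL = 24 * 3600   # seconds between silent pushes (one per day)
MISSED_PINGS = 3            # unanswered pushes before assuming uninstall

class UninstallTracker:
    def __init__(self):
        # advertising_id -> timestamp of the last acknowledged silent push
        self.last_ack = {}
        self.uninstalled = set()

    def record_ack(self, advertising_id, now):
        """Called when the installed app answers a silent push."""
        self.last_ack[advertising_id] = now
        self.uninstalled.discard(advertising_id)

    def sweep(self, now):
        """Flag devices whose app has stopped answering as uninstalled.

        The advertising ID survives the uninstall, which is what lets
        the flagged IDs be handed to ad networks for win-back campaigns.
        """
        for ad_id, last in self.last_ack.items():
            if now - last > PING_INTERVAL * MISSED_PINGS:
                self.uninstalled.add(ad_id)
        return self.uninstalled
```

Because silent pushes never surface on screen, the user sees no sign of the polling; only the platform vendors can observe the pattern, which is why enforcement falls to Apple and Google.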

The tools violate Apple and Google policies against using silent push notifications to build advertising audiences, says Alex Austin, CEO of Branch Metrics Inc., which makes software for developers but chose not to create an uninstall tracker. “It’s just generally sketchy to track people around the internet after they’ve opted out of using your product,” he says, adding that he expects Apple and Google to crack down on the practice soon. Apple and Google didn’t respond to requests for comment.

Source: Now Apps Can Track You Even After You Uninstall Them – Bloomberg

Oxford study claims data harvesting among Android apps is “out of control”

It’s no secret that mobile apps harvest user data and share it with other companies, but the true extent of this practice may come as a surprise. In a new study carried out by researchers from Oxford University, it’s revealed that almost 90 percent of free apps on the Google Play store share data with Alphabet.

The researchers, who analyzed 959,000 apps from the US and UK Google Play stores, said data harvesting and sharing by mobile apps was now “out of control.”

“We find that most apps contain third party tracking, and the distribution of trackers is long-tailed with several highly dominant trackers accounting for a large portion of the coverage,” reads the report.

It’s revealed that most of the apps, 88.4 percent, could share data with companies owned by Google parent Alphabet. Next came a firm that’s no stranger to data sharing controversies, Facebook (42.5 percent), followed by Twitter (33.8 percent), Verizon (26.27 percent), Microsoft (22.75 percent), and Amazon (17.91 percent).

According to The Financial Times, which first reported the research, the information these apps share with third parties can include age, gender, location, and details about a user’s other installed apps. The data “enables construction of detailed profiles about individuals, which could include inferences about shopping habits, socio-economic class or likely political opinions.”

Big firms then use the data for a variety of purposes, such as credit scoring and targeting political messages, but its main use is often ad targeting. Not surprising, given that revenue from online advertising now exceeds $59 billion per year.

According to the research, the average app transfers data to five tracker companies, which pass the data on to larger firms. The biggest culprits are news apps and those aimed at children, both of which tend to have the most third-party trackers associated with them.
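The study’s headline numbers are coverage figures: the share of apps containing at least one tracker owned by a given company. A minimal sketch of that computation, on toy data rather than the Oxford corpus:

```python
def tracker_coverage(app_trackers):
    """Given {app: set of tracker-owner companies}, return each
    company's coverage: the fraction of apps carrying at least one
    of its trackers."""
    counts = {}
    for owners in app_trackers.values():
        for owner in owners:
            counts[owner] = counts.get(owner, 0) + 1
    total = len(app_trackers)
    return {owner: n / total for owner, n in counts.items()}

# Toy example (invented apps, not the study's data):
apps = {
    "news_app":   {"Alphabet", "Facebook", "Twitter"},
    "kids_game":  {"Alphabet", "Facebook"},
    "flashlight": {"Alphabet"},
    "notes":      set(),
}
coverage = tracker_coverage(apps)
# coverage["Alphabet"] is 0.75: 3 of the 4 apps embed an Alphabet tracker
```

The long-tailed distribution the researchers describe falls out of exactly this kind of table: a handful of owners appear in most apps, while hundreds of small trackers each appear in only a few.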

Source: New study claims data harvesting among Android apps is “out of control” – TechSpot

SIM Cards That Force Your Mobile Data Through Tor Are Coming

It’s increasingly difficult to expect privacy when you’re browsing online, so a non-profit in the UK is working to build the power of Tor’s anonymity network right into the heart of your smartphone.

Brass Horn Communications is experimenting with all sorts of ways to improve Tor’s usability for UK residents. The Tor browser bundle for PCs can help shield your IP address from snoopers and data-collection giants. It’s not perfect and people using it for highly illegal activity can still get caught, but Tor’s system of sending your data through the various nodes on its network to anonymize user activity works for most people. It can help users surf the full web in countries with restrictive firewalls and simply make the average Joe feel like they have more privacy. But it’s prone to user error, especially on mobile devices. Brass Horn hopes to change that.

Brass Horn’s founder, Gareth Llewelyn, told Motherboard his organization is “about sticking a middle finger up to mobile filtering, mass surveillance.” Llewelyn has been unnerved by the UK’s relentless drive to push through legislation that enables surveillance and undermines encryption. Along with his efforts to build out more Tor nodes in the UK to increase its notoriously slow speeds, Llewelyn is now beta-testing a SIM card that will automatically route your data through Tor and save people the trouble of accidentally browsing unprotected.

Currently, mobile users’ primary option is to use the Tor browser, still in alpha, coupled with software called Orbot to funnel app activity through the network. Only apps that have a proxy feature, like Twitter, are compatible. It’s also only available for Android users.

You’ll still need Orbot installed on your phone to use Brass Horn’s SIM card, and the whole idea is that you won’t be able to get online without running on the Tor network. There’s some minor setup that the organization walks you through, and from that point on you’ll apparently never accidentally find yourself online without the privacy protections that Tor provides.

In an email to Gizmodo, Llewelyn said that he does not recommend using the card in a dual-SIM device. He said the whole point of the project is that a user “cannot accidentally send packets via Clearnet, this is to protect one’s privacy, anonymity and/or protect against NITs etc, if one were to use a dual SIM phone it would negate the failsafe and would not be advisable.” But if a user so desired, they could still go with a dual-SIM setup.

You’re also unprotected if you end up on WiFi, but in general, this is a way for journalists, activists, and rightly cautious users to know they’re always protected.

The SIM acts as a provider and Brass Horn essentially functions as a mobile virtual network operator that piggybacks on other networks. The site for Brass Horn’s Onion3G service claims it’s a safer mobile provider because it only issues “private IP addresses to remote endpoints which if ‘leaked’ won’t identify you or Brass Horn Communications as your ISP.” It costs £2.00 per month and £0.025 per megabyte transferred over the network.
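At those rates, a month’s bill is simple arithmetic; as a quick sketch using the quoted prices:

```python
def onion3g_monthly_cost(megabytes):
    """Estimate a month's bill at Brass Horn's quoted Onion3G rates:
    a £2.00 base fee plus £0.025 per megabyte transferred."""
    BASE_FEE = 2.00   # GBP per month
    PER_MB = 0.025    # GBP per megabyte
    return BASE_FEE + PER_MB * megabytes

# 1 GB (1024 MB) of Tor-routed data would cost about £27.60,
# so the service is priced for light, deliberate use, not streaming.
```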

A spokesperson for the Tor Project told Gizmodo that it hasn’t been involved in this project and that protecting mobile data can be difficult. “This looks like an interesting and creative way to approach that, but it still requires that you put a lot of trust into your mobile provider in ensuring that no leaks happen,” they said.

Info on joining the beta is available here, and Brass Horn expects to make its SIM card available to the general public in the UK next year. Most people should wait until some independent research has been done on the service, but it’s an intriguing idea that could provide a model for other countries.

Source: SIM Cards That Force Your Mobile Data Through Tor Are Coming

Facebook, Google sued for ‘secretly’ slurping people’s whereabouts – while Feds lap it up

Facebook and Google are being sued in two proposed class-action lawsuits for allegedly deceptively gathering location data on netizens who thought they had opted out of such cyber-stalking.

The legal challenges stem from revelations earlier this year that even after users actively turn off “location history” on their smartphones, their location is still gathered, stored, and exploited to sling adverts.

Both companies use weasel words in their support pages to continue to gather the valuable data while seemingly giving users the option to opt out – and that “deception” is at the heart of both lawsuits.

In the first, Facebook user Brett Heeger claims the antisocial network is misleading folks by providing the option to stop the gathering and storing of their location data but in reality it continues to grab the information and add it to a “Location History” feature that it then uses for targeted advertising.

“Facebook misleads its users by offering them the option to restrict Facebook from tracking, logging and storing their private location information, but then continuing to track, log, and store that location information regardless of users’ choices,” the lawsuit, filed in California, USA, states. “In fact, Facebook secretly tracks, logs and stores location data for all of its users – including those who have sought to limit the information about their locations.”

This action is “deceptive” and offers users a “false sense of security,” the lawsuit alleges. “Facebook’s false assurances are intended to make users feel comfortable continuing to use Facebook and share their personal information so that Facebook can continue to be profitable, at the expense of user privacy… Advertisers pay Facebook to place advertisements because Facebook is so effective at using location information to target advertisement to consumers.”

And over to you, Google

In the second lawsuit, also filed in Cali, three people – Leslie Lee of Wyoming and Colorado residents Stacy Smedley and Fredrick Davis – make the same claim: that Google is deceiving smartphone users by giving them the option to “pause” the gathering of their location data through a setting called “Location History.”

In reality, however, Google continues to gather location data through its two most popular apps – Search and Maps – even when you actively choose to turn off location data. Instead, users have to go to a separate setting called “Web and App Activity” to really turn the gathering off. There is no mention of location data within that setting and nowhere does Google refer people to that setting in order to really stop location tracking.

As such, Google is engaged in a “deliberate, deceptive practice to collect personal information from which they can generate millions of dollars in revenue by covertly recording contemporaneous location data about Android and iPhone mobile phone users who are using Google Maps or other Google applications and functionalities, but who have specifically opted out of such tracking,” the lawsuit alleges.

Both legal salvos hope to become class-action lawsuits with jury trials, so potentially millions of other affected users will be able to join the action and propel the case forward. The lawsuits seek compensation and damages as well as injunctions preventing both companies from gathering such data without gaining the explicit consent of users.

Meanwhile at the other end of the scale, the ability for the companies to constantly gather user location data has led to them being targeted by law enforcement in an effort to solve crimes.

Warrant required

Back in June, the US Supreme Court made a landmark ruling about location data, requiring cops and FBI agents to get a warrant before accessing such records from mobile phone operators.

But it is not clear what hurdles or parameters need to be met before a court should sign off on such a warrant, leading to an increasing number of cases where the Feds have provided times, dates, and rough geographic locations and asked Google, Facebook, Snapchat, and others to provide the data of everyone who was in the vicinity at the time.

This so-called “reverse location” order has many civil liberties groups concerned because it effectively exposes innocent individuals’ personal data to the authorities simply because they were in the same rough area where a crime was carried out.

[…]

Leaky apps

And if all that wasn’t bad enough, this week a paper [PDF] by eggheads at the University of Oxford in the UK, who studied the source code of just under one million apps, found that Google and Facebook were top of the list when it came to gathering data on users from third parties.

Google parent company Alphabet receives user data from an incredible 88 per cent of apps on the market. Often this information was accumulated through third parties and included information like age, gender and location. The data “enables construction of detailed profiles about individuals, which could include inferences about shopping habits, socio-economic class or likely political opinions,” the paper revealed.

Facebook received data from 43 per cent of the apps, followed by Twitter with 34 per cent. Mobile operator Verizon, renowned for its “super cookie” tracker, got information from 26 per cent of apps; Microsoft 23 per cent; and Amazon 18 per cent.

Source: Facebook, Google sued for ‘secretly’ slurping people’s whereabouts – while Feds lap it up • The Register

Star Wars: KOTOR Fan Remake Shutting Down After Cease And Desist From Lucasfilm

Back in 2016, an ambitious group of fans began work on an Unreal Engine 4 “reboot” of role-playing, light-sabering classic Star Wars: Knights of the Old Republic called Apeiron. The project has made impressive progress since then, but it emitted a tragic Wilhelm scream this week when Lucasfilm lawyers zapped it out of existence.

As is often the case with ambitious fan projects, Apeiron received a cease-and-desist letter from lawyers representing the series its team was trying to pay homage to. Apeiron’s developer, Poem Studios, took to Twitter to share the news. “After a few days, I’ve exhausted my options to keep it [Apeiron] afloat; we knew this day was a possibility. I’m sorry and may the force be with you,” Poem wrote alongside a screenshot of a letter purporting to be from Lucasfilm.

“Notwithstanding Poem Studios’ affection and enthusiasm for the Star Wars franchise and the original KOTOR game, we must object to any unlicensed use of Lucasfilm intellectual property,” reads Lucasfilm’s letter. It goes on to call Apeiron’s use of Star Wars characters, artwork, and images on its website and social media “infringing” and demands that 1) Star Wars materials be removed, 2) the Apeiron team cease development and destroy its code, and 3) it not use any Lucasfilm properties in future games.

Source: Star Wars: KOTOR Fan Remake Shutting Down After Cease And Desist From Lucasfilm

What a bunch of dicks at Lucasfilm.

EU hijacking: self-driving car data will be copyrighted…by the manufacturer – not to be released by drivers / engineers / researchers / mechanics

Today, the EU held a routine vote on regulations for self-driving cars, when something decidedly out of the ordinary happened…

The autonomous vehicle rules contained a clause that affirmed that “data generated by autonomous transport are automatically generated and are by nature not creative, thus making copyright protection or the right on databases inapplicable.”

This is pretty inoffensive stuff. Copyright protects creative work, not factual data, and the telemetry generated by your car — self-driving or not — is not copyrighted.

But just before the vote, members of the European Peoples’ Party (the same bloc that pushed through the catastrophic new Copyright Directive) stopped the proceedings with a rare “roll call” and voted down the clause.

In other words, they’ve snuck in a space for the telemetry generated by autonomous vehicles to become someone’s property. This is data that we will need to evaluate the safety of autonomous vehicles, to fine-tune their performance, to ensure that they are working as the manufacturer claims — data that will not be public domain (as copyright law dictates), but will instead be someone’s exclusive purview, to release or withhold as they see fit.

Who will own this data? It’s unlikely that it will be the owners of the vehicles. Just look at the data generated by farmers who own John Deere tractors. These tractors create a wealth of soil data, thanks to humidity sensors, location sensors and torque sensors — a centimeter-accurate grid of soil conditions in the farmer’s own field.

But all of that data is confiscated by John Deere, locked up behind the company’s notorious DRM and only made available in fragmentary form to the farmer who generated it (it comes bundled with the app that you get if you buy Monsanto seed) — meanwhile, the John Deere company aggregates the data for sale into the crop futures market.

It’s already the case that most auto manufacturers use license agreements and DRM to lock up your car so that you can’t fix it yourself or take it to an independent service center. The aggregated data from millions of self-driving cars across the EU aren’t just useful to public safety analysts, consumer rights advocates, security researchers and reviewers (who would benefit from this data living in the public domain) — it is also a potential gold-mine for car manufacturers who could sell it to insurers, market researchers and other deep-pocketed corporate interests who can profit by hiding that data from the public who generate it and who must share their cities and streets with high-speed killer robots.

Source: EU hijacking: self-driving car data will be copyrighted…by the manufacturer / Boing Boing

Ancestry Sites Could Soon Expose Nearly Anyone’s Identity, Researchers Say

Genetic testing has helped plenty of people gain insight into their ancestry, and some services even help users find their long-lost relatives. But a new study published this week in Science suggests that the information uploaded to these services can be used to figure out your identity, regardless of whether you volunteered your DNA in the first place.

The researchers behind the study were inspired by the recent case of the alleged Golden State Killer.

Earlier this year, Sacramento police arrested 72-year-old Joseph James DeAngelo for a wave of rapes and murders allegedly committed by DeAngelo in the 1970s and 1980s. And they claimed to have identified DeAngelo with the help of genealogy databases.

Traditional forensic investigation relies on matching certain snippets of DNA, called short tandem repeats, to a potential suspect. But these snippets only allow police to identify a person or their close relatives in a heavily regulated database. Thanks to new technology, the investigators in the Golden State Killer case isolated, from DNA the suspected killer left behind at a crime scene, the same kind of genetic material now collected by consumer genetic testing companies. Then they searched for matches within public genealogy databases.

This information, coupled with other historical records, such as newspaper obituaries, helped investigators create a family tree of the suspect’s ancestors and other relatives. After zeroing in on potential suspects, including DeAngelo, the investigators collected a fresh DNA sample from DeAngelo—one that matched the crime scene DNA perfectly.

But while the detective work used to uncover DeAngelo’s alleged crimes was certainly clever, some experts in genetic privacy have been worried about the grander implications of this method. That includes Yaniv Erlich, a computer engineer at Columbia University and chief science officer at MyHeritage, an Israel-based ancestry and consumer genetic testing service.

Erlich and his team wanted to see how easy it would be in general to use the method to find someone’s identity by relying on the DNA of distant and possibly unknown family members. So they looked at more than 1.2 million anonymous people who had gotten testing from MyHeritage, and specifically excluded anyone who had immediate family members also in the database. The idea was to figure out whether a stranger’s DNA could indeed be used to crack your identity.

They found that more than half of these people had distant relatives—meaning third cousins or further—who could be spotted in their searches. For people of European descent, who made up 75 percent of the sample, the hit rate was closer to 60 percent. And for about 15 percent of the total sample, the authors were also able to find a second cousin.

Much like the Golden State investigators, the team found they could trace back someone’s identity in the database with relative ease by using these distant relatives and other demographic but not overly specific information, such as the target’s age or possible state residence.

[…]

According to the researchers, it will take only about 2 percent of an adult population having their DNA profiled in a database before it becomes theoretically possible to trace any person’s distant relatives from a sample of unknown DNA—and therefore, to uncover their identity. And we’re getting ever closer to that tipping point.

“Once we reach 2 percent, nearly everyone will have a third cousin match, and a substantial amount will have a second cousin match,” Erlich explained. “My prediction is that for people of European descent, we’ll reach that threshold within two or three years.”
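Erlich’s tipping-point claim can be roughly illustrated with a simple coverage model. The cousin count below is an assumed round figure for illustration (people typically have a few hundred third cousins), not the paper’s actual estimate:

```python
def match_probability(db_fraction, n_relatives):
    """P(at least one of n_relatives appears in a database covering
    db_fraction of the population), assuming each relative is included
    independently with probability db_fraction."""
    return 1 - (1 - db_fraction) ** n_relatives

# With an assumed 200 third cousins, even 2% database coverage makes
# a match nearly certain: match_probability(0.02, 200) is about 0.98.
```

The point of the sketch is that the number of distant relatives is large, so even a small database fraction makes a hit overwhelmingly likely; the paper’s own model accounts for population structure and detectability, which this toy version ignores.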

[…]

What this means for you: If you want to protect your genetic privacy, the best thing you can do is lobby for stronger legal protections and regulations. Because whether or not you’ve ever submitted your DNA for testing, someone, somewhere, is likely to be able to pick up your genetic trail.

Source: Ancestry Sites Could Soon Expose Nearly Anyone’s Identity, Researchers Say

US democracy is turning away so many people at polling stations that they need a “What to Do If You’re Turned Away at the Polls” guide

Several states have instituted stricter voter ID laws since the 2016 presidential election; more still are purging voter rolls in the lead-up to the election, and the recent Supreme Court decision to uphold Ohio’s aggressive purging law means you can expect many more people to be removed. So even if you’re registered to vote (and you should really double-check), you might find yourself turned away at the polls come November 6.

Source: What to Do If You’re Turned Away at the Polls

One man, one vote? Not so much. Two parties and no voters.

Instagram explores sharing your precise location history with Facebook even when not using the app

Instagram is currently testing a feature that would allow it to share your location data with Facebook, even when you’re not using the app, reports app researcher Jane Manchun Wong (via TechCrunch). The option, which Wong notes is being tested as a setting you have to opt in to, allows Facebook products to “build and use a history of precise locations” which the company says “helps you explore what’s around you, get more relevant ads and helps improve Facebook.” When activated, the service will report your location “even if you leave the app.”

The discovery of the feature comes just weeks after Instagram’s co-founders resigned from the company, reportedly as a result of Facebook CEO Mark Zuckerberg’s meddling in the service. Examples of this meddling include removing Instagram’s attribution from posts re-shared to Facebook, and badged notifications inside Instagram that encouraged people to open the Facebook app. With the two men who were deeply involved in the day-to-day running of Instagram now gone, such intrusions are expected to increase.

Instagram is not the only service whose data Facebook has sought to pool. Back in 2016, the company announced that it would share user data between WhatsApp and Facebook in order to offer better friend suggestions. The practice was later halted in the European Union thanks to its GDPR legislation, and WhatsApp’s CEO and co-founder later left over data privacy concerns.

Source: Instagram explores sharing your precise location history with Facebook – The Verge

Wait – Instagram continually monitors your location too?!

Lawyers for owners of data-grabbing Vizio Smart TVs propose final deal, around $20 per person. Lawyers themselves get $5.6 million.

Lawyers representing Vizio TV owners have asked a federal judge in Orange County, California to sign off on a proposed $17 million class-action settlement with the company, covering an affected class of 16 million people, who must opt in to get any money. Vizio also agrees to delete all the data it collected.

Notice of the lawsuit will be shown directly on the Vizio Smart TVs, three separate times, as well as through paper mailings.

When all is said and done, new court filings submitted on Thursday say, each class member who files a valid claim will get a payout of somewhere between $13 and $31. By contrast, their lawyers will collectively earn a maximum payout of $5.6 million in fees.

Source: Lawyers for Vizio Smart TV owners propose final deal, around $20 per person | Ars Technica