‘Royalty-Free’ Music Supplied By YouTube Audio Library Results in Mass Copyright Claims on All of a YouTuber’s Income by Sony – for using a sample from a 1956(!) song

A YouTuber who used a royalty-free track supplied by YouTube itself has had all of his videos copyright claimed by companies including SonyATV and Warner Chappell. According to the music outfits, Matt Lowne’s use of the track ‘Dreams’ by Joakim Karud means that they are now entitled to all of his revenue.

[…]

In common with many YouTubers, Matt didn’t want any copyright issues on his channel. So, to play it safe, he obtained the track ‘Dreams’ by Joakim Karud from YouTube’s very own audio library for use in his intro. Unfortunately, this strategy of obtaining supposedly risk-free music from a legitimate source still managed to backfire. (See update below, YouTube statement)

Very early last Friday, Matt says he received a “massive barrage” of emails from YouTube, targeting “pretty much all” of his KSP videos. The emails said that Matt’s videos “may have content owned or licensed by SonyATV, PeerMusic, Warner Chappell, Audiam and LatinAutor.”

[…]

A clearly exasperated Matt took to YouTube, noting that any ads that now show up on his videos “split up the revenue between all the companies listed” in the emails, with Matt himself “allowed to keep what’s left of that.” He doesn’t know what that amount might be, because he says there’s just no way of knowing.

After highlighting the vague use of the word “may” in YouTube’s emails to him, Matt then went on to describe the real “kick in the gut”, which revolves around the track itself.

‘Dreams’ composer Joakim Karud allows anyone to use his music on YouTube, even commercially, for free. And the fact that Matt downloaded the track from YouTube’s own library was the icing on this particularly bitter cake.

Matt said he had to take time out to manually protest the automated claims against his account, but he says his overtures were immediately rejected, “almost like it’s an automated bot or something.” But things get worse from there.

After contesting each claim and having all of those rejected, Matt says the only option left is to appeal every single one. However, if an appeal is lost, the video in question will be removed completely and a strike will be placed against his account.

It’s three strikes and you’re out on YouTube, so this is not an attractive option for Matt if the music companies somehow win the fight. So, instead, Matt is appealing against just one of the complaints in the hope that he can make some progress without putting his entire account at risk.

[…]

“SonyATV & Warner Chappell have claimed 24 of my videos because the royalty free song Dreams by Joakim Karud (from the OFFICIAL YOUTUBE AUDIO LIBRARY BTW) uses a sample from Kenny Burrell Quartet’s ‘Weaver of Dream’,” a Twitter user wrote on Saturday.

Sure enough, if one turns to the WhoSampled archive, Dreams is listed as having sampled Weaver of Dreams, a track from 1956 to which Sony/ATV Music Publishing LLC and Warner/Chappell Music, Inc. own the copyrights.

[…]

YouTube has been in touch to state that the music in question was not part of its official audio library. In a tweet directed at Matt Lowne, YouTube further added that it may have been made available by an unofficial channel that confusingly calls itself the YouTube Audio Library.

Source: ‘Royalty-Free’ Music Supplied By YouTube Results in Mass Video Demonetization (Updated) – TorrentFreak

There we go, copyright is completely insane.

The USPTO wants to know if artificial intelligence can own the content it creates

The US office responsible for patents and trademarks is trying to figure out how AI might call for changes to copyright law, and it’s asking the public for opinions on the topic. The United States Patent and Trademark Office (USPTO) published a notice in the Federal Register last month saying it’s seeking comments, as spotted by TorrentFreak.

The office is gathering information about the impact of artificial intelligence on copyright, trademark, and other intellectual property rights. It outlines thirteen specific questions, ranging from what happens if an AI creates a copyright-infringing work to whether it’s legal to feed an AI copyrighted material.

It starts off by asking if output made by AI without any creative involvement from a human should qualify as a work of authorship that’s protectable by US copyright law. If not, then what degree of human involvement “would or should be sufficient so that the work qualifies for copyright protection?”

Other questions ask if the company that trains an AI should own the resulting work, and if it’s okay to use copyrighted material to train an AI in the first place. “Should authors be recognized for this type of use of their works?” asks the office. “If so, how?”

The office, which, among other things, advises the government on copyright, often seeks public opinion to understand new developments and hear from people who actually deal with them. Earlier this year, the office similarly asked for public opinion on AI and patents.

Source: The USPTO wants to know if artificial intelligence can own the content it creates – The Verge

Germany forces Apple to open the iPhone’s NFC chip to other payment providers, breaking off some little part of the monopoly

A new German law passed yesterday requires Apple to allow other mobile payment services access to the iPhone’s NFC chip for payments, so that they can fully compete with Apple Pay.

Apple initially locked down the NFC chip completely so that it could be used only by Apple Pay. It later allowed some third-party apps to use the chip, but has always refused to do so for other mobile payment apps.

Banks have been demanding access to the NFC chip for their own payment apps since 2016. Australia’s three biggest banks claimed that locking them out of the NFC chip was anti-competitive behavior.

National Australia Bank, Commonwealth Bank of Australia and Westpac Banking Corp all want the right to access the NFC chip in iPhones for their own mobile wallet apps.

Reuters reports that the law doesn’t name Apple specifically, but would apply to the tech giant. The piece somewhat confusingly refers to access to the NFC chip by third-party payment apps as Apple Pay.

A German parliamentary committee unexpectedly voted in a late-night session on Wednesday to force the tech giant to open up Apple Pay to rival providers in Germany.

This came in the form of an amendment to an anti-money laundering law that was adopted late on Thursday by the full parliament and is set to come into effect early next year.

The legislation, which did not name Apple specifically, will force operators of electronic money infrastructure to offer access to rivals for a reasonable fee.

Source: iPhone’s NFC chip should be open to other mobile wallet apps – 9to5Mac

Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information

A majority of Americans believe their online and offline activities are being tracked and monitored by companies and the government with some regularity. It is such a common condition of modern life that roughly six-in-ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government.

[…]

Large shares of U.S. adults are not convinced they benefit from this system of widespread data gathering. Some 81% of the public say that the potential risks they face because of data collection by companies outweigh the benefits, and 66% say the same about government data collection. At the same time, a majority of Americans report being concerned about the way their data is being used by companies (79%) or the government (64%). Most also feel they have little or no control over how these entities use their personal information.

[…]

Fully 97% of Americans say they are ever asked to approve privacy policies, yet only about one-in-five adults overall say they always (9%) or often (13%) read a company’s privacy policy before agreeing to it. Some 38% of all adults maintain they sometimes read such policies, but 36% say they never read a company’s privacy policy before agreeing to it.

[…]

Among adults who say they ever read privacy policies before agreeing to their terms and conditions, only a minority – 22% – say they read them all the way through.

There is also a general lack of understanding about data privacy laws among the general public: 63% of Americans say they understand very little or nothing at all about the laws and regulations that are currently in place to protect their data privacy.

Source: Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information | Pew Research Center

PayPal Pulls Out of Pornhub, Hurting ‘Hundreds of Thousands’ of Performers, because American companies are prudish? What happened to the US of Woodstock, hippies and free love?

Late Wednesday night, Pornhub announced that PayPal is no longer supporting payments for Pornhub—a decision that will impact thousands of performers using the site as a source of income.

Most visitors to Pornhub likely think of it as a website that simply provides access to an endless supply of free porn, but Pornhub also allows performers to upload, sell, and otherwise monetize videos they make themselves. Performers who used PayPal to get paid for this work now have to switch to a different payment method.

“We are all devastated by PayPal’s decision to stop payouts to over a hundred thousand performers who rely on them for their livelihoods,” the company said on its blog. It then directed models to set up a new payment method, with instructions on how PayPal users can transfer pending payments.

“We sincerely apologize if this causes any delays and we will have staff working around the clock to make sure all payouts are processed as fast as possible on the new payment methods,” the statement said.

A PayPal spokesperson told Motherboard: “Following a review, we have discovered that Pornhub has made certain business payments through PayPal without seeking our permission. We have taken action to stop these transactions from occurring.”

PayPal is one of many payment processors that have discriminated against sex workers for years. Its acceptable use policy states that “certain sexually oriented materials or services” are forbidden—phrasing that’s intentionally vague enough to allow circumstances like this to happen whenever the company wants.


The list of payment platforms, payment apps, and banks that forbid sexual services in their terms of use is very, very long, and includes everything from Venmo to Visa. Many of these terms have been in place for nearly a decade—and payment processors have been hostile toward sex work long before harmful legislation like the Fight Online Sex Trafficking Act came into law last year. But those laws only help to embolden companies to kick sex workers off their platforms, and make the situation even more confusing and frustrating for performers.

Source: PayPal Pulls Out of Pornhub, Hurting ‘Hundreds of Thousands’ of Performers – VICE

Health websites are sharing sensitive medical data with Google, Facebook, and Amazon

Popular health websites are sharing private, personal medical data with big tech companies, according to an investigation by the Financial Times. The data, including medical diagnoses, symptoms, prescriptions, and menstrual and fertility information, are being sold to companies like Google, Amazon, Facebook, and Oracle and smaller data brokers and advertising technology firms, like Scorecard and OpenX.

The investigation: The FT analyzed 100 health websites, including WebMD, Healthline, health insurance group Bupa, and parenting site Babycentre, and found that 79% of them dropped cookies on visitors, allowing them to be tracked by third-party companies around the internet. This was done without consent, making the practice illegal under European Union regulations. By far the most common destination for the data was Google’s advertising arm DoubleClick, which showed up in 78% of the sites the FT tested.
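
As an illustration of the kind of signal the FT was hunting for, here is a minimal Python sketch that fetches a page and flags references to a few well-known ad-tech domains. The domain list is a small sample and the FT’s actual methodology (instrumented browser sessions capturing cookies and dynamically loaded trackers) was far more thorough, so treat this as a toy:

```python
import re
from urllib.parse import urlparse

import requests

# A few of the ad-tech domains named in the article; a real scan would
# use a much longer list (e.g. a published tracker blocklist).
TRACKER_DOMAINS = {
    "doubleclick.net",
    "facebook.net",
    "amazon-adsystem.com",
    "scorecardresearch.com",
    "openx.net",
}

def find_trackers(url: str) -> set[str]:
    """Return third-party tracker hosts statically referenced in a page's HTML."""
    html = requests.get(url, timeout=10).text
    hosts = set()
    for link in re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(link).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            hosts.add(host)
    return hosts

if __name__ == "__main__":
    # Hypothetical URL -- substitute any site you want to check.
    print(find_trackers("https://example-health-site.com"))
```

Note that this only catches statically referenced resources; trackers injected by JavaScript at runtime need a real browser to observe.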

Responses: The FT piece contains a list of all the comments from the many companies involved. Google, for example, said that it has “strict policies preventing advertisers from using such data to target ads.” Facebook said it was conducting an investigation and would “take action” against websites “in violation of our terms.” And Amazon said: “We do not use the information from publisher websites to inform advertising audience segments.”

A window into a broken industry: This sort of rampant rule-breaking has been a dirty secret in the advertising technology industry, which is worth $200 billion globally, ever since EU countries adopted the General Data Protection Regulation in May 2018. A recent inquiry by the UK’s data regulator found that the sector is rife with illegal practices, as in this case where privacy policies did not adequately outline which data would be shared with third parties or what it would be used for. The onus is now on EU and UK authorities to act to put an end to them.

Source: Health websites are sharing sensitive medical data with Google, Facebook, and Amazon – MIT Technology Review

Facebook says government demands for user data are at a record high, most by US govt

The social media giant said the number of government demands for user data increased by 16% to 128,617 during the first half of this year, compared to the second half of last year.

That’s the highest number of government demands it has received in any reporting period since it published its first transparency report in 2013.

The U.S. government led the way with the most requests — 50,741 demands for user data, with some account or user data handed over to authorities in 88% of cases. Facebook said two-thirds of all the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data.

But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.

The report also said the social media giant had detected 67 disruptions of its services in 15 countries, compared to 53 disruptions in nine countries during the second half of last year.

The report also said Facebook pulled 11.6 million pieces of content that violated its policies on child nudity and sexual exploitation of children, up from 5.8 million in the same period a year earlier.

The social media giant also included Instagram in its report for the first time, noting the removal of 1.68 million pieces of content during the second and third quarters of the year.

Source: Facebook says government demands for user data are at a record high | TechCrunch

Fighting Disinformation Online: A Database of Web Tools

The rise of the internet and the advent of social media have fundamentally changed the information ecosystem, giving the public direct access to more information than ever before. But it’s often nearly impossible to distinguish between accurate information and low-quality or false content. This means that disinformation — false or intentionally misleading information that aims to achieve an economic or political goal — can become rampant, spreading further and faster online than it ever could in another format.

As part of its Truth Decay initiative, RAND is responding to this urgent problem. Researchers identified and characterized the universe of online tools developed by nonprofits and civil society organizations to target online disinformation. The tools in this database are aimed at helping information consumers, researchers, and journalists navigate today’s challenging information environment. Researchers identified and characterized each tool on a number of dimensions, including the type of tool, the underlying technology, and the delivery format.

Source: Fighting Disinformation Online: A Database of Web Tools | RAND

Facebook bug shows camera activated in background during app use – the bug being that you could see the camera being activated

When you’re scrolling through Facebook’s app, the social network could be watching you back, concerned users have found. Multiple people have found and reported that their iPhone cameras were turned on in the background while they were looking at their feed.

The issue came to light through several posts on Twitter. Users noted that their cameras were activated behind Facebook’s app as they were watching videos or looking at photos on the social network.

After people clicked on a video to view it full screen, returning it to normal would trigger a bug in which Facebook’s mobile layout was slightly shifted to the right. In the open space on the left, you could now see the phone’s camera activated in the background.

This was documented in multiple cases, with the earliest incident on Nov. 2.

It’s since been tweeted a couple of other times, and CNET has also been able to replicate the issue.

Facebook didn’t immediately respond to a request for comment, but Guy Rosen, its vice president of integrity, tweeted Tuesday that this seems like a bug and the company’s looking into the matter.

Source: Facebook bug shows camera activated in background during app use – CNET

Facebook has to stop fake ads of celebrity endorsement of Cryptocurrencies in NL

John de Mol has successfully sued FB and forced them to remove fake ads in which he appears to endorse bitcoins and other cryptocurrencies (he doesn’t). The ads will not be allowed in the future either, and FB must give him the details of the parties who placed the adverts on FB. FB is liable for fines of up to EUR 1.1 million if they don’t comply.

Between October 2018 and at least March 2019, a series of fake ads were placed on FB and Instagram that had him endorsing the crypto. He didn’t endorse them at all, and not only that, they were a scam: the buyers never received any crypto after purchasing from the sites. The scammers received at least EUR 1.7 million.

The court did not accept FB’s argument that they are a neutral party just passing on information. The court argues that FB has a responsibility to guard against breaches of third-party rights. The fact that the ads decreased drastically in frequency after John de Mol contacted FB shows, according to the court, that guarding against these breaches is well within FB’s technical capabilities.

Source: Facebook moet nepadvertenties John de Mol weren – Emerce

Study of over 11,000 online stores finds ‘dark patterns’ on 1,254 sites

A large-scale academic study that analyzed more than 53,000 product pages on more than 11,000 online stores found widespread use of user interface “dark patterns” – practices meant to mislead customers into making purchases based on false or misleading information.

The study — presented last week at the ACM CSCW 2019 conference — found 1,818 instances of dark patterns present on 1,254 of the ∼11K shopping websites (∼11.1%) researchers scanned.

“Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns,” researchers said.

But while the vast majority of UI dark patterns were meant to trick users into subscribing to newsletters or allowing broad data collection, some dark patterns were downright foul, trying to mislead users into making additional purchases, either by sneaking products into shopping carts or tricking users into believing products were about to sell out.

Of these, the research team found 234 instances, deployed across 183 websites.

Below are some examples of the UI dark patterns that the research team found employed on today’s most popular online stores; a rough detection sketch follows the list.

1. Sneak into basket

Adding additional products to users’ shopping carts without their consent.

Prevalence: 7 instances across 7 websites.


2. Hidden costs

Revealing previously undisclosed charges to users right before they make a purchase.

Prevalence: 5 instances across 5 websites.


3. Hidden subscription

Charging users a recurring fee under the pretense of a one-time fee or a free trial.

Prevalence: 14 instances across 13 websites.


4. Countdown timer

Indicating to users that a deal or discount will expire, using a countdown timer.

Prevalence: 393 instances across 361 websites.


5. Limited-time message

Indicating to users that a deal or sale will expire soon without specifying a deadline, thus creating uncertainty.

Prevalence: 88 instances across 84 websites.


6. Confirmshaming

Using language and emotion (shame) to steer users away from making a certain choice.

Prevalence: 169 instances across 164 websites.


7. Visual interference

Using style and visual presentation to steer users to or away from certain choices.

Prevalence: 25 instances across 24 websites.


8. Trick questions

Using confusing language to steer users into making certain choices.

Prevalence: 9 instances across 9 websites.


9. Pressured selling

Pre-selecting more expensive variations of a product, or pressuring the user to accept the more expensive variations of a product and related products.

Prevalence: 67 instances across 62 websites.


10. Activity messages

Informing the user about the activity on the website (e.g., purchases, views, visits).

Prevalence: 313 instances across 264 websites.


11. Testimonials of uncertain origin

Testimonials on a product page whose origin is unclear.

Prevalence: 12 instances across 12 websites.


12. Low-stock message

Indicating to users that limited quantities of a product are available, increasing its desirability.

Prevalence: 632 instances across 581 websites.


13. High-demand message

Indicating to users that a product is in high demand and likely to sell out soon, increasing its desirability.

Prevalence: 47 instances across 43 websites.


14. Hard to cancel

Making it easy for the user to sign up for a recurring subscription while requiring an email or call to customer care to cancel.

Prevalence: 31 instances across 31 websites.


15. Forced enrollment

Coercing users to create accounts or share their information to complete their tasks.

Prevalence: 6 instances across 6 websites.
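
As a flavour of how such patterns can be hunted automatically, here is a minimal Python sketch that flags a few of the categories above using regular expressions over raw HTML. The real study used an automated web crawler plus clustering of page text rather than hand-written regexes, so the heuristics and names here are illustrative assumptions only:

```python
import re

import requests

# Toy heuristics loosely inspired by the paper's categories; the
# researchers' actual pipeline was far more robust than keyword matching.
HEURISTICS = {
    "countdown-timer": re.compile(r"(offer|deal|sale)\s+ends\s+in", re.I),
    "low-stock": re.compile(r"only\s+\d+\s+left(\s+in\s+stock)?", re.I),
    "high-demand": re.compile(r"\d+\s+(people|others)\s+are\s+(viewing|looking)", re.I),
    "confirmshaming": re.compile(r"no\s+thanks,?\s+i\s+(don.t|do not)\s+(want|like)", re.I),
}

def scan_page(url: str) -> list[str]:
    """Return the names of dark-pattern heuristics matching the page's HTML."""
    html = requests.get(url, timeout=10).text
    return [name for name, rx in HEURISTICS.items() if rx.search(html)]

if __name__ == "__main__":
    # Hypothetical product page URL.
    print(scan_page("https://example-shop.com/product/123"))
```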


The research team behind this project, made up of academics from Princeton University and the University of Chicago, expects these UI dark patterns to become even more popular in the coming years.

One reason, they said, is that there are third-party companies that currently offer dark patterns as a turnkey solution, either in the form of store extensions and plugins or on-demand store customization services.

Following their study, the research team identified 22 third parties that provide turnkey solutions for dark pattern-like behavior.


Readers can find out more about dark patterns on modern online stores in the whitepaper “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites.”

The researchers’ raw scan data and tools can be downloaded from this GitHub repository.

Source: Study of over 11,000 online stores finds ‘dark patterns’ on 1,254 sites | ZDNet

Google Reportedly Amassed Private Health Data on Millions of People Without Their Knowledge – a repeat of October 2019 and 2017 in the UK

The Wall Street Journal reported Monday that the tech giant partnered with Ascension, a non-profit and Catholic health systems company, on the program code-named “Project Nightingale.” According to the Journal, Google began its initiative with Ascension last year, and it involves everything from diagnoses, lab results, birth dates, patient names, and other personal health data—all of it reportedly handed over to Google without first notifying patients or doctors. The Journal said this amounts to data on millions of Americans spanning 21 states.

“By working in partnership with leading healthcare systems like Ascension, we hope to transform the delivery of healthcare through the power of the cloud, data analytics, machine learning, and modern productivity tools—ultimately improving outcomes, reducing costs, and saving lives,” Tariq Shaukat, president of Google Cloud, said in a statement.

Beyond the alarming reality that a tech company can collect data about people without their knowledge for its own uses, the Journal noted it’s legal under the Health Insurance Portability and Accountability Act (HIPAA). When reached for comment, representatives for both companies pointed Gizmodo to a press release about the relationship—which the Journal stated was published after its report—that states: “All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”

Still, the Journal report raises concerns about whether the data handling is indeed as secure as both companies appear to think it is. Citing a source familiar with the matter as well as related documents, the paper said at least 150 employees at Google have access to a significant portion of the health data Ascension handed over on millions of people.

Google hasn’t exactly proven itself to be infallible when it comes to protecting user data. Remember when Google+ users had their data exposed and Google did nothing to alert them, in order to shield its own ass? Or when a Google contractor leaked more than a thousand Assistant recordings, and the company defended itself by claiming that most of its audio snippets aren’t reviewed by humans? Not exactly the kind of stuff you want to read about a company that may have your medical history on hand.

Source: Google Reportedly Amassed Private Health Data on Millions of People Without Their Knowledge

Google has been given the go-ahead to access five years’ worth of sensitive NHS patient data.

In a deal signed last month, the internet giant was handed hospital records of thousands of patients in England.

New documents show the data will include medical history, diagnoses, treatment dates and ethnic origin.

The news has raised concerns about the privacy of the data, which could now be harvested and commercialised.

It comes almost a year after Google absorbed the London-based AI lab DeepMind Health, a leading health technology developer.

DeepMind was bought by Google’s parent company Alphabet for £400 million ($520m) in 2014 and up until November 2018 had maintained independence.

But as of this year DeepMind transferred control of its health division to the parent company in California.

DeepMind had contracts to process medical records from three NHS trusts covering nine hospitals in England to develop its Streams mobile application.

From Google gets green light to access FIVE YEARS’ worth of sensitive patient data from NHS, sparking privacy fears

A data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust gives the clearest picture yet of what the company is doing and what sensitive data it now has access to.

The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.


DeepMind announced in February that it was working with the NHS, saying it was building an app called Streams to help hospital staff monitor patients with kidney disease. But the agreement suggests that it has plans for a lot more.

This is the first we’ve heard of DeepMind getting access to historical medical records, says Sam Smith, who runs health data privacy group MedConfidential. “This is not just about kidney function. They’re getting the full data.”

The agreement clearly states that Google cannot use the data in any other part of its business. The data itself will be stored in the UK by a third party contracted by Google, not in DeepMind’s offices. DeepMind is also obliged to delete its copy of the data when the agreement expires at the end of September 2017.

All data needed

Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”

Source: Revealed: Google AI has access to huge haul of NHS patient data (2017)

DHS expects to have detailed biometrics on 260 million people by 2022 – and will keep them in the cloud, where they will never be stolen or hacked *cough*

The US Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency’s Office of Procurement Operations reviewed by Quartz.

That’s about 40 million more than the agency’s 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.

A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world’s second-largest, behind only India’s countrywide biometric ID network in size. The traveler data kept by DHS is shared with other US agencies, state and local law enforcement, as well as foreign governments.

The first two stages of the HART system are being developed by US defense contractor Northrop Grumman, which won the $95 million contract in February 2018. DHS wasn’t immediately available to comment on its plans for its database.

[…]

Last month’s DHS presentation describes IDENT as an “operational biometric system for rapid identification and verification of subjects using fingerprints, iris, and face modalities.” The new HART database, it says, “builds upon the foundational functionality within IDENT,” to include voice data, DNA profiles, “scars, marks, and tattoos,” and the as-yet undefined “other biometric modalities as required.” EFF researchers caution some of the data will be “highly subjective,” such as information gleaned during “officer encounters” and analysis of people’s “relationship patterns.”

EFF worries that such tracking “will chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate,” since such specific data points could be used to identify “political affiliations, religious activities, and familial and friendly relationships.”

[…]

EFF researchers said in a 2018 blog post that facial-recognition software, like what the DHS is using, is “frequently…inaccurate and unreliable.” DHS’s own tests found the systems “falsely rejected as many as 1 in 25 travelers,” according to EFF, which calls out potential foreign partners in countries such as the UK, where false-positives can reportedly reach as high as 98%. Women and people of color are misidentified at rates significantly higher than whites and men, and darker skin tones increase one’s chances of being improperly flagged.

“DHS is also partnering with airlines and other third parties to collect face images from travelers entering and leaving the US,” the EFF said. “When combined with data from other government agencies, these troubling collection practices will allow DHS to build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like airports, but anywhere there are cameras.”

Source: DHS expects to have biometrics on 260 million people by 2022 — Quartz

T-Mobile says it owns exclusive rights to the color magenta and the letter T. German court agrees.

Startup insurance provider Lemonade is trying to make the best of a sour situation after T-Mobile parent Deutsche Telekom claimed it owns the exclusive rights to the color magenta.

New York-based Lemonade is a 3-year-old company that lives completely online and mostly focuses on homeowners and renter’s insurance. The company uses a similar color to magenta — it says it’s “pink” — in its marketing materials and its website. But Lemonade was told by German courts that it must cease using its color after launching its services in that country, which is also home to T-Mobile owner Deutsche Telekom. Although the ruling only applies in Germany, Lemonade says it fears the decision will set a precedent and expand to other jurisdictions such as the U.S. or the rest of Europe.

“If some brainiac at Deutsche Telekom had invented the color, their possessiveness would make sense,” Daniel Schreiber, CEO and co-founder of Lemonade, said in a statement. “Absent that, the company’s actions just smack of corporate bully tactics, where legions of lawyers attempt to hog natural resources – in this case a primary color – that rightfully belong to everyone.”

A spokesman for Deutsche Telekom confirmed that it “asked the insurance company Lemonade to stop using the color magenta in the German market,” while adding that the “T” in “Deutsche Telekom” is registered to the brand. “Deutsche Telekom respects everyone’s trademark rights but expects others to do the same,” the spokesman said in an emailed statement to Ad Age.

Although Lemonade has complied with the ruling by removing its pink color from marketing materials in Germany, it’s also trying to turn the legal matter into an opportunity. The company today began throwing some shade on social media under the hashtag “#FreeThePink,” though a quick check on Twitter shows it’s gained little traction thus far: Schreiber, the company’s CEO, holds the top tweet under “#FreeThePink” with 13 retweets and 42 likes.

Lemonade also filed a motion today with the European Union Intellectual Property Office, or EUIPO, to invalidate Deutsche Telekom’s magenta trademark.

Source: T-Mobile says it owns exclusive rights to the color magenta | AdAge

What. The. Fuck.

Facebook says 100 developers may have improperly accessed user data, like Cambridge Analytica did

Facebook on Tuesday disclosed that as many as 100 software developers may have improperly accessed user data, including the names and profile pictures of people in specific groups on the social network.

Facebook said in a blog post that it recently discovered some apps retained access to this type of user data despite changes the company made to its service in April 2018 to prevent this. The company said it has removed this access and reached out to the 100 developer partners who may have accessed the information. Facebook said that at least 11 developer partners accessed this type of data in the last 60 days.

“Although we’ve seen no evidence of abuse, we will ask them to delete any member data they may have retained and we will conduct audits to confirm that it has been deleted,” the company said in the blog post.

The company did not say how many users were affected.

Facebook has been restricting software developer access to its user data following reports in March 2018 that political consulting firm Cambridge Analytica had improperly accessed the data of 87 million Facebook users, potentially to influence the outcome of the 2016 U.S. presidential election.

Source: Facebook says 100 developers may have improperly accessed user data

NL ISP Ziggo doesn’t have to share customer details of downloaders

Dutch Filmworks demanded the subscriber data linked to 377 IP addresses that it determined had illegally downloaded a movie. The judge said no, due to a complete lack of transparency from DFW on how its decision tree works and on the amount of money it wants to fine the suspects.

Source: Ziggo hoeft geen klantgegevens downloaders te delen – Emerce

Hooray for someone not letting the movie mafia take the law into their own hands!

How to Automatically Delete some of Your Google Data


This process is almost identical on both mobile and web. We’ll focus on the latter, but the former is easy to figure out, too:

  1. Go to your Google activity dashboard (you’ll need to sign in to your Google account first).
  2. Click “Activity controls” from the left-hand sidebar.
  3. Scroll down to the data type you wish to manage, then select “Manage Activity.”
  4. On this next page, click on “Choose how long to keep” under the calendar icon.
  5. Select the auto-deletion time you wish (three or 18 months), or you can choose to delete your data manually.
  6. Click “Next” to save your changes.
  7. Repeat these steps for each of the types of data you want to be auto-deleted. For your Location History in particular, you’ll need to click on “Today” in the upper-left corner first, and then click on the gear icon in the lower-right corner of your screen. Then, select “Automatically delete Location History,” and pick a time.

Source: How to Automatically Delete Your Google Data, and Why You Should

Tech and mobile companies want to monetise your data … but are scared of GDPR – good, that means GDPR works!

The vast majority of technology, media and telecom (TMT) companies want to monetise customer data, but are concerned about regulations such as Europe’s GDPR, according to research from law firm Simmons & Simmons.

The outfit surveyed 350 global business leaders in the TMT sector to understand their approach to data commercialisation. It found that 78 per cent of companies have some form of data commercialisation in place but only 20 per cent have an overarching plan for its use.

Alex Brown, global head of the TMT sector at Simmons & Simmons, observed that the firm’s clients are increasingly seeking advice on the legal ways they can monetise data. He said that can either be internal, using insights into customer behaviour to improve services, or external, selling anonymised data to third parties.

One example of data monetisation within the sector is Telefónica’s Smart Steps business, which uses “fully anonymised and aggregated mobile network data to measure and compare the number of people visiting an area at any time”.

That information is then sold on to businesses to provide insight into their customer base.

Brown said: “All mobile network operators know your location because the phone is talking to the network, so through that they know a lot about people’s movement. That aggregated data could be used by town planners, transport networks, or retailers to work out the best place to site a new store.”

However, he added: “There is a bit of a data paralysis at the moment. GDPR and what we’ve seen recently in terms of enforcement – albeit related to breaches – and the Google fine in France… has definitely dampened some innovation.”

Earlier this year France’s data protection watchdog fined Google €50m for breaching European Union online privacy rules, the biggest penalty levied against a US tech giant. It said Google lacked transparency and clarity in the way it informs users about its handling of personal data and failed to properly obtain their consent for personalised ads.

But Brown pointed out that as long as privacy policies are properly laid out and the data is fully anonymised, companies wanting to make money off data should not fall foul of GDPR.
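
To make “fully anonymised and aggregated” a bit more concrete, here is a minimal Python sketch of the aggregation-with-suppression step such products depend on: individual events are collapsed into per-area visitor counts, and areas with too few distinct visitors are dropped so small groups can’t be singled out. The data shape, the threshold and the names are illustrative assumptions, not Telefónica’s actual pipeline:

```python
from collections import defaultdict

# Suppress any area with fewer distinct visitors than this; the value 10
# is illustrative -- real deployments choose thresholds case by case.
K_THRESHOLD = 10

def area_footfall(pings: list[tuple[str, str]]) -> dict[str, int]:
    """Count distinct visitors per area, suppressing small groups.

    `pings` is a list of (user_id, area_id) pairs standing in for network
    events; a real pipeline would hash or strip identifiers before this step.
    """
    visitors: dict[str, set[str]] = defaultdict(set)
    for user_id, area_id in pings:
        visitors[area_id].add(user_id)
    return {area: len(users) for area, users in visitors.items()
            if len(users) >= K_THRESHOLD}
```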

Source: Tech and mobile companies want to monetise your data … but are scared of GDPR • The Register

Google Sidewalk Labs document reveals company’s early vision for a big-brother city within a city, with private tax powers, criminal justice and a huge personal data slurp based on a social credit system

A confidential Sidewalk Labs document from 2016 lays out the founding vision of the Google-affiliated development company, which included having the power to levy its own property taxes, track and predict people’s movements and control some public services.

The document, which The Globe and Mail has seen, also describes how people living in a Sidewalk community would interact with and have access to the space around them – an experience based, in part, on how much data they’re willing to share, and which could ultimately be used to reward people for “good behaviour.”

Known internally as the “yellow book,” the document was designed as a pitch book for the company, and predates Sidewalk’s relationship and formal agreements with Toronto by more than a year. Peppered with references to Disney theme parks and noted futurist Buckminster Fuller, it says Sidewalk intended to “overcome cynicism about the future.”

But the 437-page book documents how much private control of city services and city life Google parent company Alphabet Inc.’s leadership envisioned when it created the company.

[…]

“The ideas contained in this 2016 internal paper represent the result of a wide-ranging brainstorming process very early in the company’s history,” Sidewalk spokesperson Keerthana Rang said. “Many, if not most, of the ideas it contains were never under consideration for Toronto or discussed with Waterfront Toronto and governments. The ideas that we are actually proposing – which we believe will achieve a new model of inclusive urban growth that makes housing more affordable for families, creates new jobs for residents, and sets a new standard for a healthier planet – can all be found at sidewalktoronto.ca.”

[…]

To carry out its vision and planned services, the book states Sidewalk wanted to control its area much like Disney World does in Florida, where in the 1960s it “persuaded the legislature of the need for extraordinary exceptions.” This could include granting Sidewalk taxation powers. “Sidewalk will require tax and financing authority to finance and provide services, including the ability to impose, capture and reinvest property taxes,” the book said. The company would also create and control its own public services, including charter schools, special transit systems and a private road infrastructure.

Sidewalk’s early data-driven vision also extended to public safety and criminal justice.

The book mentions both the data-collection opportunities for police forces (Sidewalk notes it would ask for local policing powers similar to those granted to universities) and the possibility of “an alternative approach to jail,” using data from so-called “root-cause assessment tools.” This would guide officials in determining a response when someone is arrested, such as sending someone to a substance abuse centre. The overall criminal justice system and policing of serious crimes and emergencies would be “likely to remain within the purview of the host government’s police department,” however.

Data collection plays a central role throughout the book. Early on, the company notes that a Sidewalk neighbourhood would collect real-time position data “for all entities” – including people. The company would also collect a “historical record of where things have been” and “about where they are going.” Furthermore, unique data identifiers would be generated for “every person, business or object registered in the district,” helping devices communicate with each other.

There would be a quid pro quo to sharing more data with Sidewalk, however. The document describes a tiered level of services, where people willing to share data can access certain perks and privileges not available to others. Sidewalk visitors and residents would be “encouraged to add data about themselves and connect their accounts, either to take advantage of premium services like unlimited wireless connectivity or to make interactions in the district easier,” it says.

Shoshana Zuboff, the Harvard University professor emerita whose book The Age of Surveillance Capitalism investigates the way Alphabet and other big-tech companies are reshaping the world, called the document’s revelations “damning.” The community Alphabet sought to build when it launched Sidewalk Labs, she said, was like a “for-profit China” that would “use digital infrastructure to modify and direct social and political behaviour.”

While Sidewalk has since moved away from many of the details in its book, Prof. Zuboff contends that Alphabet tends to “say what needs be said to achieve commercial objectives, while specifically camouflaging their actual corporate strategy.”

[…]

Those choosing to remain anonymous would not be able to access all of the area’s services: automated taxi services would not be available to anonymous users, and some merchants might be unable to accept cash, the book warns.

The document also describes reputation tools that would lead to a “new currency for community co-operation,” effectively establishing a social credit system. Sidewalk could use these tools to “hold people or businesses accountable” while rewarding good behaviour, such as by rewarding a business’s good customer service with an easier or cheaper renewal process on its licence.

This “accountability system based on personal identity” could also be used to make financial decisions.

“A borrower’s stellar record of past consumer behaviour could make a lender, for instance, more likely to back a risky transaction, perhaps with the interest rates influenced by digital reputation ratings,” it says.

The company wrote that it would own many of the sensors it deployed in the community, foreshadowing a battle over data control that has loomed over the Toronto project.

Source: Sidewalk Labs document reveals company’s early vision for data collection, tax powers, criminal justice – The Globe and Mail

Facebook ends appeal against ICO Cambridge Analytica micro-fine: Doesn’t admit liability, gives away £500k

Facebook has ended its appeal against the UK Information Commissioner’s Office and will pay the outstanding £500,000 fine for breaches of data protection law relating to the Cambridge Analytica scandal.

Prior to today’s announcement, the social network had been appealing against the fine, alleging bias and requesting access to ICO documents related to the regulator’s decision making. The ICO, in turn, was appealing a decision that it should hand over these documents.

The issue for the watchdog was the misuse of UK citizens’ Facebook profile information, specifically the harvesting and subsequent sale of data scraped from their profiles to Cambridge Analytica, the controversial British consulting firm used by US prez Donald Trump’s election campaign.

The app that collected the data was “thisisyourdigitallife”, created by Cambridge developer Aleksandr Kogan. It hoovered up Facebook users’ profiles, dates of birth, current city, photos in which those users were tagged, pages they had liked, posts on their timeline, friends’ lists, email addresses and the content of Facebook messages. The data was then processed in order to create a personality profile of the user.

“Given the way our platform worked at the time,” Zuck has said, “this meant Kogan was able to access tens of millions of their friends’ data”. Facebook has always claimed it learned of the data misuse from news reports, though this has been disputed.

Both sides will now end the legal fight and Facebook will pay the ICO a fine but make no admission of liability or guilt. The money is not kept by the data protection watchdog but goes to the Treasury consolidated fund and both sides will pay their own costs. The ICO spent an eye-watering £2.5m on the Facebook probe.

Source: Facebook ends appeal against ICO micro-fine: Admit liability? Never. But you can have £500k • The Register

GitLab pulls U-turn on plan to crank up usage telemetry after both staff and customers cry foul

VP of product Scott Williamson announced on 10 October that “to make GitLab better faster, we need more data on how users are using GitLab”.

GitLab is a web application that runs on Linux, with options for self-hosting or using the company’s cloud service. It is open source, with both free and licensed editions.

Williamson said that while nothing was changing with the free self-hosted Community Edition, the hosted and licensed products would all now “include additional JavaScript snippets (both open source and proprietary) that will interact with both GitLab and possibly third-party SaaS telemetry services (we will be using Pendo)”. The only opt-out was to be support for the Do Not Track browser mechanism.
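
For reference, honouring Do Not Track server-side is simple; here is a minimal sketch (in Python/Flask purely for illustration; GitLab is a Rails application, and this is not its code) that omits the telemetry snippet when the DNT header is set:

```python
from flask import Flask, request

app = Flask(__name__)

def telemetry_allowed() -> bool:
    # Browsers send "DNT: 1" when the user has enabled Do Not Track.
    return request.headers.get("DNT") != "1"

@app.route("/dashboard")
def dashboard():
    page = "<h1>Dashboard</h1>"
    if telemetry_allowed():
        # Placeholder for a third-party snippet (GitLab named Pendo);
        # skipped entirely for opted-out users.
        page += '<script src="/assets/telemetry.js"></script>'
    return page

if __name__ == "__main__":
    app.run()
```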

GitLab customers and even some staff were not pleased. For example, Yorick Peterse, a GitLab staff developer, said telemetry should be opt-in and that the requisite update to the terms of service would break some API usage (because bots do not know how to accept terms of service), adding: “We have plenty of customers who would not be able to use GitLab if it starts tracking data for on-premises installations.”

There is more background in the issue here, which concerns adding the identity of the user to the Snowplow analytics service used by GitLab.

“This effectively changes our Snowplow integration from being an anonymous aggregated thing to a thing that tracks user interaction,” engineering manager Lukas Eipert said back in July. “Ethically, I have problems with this and legally this could have a big impact privacy wise (GDPR). I hereby declare my highest degree of objection to this change that I can humanly express.”

On the other hand, GitLab CFO Paul Machle said: “This should not be an opt in or an opt out. It is a condition of using our product. There is an acceptance of terms and the use of this data should be included in that.”

On 23 October, an email was sent to GitLab customers announcing the changes.

Yesterday, however, CEO Sid Sijbrandij put the plans on hold, saying: “Based on considerable feedback from our customers, users, and the broader community, we reversed course the next day and removed those changes before they went into effect. Further, GitLab will commit to not implementing telemetry in our products that sends usage data to a third-party product analytics service.” Sijbrandij also promised a review of what went wrong. “We will put together a new proposal for improving the user experience and share it for feedback,” he said.

Despite this embarrassing backtrack, the incident has demonstrated that GitLab does indeed have an open process, with more internal discussion on view than would be the case with most companies. Nevertheless, the fact that GitLab came so close to using personally identifiable tracking without specific opt-in has tarnished its efforts to appear more community-driven than alternatives like Microsoft-owned GitHub.

Source: GitLab pulls U-turn on plan to crank up usage telemetry after both staff and customers cry foul • The Register

Google has officially purchased Fitbit for $2.1 billion. Now has your fitness data and a wearable OS that’s actually quite good.

Google’s Senior Vice President of Devices & Services, Rick Osterloh, broke the news on the official Google blog, saying:

Over the years, Google has made progress with partners in this space with Wear OS and Google Fit, but we see an opportunity to invest even more in Wear OS as well as introduce Made by Google wearable devices into the market. Fitbit has been a true pioneer in the industry and has created engaging products, experiences and a vibrant community of users. By working closely with Fitbit’s team of experts, and bringing together the best AI, software and hardware, we can help spur innovation in wearables and build products to benefit even more people around the world.

Earlier this week, on October 28, a report from Reuters surfaced indicating that Google had made a bid to purchase Fitbit. It’s a big move, but it’s also one that makes good sense.

Google’s Wear OS wearable platform has been in something of a rut for the last few years. The company introduced the Android Wear to Wear OS rebrand in 2018 to revitalize its branding/image, but the hardware offerings have still been pretty ho-hum. Third-party watches like the Fossil Gen 5 have proven to be quite good, but without a proper “Made by Google” smartwatch and other major players, such as Samsung, ignoring the platform, it’s been left to just sort of exist.

Source: Google has officially purchased Fitbit for $2.1 billion | Android Central

Google Accused of Creating Spy Tool to Monitor Employees

Google employees are accusing the company’s leadership of developing an internal surveillance tool that they believe will be used to monitor workers’ attempts to organize protests and discuss labor rights.

Earlier this month, employees said they discovered that a team within the company was creating the new tool for the custom Google Chrome browser installed on all workers’ computers and used to search internal systems. The concerns were outlined in a memo written by a Google employee and reviewed by Bloomberg News and by three Google employees who requested anonymity because they aren’t authorized to talk to the press.

Source: Google Accused of Creating Spy Tool to Monitor Employees – Bloomberg

BBC News launches ‘dark web’ Tor mirror

The BBC has made its international news website available via the Tor network, in a bid to thwart censorship attempts.

The Tor browser is privacy-focused software used to access the dark web.

The browser can obscure who is using it and what data is being accessed, which can help people avoid government surveillance and censorship.

Countries including China, Iran and Vietnam are among those that have tried to block access to the BBC News website or programmes.

Instead of visiting bbc.co.uk/news or bbc.com/news, users of the Tor browser can visit the new bbcnewsv2vjtpsuy.onion web address. Clicking this web address will not work in a regular web browser.
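
If you would rather script access than use the Tor browser, the following Python sketch works, assuming a local Tor client is listening on its default SOCKS port 9050 and the onion service is still online (the SOCKS extra is needed: pip install "requests[socks]"):

```python
import requests

# "socks5h" (note the h) makes Tor resolve the .onion name itself;
# plain "socks5" would try, and fail, to resolve it locally.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("http://bbcnewsv2vjtpsuy.onion", proxies=TOR_PROXY, timeout=60)
print(resp.status_code, len(resp.text))
```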

The dark web copy of the BBC News website will be the international edition, as seen from outside the UK.

It will include foreign language services such as BBC Arabic, BBC Persian and BBC Russian.

But UK-only content and services such as BBC iPlayer will not be accessible, due to broadcast rights.


What is Tor?

Tor is a way to access the internet that requires software, known as the Tor browser, to use it.

The name is an acronym for The Onion Router. Just as there are many layers to the vegetable, there are many layers of encryption on the network.

It was originally designed by the US Naval Research Laboratory, and continues to receive funding from the US State Department.

It attempts to hide a person’s location and identity by sending data across the internet via a very circuitous route involving several “nodes” – which, in this context, means using volunteers’ PCs and computer servers as connection points.

Encryption applied at each hop along this route makes it very hard to connect a person to any particular activity.

Source: BBC News launches ‘dark web’ Tor mirror – BBC News

Junior minister says gov.UK considering facial recognition to verify age of p0rn-watchers

The UK government could use facial recognition to verify the age of Brits online “so long as there is an appropriate concern for privacy,” junior minister for Digital, Culture, Media and Sport Matt Warman said.

The minister was responding to an urgent Parliamentary question directed to Culture Secretary Nicky Morgan about the future of Blighty’s online age-verification system, following her announcement this week that the controversial project had been dropped. He indicated the government is still keen to shield kids from adult material online, one way or another.

“In many ways, this is a technology problem that requires a technology solution,” Warman told the House of Commons on Thursday.

“People have talked about whether facial recognition could be used to verify age, so long as there is an appropriate concern for privacy. All of these are things I hope we will be able to wrap up in the new approach, because they will deliver better results for consumers – child or adult alike.”

The government also managed to spend £2.2m on the aforementioned-and-now-shelved proposal to introduce age-verification checks on netizens viewing online pornography, Warman admitted in his response.

Source: Junior minister says gov.UK considering facial recognition to verify age of p0rn-watchers • The Register