The nation’s largest private equity firm is interested in buying your DNA data. The going rate: $261 per person. That appears to be what Blackstone, the $63 billion private equity giant, is willing to pay for genetic data controlled by one of the major companies gathering it from millions of customers.
Earlier this week, Blackstone announced it was paying $4.7 billion to acquire Ancestry.com, a pioneer in pop genetics that was launched in the 1990s to help people find out more about their family heritage.
Ancestry’s customers get an at-home DNA kit that they send back to the company. Ancestry then adds that DNA information to its database and sends its users a report about their likely family history. The company will also match you to other family members in its system, including distant cousins you may or may not want to hear from. And for up to $400 a year, you can continue to search Ancestry’s database to add to your knowledge of your family tree.
Ancestry has some information, mostly collected from public databases, on hundreds of millions of individuals. But its most valuable data comes from the roughly 18 million people who have taken its DNA tests. And at Blackstone’s $4.7 billion purchase price, that translates to roughly $261 each.
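For what it’s worth, the per-person figure is just the reported purchase price divided by the size of the DNA database; a quick back-of-the-envelope check:

```typescript
// Back-of-the-envelope check of the implied price per DNA profile.
const purchasePriceUsd = 4.7e9; // Blackstone's reported price for Ancestry
const dnaCustomers = 18e6;      // people who have taken Ancestry's DNA test

console.log((purchasePriceUsd / dnaCustomers).toFixed(2)); // "261.11"
```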
In an attempt to correct the perception of a small but very vocal minority that claims Facebook is silencing conservative voices on its platforms, the company has reportedly swung too far in the opposite direction and essentially given conservative pages a free pass to spew their bullshit online.
According to leaked documents reviewed by NBC, Facebook relaxed its fact-checking rules for conservative news outlets and personalities, including Breitbart and former Fox News stooges Diamond and Silk, so that they wouldn’t be penalized for spreading misinformation. This report comes just a day after a Buzzfeed exposé detailing how a Facebook employee was allegedly fired after collecting evidence of this preferential treatment of right-wing pages.
Per its standards, Facebook issues strikes to pages (those of news outlets, politicians, influencers, etc.) that have repeatedly spread inaccurate or misleading information, as determined by the company’s fact-checking partners. If an account receives two strikes in a 90-day period, it receives “repeat offender” status and can be shadowbanned or even temporarily lose advertising privileges. Facebook employees work with fact-checking partners to triage these misinformation flags, with high-priority issues receiving an “escalation” tag that pushes them on to company higher-ups for review.
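To make the mechanics concrete, here is a minimal, hypothetical sketch of the strike window described above (the two-strikes-in-90-days rule and the waiver step the leaked escalations allegedly exploited); the names and structure are illustrative, not Facebook’s actual systems.

```typescript
// Hypothetical model of the strike policy described in the report.
interface Strike {
  issuedAt: Date;
  waived: boolean; // the escalations team could reportedly waive strikes
}

interface Page {
  name: string;
  strikes: Strike[];
}

const WINDOW_DAYS = 90;
const REPEAT_OFFENDER_THRESHOLD = 2;

function isRepeatOffender(page: Page, now: Date = new Date()): boolean {
  const windowStart = new Date(now.getTime() - WINDOW_DAYS * 24 * 60 * 60 * 1000);
  const activeStrikes = page.strikes.filter(
    (s) => !s.waived && s.issuedAt >= windowStart
  );
  return activeStrikes.length >= REPEAT_OFFENDER_THRESHOLD;
}

// Waiving a single strike is enough to keep a page under the threshold.
const page: Page = {
  name: "example-page",
  strikes: [
    { issuedAt: new Date("2020-07-01"), waived: false },
    { issuedAt: new Date("2020-08-01"), waived: true }, // waived after escalation
  ],
};
console.log(isRepeatOffender(page, new Date("2020-08-07"))); // false
```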
According to an archive of these escalations from the last six months that was leaked to NBC, Facebook employees on the misinformation escalations team waived strikes issued to some conservative pages under direct oversight from senior leadership. Roughly two-thirds of the cases listed concerned conservative pages, including those of Donald Trump Jr., Eric Trump, and Gateway Pundit.
An odd piece of news, if not propaganda, considering the big tech companies were slammed during their hearings by the conspiracy-seeing anti-vaxxer senators in the room.
Your Google Home speaker may have been quietly recording sounds around your house without your permission or authorization, it was revealed this week.
The Chocolate Factory admitted it had accidentally turned on a feature that allowed its voice-controlled AI-based assistant to activate and snoop on its surroundings. Normally, the device only starts actively listening in and making a note of what it hears after it has heard wake words, such as “Ok, Google” or “Hey, Google,” for privacy reasons. Prior to waking, it’s constantly listening out for those words, but is not supposed to keep a record of what it hears.
Yet punters noticed their Google Homes had been recording random sounds, without any wake word uttered, when they started receiving notifications on their phone that showed the device had heard things like a smoke alarm beeping, or glass breaking in their homes – all without giving their approval.
Google said the feature had been accidentally turned on during a recent software update, and it has now been switched off, Protocol reported. It may be that this feature is or was intended to be used for home security at some point: imagine the assistant waking up whenever it hears a break in, for instance. Google just bought a $450m, or 6.6 per cent, stake in anti-burglary giant ADT.
A German court has sided with Google and rejected requests to wipe entries from search results. The cases hinged on whether the right to be forgotten outweighed the public’s right to know.
Germany’s highest court agreed on Monday with lower courts and rejected the two plaintiffs’ appeals over privacy concerns.
In the first case, a former managing director of a charity had demanded Google remove links to certain news articles that appeared in searches of his name. The articles from 2011 reported that the charity was in financial trouble and that the manager had called in sick. He later argued in court that information on his personal health issues should not be divulged to the public years later.
The court ruled that whether links to critical articles have to be removed from the search list always depends on a comprehensive consideration of fundamental rights in the individual case.
A second case was referred to the European Court of Justice. It concerned two leaders of a financial services company who sought to have links to negative reports about their investment model removed. The couple had argued that the US-based websites, which came up in searches for their names, were full of fake news and sought to market other financial services providers.
[…]
Links are only deleted from searches in Europe and would still appear as normal in other regions. Any data “forgotten” by Google, which mostly provides links to material published by others, is only removed from its search results, not from the internet.
The cases stem from a 2014 ruling in the European Court of Justice (ECJ), which found that EU citizens had the right to request search engines, such as Alphabet’s Google and Microsoft’s Bing, remove “inaccurate, inadequate, irrelevant or excessive” search results linked to their name. The case centered on a Spaniard who found that when his name was Googled, it returned links to an advertisement for a property auction related to an unpaid social welfare debt. He argued the debt had long since been settled.
YouTube is embroiled in a very public spat with songwriters and music publishers in Denmark, via local collection society Koda.
According to Koda – Denmark’s equivalent of ASCAP/BMI (US) or PRS For Music (UK) – YouTube has threatened to remove “Danish music content” (i.e. music written by Danish songwriters) from its service.
The cause of this threat is a disagreement between the two parties over the remuneration of songwriters and publishers in the market.
YouTube and Koda’s last multi-year licensing deal expired in April. Since then, the two parties have been operating under a temporary license agreement.
At the same time, Polaris, the umbrella body for collection societies in the Nordics, has been negotiating with YouTube over a new Scandinavia-wide licensing agreement.
But in a statement to media today (July 31), Koda claims YouTube is insisting that – in order to extend its temporary deal in Denmark – Koda must now agree to a near-70% reduction in payments to composers and songwriters.
YouTube has fired back at this claim, suggesting that under its existing temporary deal with Koda (which expires today), the body “earned back less than half of the guarantee payments” handed over by the service.
[…] wait – how on earth does a guarantee payment relate to the amount you remunerate people?
In response to Koda’s refusal to agree to YouTube’s proposed deal, the society claims that “on the evening of Thursday 30 July, Google announced that they will soon remove all Danish music content on YouTube”.
Reports out of Denmark suggest YouTube may pull the plug on this content as soon as this Saturday.
[…]
“While we’ve had productive conversations we have been unable to secure a fair and equitable agreement before our existing one expired. They are asking for substantially more than what we pay our other partners. This is not only unfair to our other YouTube partners and creators, it is unhealthy for the wider economics of our industry.
“Without a new license, we’re unable to make their content available in Denmark. Our doors remain open to Koda to bring their content back to YouTube.”
YouTube added in a statement to MBW: “We take copyright law very seriously. As our license expires today and since we have been unable to secure an agreement we will remove identified Koda content from the platform.”
Koda says it “cannot accept” YouTube’s terms, and that as a result “Google have now unilaterally decided that Koda’s members cannot have their content shown on YouTube”.
[…]
Koda’s media director, Kaare Struve, said: “Google have always taken an ‘our way or the highway’ approach, but even for Google, this is a low point.
“Of course, Google know that they can create enormous frustration among our members by denying them access to YouTube – and among the many Danes who use YouTube every day.
“We can only suppose that by doing so, YouTube hope to be able to push through an agreement, one where they alone dictate all terms.”
Koda says that ever since its first agreement with YouTube was signed in 2013, “the level of payments received from YouTube has been significantly lower than the level of payment [distributed] by subscription-based services”.
Koda’s CEO, Gorm Arildsen, said: “It is no secret that our members have been very dissatisfied with the level of payment received for the use of their music on YouTube for many years now. And it’s no secret that we at Koda have actively advocated putting an end to the tech giants’ free-ride approach and underpayment for artistic content in connection with the EU’s new Copyright Directive.
“The fact that Google now demands that the payments due from them should be reduced by almost 70% in connection with a temporary contract extension seems quite bizarre.”
Well guys, I recommend you move over to Vimeo. At least that way you’re helping to break the monopoly. Not that I believe in the slightest that Koda is working in the best interests of artists as much as it’s filling its own pockets, but there you go.
Yesterday, the Internet Archive filed our response to the lawsuit brought by four commercial publishers to end the practice of Controlled Digital Lending (CDL), the digital equivalent of traditional library lending. CDL is a respectful and secure way to bring the breadth of our library collections to digital learners. Commercial ebooks, while useful, only cover a small fraction of the books in our libraries. As we launch into a fall semester that is largely remote, we must offer our students the best information to learn from—collections that were purchased over centuries and are now being digitized. What is at stake with this lawsuit? Every digital learner’s access to library books. That is why the Internet Archive is standing up to defend the rights of hundreds of libraries that are using Controlled Digital Lending.
The publishers’ lawsuit aims to stop the longstanding and widespread library practice of Controlled Digital Lending, and stop the hundreds of libraries using this system from providing their patrons with digital books. Through CDL, libraries lend a digitized version of the physical books they have acquired as long as the physical copy doesn’t circulate and the digital files are protected from redistribution. This is how Internet Archive’s lending library works, and has for more than nine years. Publishers are seeking to shut this library down, claiming copyright law does not allow it. Our response is simple: Copyright law does not stand in the way of libraries’ rights to own books, to digitize their books, and to lend those books to patrons in a controlled way.
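As a rough illustration of the “owned-to-loaned” constraint at the heart of CDL, here is a minimal, hypothetical sketch; it is not the Internet Archive’s actual lending code, just the one-to-one rule expressed in code.

```typescript
// Hypothetical sketch of Controlled Digital Lending's one-to-one rule:
// a title can have no more digital loans out than physical copies held,
// and those physical copies stay out of circulation while loaned digitally.
interface Title {
  isbn: string;
  ownedPhysicalCopies: number; // copies the library has acquired
  activeDigitalLoans: number;  // digital checkouts currently outstanding
}

function canLendDigitally(title: Title): boolean {
  return title.activeDigitalLoans < title.ownedPhysicalCopies;
}

function checkOut(title: Title): Title {
  if (!canLendDigitally(title)) {
    throw new Error("All owned copies are already on loan; join the waitlist.");
  }
  return { ...title, activeDigitalLoans: title.activeDigitalLoans + 1 };
}

function checkIn(title: Title): Title {
  return {
    ...title,
    activeDigitalLoans: Math.max(0, title.activeDigitalLoans - 1),
  };
}
```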
“The Authors Alliance has several thousand members around the world and we have endorsed the Controlled Digital Lending as a fair use,” stated Pamela Samuelson, Authors Alliance founder and Richard M. Sherman Distinguished Professor of Law at Berkeley Law. “It’s really tragic that at this time of pandemic that the publishers would try to basically cut off even access to a digital public library like the Internet Archive…I think that the idea that lending a book is illegal is just wrong.”
These publishers clearly intend this lawsuit to have a chilling effect on Controlled Digital Lending at a moment in time when it can benefit digital learners the most. For students and educators, the 2020 fall semester will be unlike any other in recent history. From K-12 schools to universities, many institutions have already announced they will keep campuses closed or severely limit access to communal spaces and materials such as books because of public health concerns. The conversation we must be having is: how will those students, instructors and researchers access information — from textbooks to primary sources? Unfortunately, four of the world’s largest book publishers seem intent on undermining both libraries’ missions and our attempts to keep educational systems operational during a global health crisis.
The publishers’ lawsuit does not stop at seeking to end the practice of Controlled Digital Lending. These publishers call for the destruction of the 1.5 million digital books that Internet Archive makes available to our patrons. This form of digital book burning is unprecedented and unfairly disadvantages people with print disabilities. For the blind, ebooks are a lifeline, yet less than one in ten exists in accessible formats. Since 2010, Internet Archive has made our lending library available to the blind and print disabled community, in addition to sighted users. If the publishers are successful with their lawsuit, more than a million of those books would be deleted from the Internet’s digital shelves forever.
I call on the executives at Hachette, HarperCollins, Wiley, and Penguin Random House to come together with us to help solve the pressing challenges to access to knowledge during this pandemic. Please drop this needless lawsuit.
The Trump administration is to pull federal paramilitaries out of Portland starting on Thursday in a major reversal after weeks of escalating protests and violence.
Oregon’s governor, Kate Brown, said she agreed to the pullout in talks with Vice-President Mike Pence.
Brown said state and city police officers will replace Department of Homeland Security agents in guarding the federal courthouse that has become the flashpoint for the protests.
“These federal officers have acted as an occupying force, refused accountability, and brought violence and strife to our community,” the governor said. The head of the US homeland security department said agents would stay near the courthouse until they were sure the plan was working.
Donald Trump said the pullout will not begin until the courthouse is protected. “We’re not leaving until they secure their city. We told the governor, we told the mayor: secure your city,” said the president.
But the announcement is a significant retreat by the administration after Trump sent federal forces to Portland at the beginning of July to end months of Black Lives Matter protests he described as having dragged the city into anarchy.
Instead of quelling the unrest, the arrival of paramilitaries fuelled some of the biggest demonstrations since daily protests following the killing of George Floyd, a Black American, by a white police officer in Minneapolis in May.
The situation escalated particularly after agents in camouflage were filmed snatching protesters from the streets in unmarked vans.
Far from imposing order, the federal force, drawn from the border patrol, immigration service and US Marshals, was largely trapped inside the federal courthouse they were ostensibly there to protect, emerging each night to fire waves of teargas, baton rounds and stun grenades in street battles with the protesters. But the demonstrators retained ultimate control of the streets.
Anger at the presence of the paramilitaries brought thousands of people out each night and acted as a lightning rod for broader discontent with Trump, including over his chaotic and divisive handling of the coronavirus epidemic which has killed nearly 150,000 Americans and shows no signs of abating.
The Australian government has filed its second lawsuit against Google in less than a year over privacy concerns, this time alleging the tech giant misled Australian consumers in an attempt to gather information for targeted ads. The Australian Competition and Consumer Commission (ACCC), the country’s consumer watchdog, says Google didn’t obtain explicit consent from consumers to collect personal data, according to a statement.
The ACCC cites a 2016 change to Google’s policy in which the company began collecting data about Google account holders’ activity on non-Google sites. Previously, this data was collected by ad-serving technology company DoubleClick and was stored separately, not linked to users’ Google accounts. Google acquired DoubleClick in 2008, and the 2016 change to Google’s policy meant Google and DoubleClick’s data on consumers were combined. Google then used the beefed-up data to sell even more targeted advertising.
From June 2016 to December 2018, Google account holders were met with a pop-up that presented “optional features” for their accounts relating to how the company collected their data. Consumers could click “I agree,” and Google would begin collecting a “wide range of personally identifiable information” from them, according to the ACCC. The lawsuit contends that the pop-up didn’t adequately explain what consumers were agreeing to.
“The ACCC considers that consumers effectively pay for Google’s services with their data, so this change introduced by Google increased the ‘price’ of Google’s services, without consumers’ knowledge,” said ACCC Chair Rod Sims. Had more consumers sufficiently understood Google’s change in policy, many may not have consented to it, according to the ACCC.
Google told the Associated Press it disagrees with the ACCC’s allegations, and says Google account holders had been asked to “consent via prominent and easy-to-understand notifications.” It’s unclear what penalty the ACCC is seeking with the lawsuit.
Last October, the ACCC sued Google claiming the company misled Android users about the ability to opt out of location tracking on phones and tablets. That case is headed to mediation next week, according to a February Computer World article.
you can get this functionality by downloading and installing a simple app from the Google Play Store: Access Dots. It’s free, it’s easy, and it helps you up your Android’s security game. I would almost call it a must-install for anyone, because it’s as unobtrusive as it is helpful.
Download and launch the app, and you’ll see one simple setting you have to enable. That’s all you have to do to fire up Access Dots’ basic functionality.
Well, that and tapping on the new “Access Dots” listing in your Accessibility settings, and then enabling the service there, too.
Head back to your Android’s Home screen and…you won’t see anything. Zilch. That’s the point. Pull up your Camera app, however, and you’ll see a big green icon appear in the upper-right corner of your device. Tap on your Google Assistant’s microphone icon, and you’ll see an orange dot; the same as what iOS 14 users see.
If you don’t like these colors, you can change them to whatever you want in Access Dots’ settings. You can even change the location of said dot, as well as its size. Tap on the little “History” icon in Access Dots’ main UI—you can’t miss it—and you’ll even be able to browse a log of which apps requested camera or microphone access and for how long they used it:
Though I’m not a huge fan of how many ads litter the Access Dots app, I respect someone’s need to make a little cash. You only see them when you launch the app. Otherwise, all you’ll see on your phone are those dots. That’s not a terrible trade-off, I’d say, given how much this simple security app can do.
A handful of Chrome users have sued Google, accusing the browser maker of collecting personal information despite their decision not to sync data stored in Chrome with a Google Account.
The lawsuit [PDF], filed on Monday in a US federal district court in San Jose, California, claimed Google promises not to collect personal information from Chrome users who choose not to sync their browser data with a Google Account but does so anyway.
“Google intentionally and unlawfully causes Chrome to record and send users’ personal information to Google regardless of whether a user elects to Sync or even has a Google account,” the complaint stated.
Filed on behalf of “unsynced” plaintiffs Patrick Calhoun, Elaine Crespo, Hadiyah Jackson and Claudia Kindler – all said to have stopped using Chrome and to wish to return to it, rather than use a different browser, once Google stops tracking unsynced users – the lawsuit cited the Chrome Privacy Notice.
Since 2016, that notice has promised, “You don’t need to provide any personal information to use Chrome.” And since 2019, it has said, “the personal information that Chrome stores won’t be sent to Google unless you choose to store that data in your Google Account by turning on sync,” with earlier versions offering variants on that wording.
Nonetheless, whether or not account synchronization has been enabled, it’s claimed, Google uses Chrome to collect IP addresses linked to user agent data, identifying cookies, unique browser identifiers called X-Client Data Headers, and browsing history. And it does so supposedly in violation of federal wiretap laws and state statutes.
Google then links that information with individuals and their devices, it’s claimed, through practices like cookie syncing, where cookies set in a third-party context get associated with cookies set in a first-party context.
“Cookie synching allows cooperating websites to learn each other’s cookie identification numbers for the same user,” the complaint says. “Once the cookie synching operation is complete, the two websites exchange information that they have collected and hold about a user, further making these cookies ‘Personal Information.'”
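Cookie syncing itself is a simple mechanism: one ad platform redirects the browser to a partner’s “sync” endpoint with its own user ID in the URL, and the partner reads its own cookie for that browser and stores the pairing. A minimal, hypothetical sketch of the receiving partner’s side (illustrative names and URLs, not Google’s actual implementation):

```typescript
// Hypothetical sketch of a cookie-sync endpoint on "partner B".
// Partner A redirects the browser to:
//   https://b.example/sync?partner=A&partner_uid=abc123
// Partner B reads its own cookie for the same browser and records the mapping,
// so both parties can join the profiles they hold on that user.
const idMappings = new Map<string, string>(); // partner's uid -> our uid

function handleSyncRequest(url: URL, ourCookieUid: string): void {
  const partnerUid = url.searchParams.get("partner_uid");
  if (!partnerUid) return;
  idMappings.set(partnerUid, ourCookieUid);
}

// Example: the browser carrying our cookie "xyz789" hits the sync URL.
handleSyncRequest(
  new URL("https://b.example/sync?partner=A&partner_uid=abc123"),
  "xyz789"
);
console.log(idMappings.get("abc123")); // "xyz789"
```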
The litigants pointed to Google’s plan to phase out third-party cookies, and noted Google doesn’t need cookies due to the ability of its X-Client-Data Header to uniquely identify people.
Twitter contractors with high-level administrative access to accounts regularly abused their privileges to spy on celebrities including Beyoncé, even going so far as to approximate their movements via internet protocol addresses, according to a report by Bloomberg.
Over 1,500 workers and contractors at Twitter who handle internal support requests and manage user accounts have high-level privileges that enable them to override user security settings and reset their accounts via Twitter’s backend, as well as view certain details of accounts like IP addresses, phone numbers, and email addresses.
[…]
Two of the former Twitter employees told Bloomberg that projects such as enhancing security of “the system that houses Twitter’s backup files or enhancing oversight of the system used to monitor contractor activity were, at times, shelved for engineering products designed to enhance revenue.” In the meantime, some of those with access (some of whom were contractors with Cognizant at up to six separate work sites) abused it to view details including IP addresses of users. Executives didn’t prioritize policing the internal support team, two of the former employees told Bloomberg, and at times Twitter security allegedly had trouble tracking misconduct due to sheer volume.
A system was in place to create access logs, but it could be fooled by simply creating bullshit support tickets that made the spying appear legitimate; two of the former employees told Bloomberg that from 2017 to 2018 members of the internal support team “made a kind of game out of” the workaround. The security risks inherent to granting access to so many people were reportedly brought up to the company’s board repeatedly from 2015-2019, but little changed.
This had consequences beyond the most recent hack. Last year, the Department of Justice announced charges against two former employees (a U.S. national and a Saudi citizen) that it accused of espionage on behalf of an individual close to Saudi Crown Prince Mohammed bin Salman. The DOJ alleged that the intent of the operation was to gain access to private information on political dissidents.
The EU has demanded that Google make major concessions relating to its $2.1 billion acquisition of fitness-tracking company Fitbit if the deal is to be allowed to proceed imminently, according to people with direct knowledge of the discussions.
Since it was announced last November, the acquisition has faced steep opposition from consumer groups and regulators, who have raised concerns over the effect of Google’s access to Fitbit’s health data on competition.
EU regulators now want the company to pledge that it will not use that information to “further enhance its search advantage” and that it will grant third parties equal access to it, these people said.
The move comes days after the EU regulators suffered a major blow in Luxembourg, losing a landmark case that would have forced Apple to pay back €14.3 billion in taxes to Ireland.
Brussels insiders said that a refusal by Google to comply with the new demands would probably result in a protracted investigation, adding that such a scenario could ultimately leave the EU at a disadvantage.
“It is like a poker game,” said a person following the case closely. “In a lengthy probe, the commission risks having fewer or no pledges and still having to clear the deal.”
They added that the discussions over the acquisition were “intense,” and there was no guarantee that any agreement between Brussels and Google would be reached.
Google had previously promised it would not use Fitbit’s health data to improve its own advertising, but according to Brussels insiders, the commitment was not sufficient to assuage the EU’s concerns nor those of US regulators also examining the deal.
Apple’s iOS 14 beta has proven surprisingly handy at sussing out what apps are snooping on your phone’s data. It ratted out LinkedIn, Reddit, and TikTok for secretly copying clipboard content earlier this month, and now Instagram’s in hot water after several users reported that their camera’s “in use” indicator stays on even when they’re just scrolling through their Instagram feed.
According to reports shared on social media by users with the iOS 14 beta installed, the green “camera on” indicator would pop up when they used the app even when they weren’t taking photos or recording videos. If this sounds like deja vu, that’s because Instagram’s parent company, Facebook, had to fix a similar issue with its iOS app last year when users found their device’s camera would quietly activate in the background without their permission while using Facebook.
In an interview with the Verge, an Instagram spokesperson called this issue a bug that the company’s currently working to patch.
[…]
Even though iOS 14 is still in beta mode and its privacy features aren’t yet available to the general public, it’s already raised plenty of red flags about apps snooping on your data. Though TikTok, LinkedIn, and Reddit may have been the most high-profile examples, researchers Talal Haj Bakry and Tommy Mysk found more than 50 iOS apps quietly accessing users’ clipboards as well. And while there are certainly more malicious breaches of privacy, these kinds of discoveries are a worrying reminder about how much we risk every time we go online.
Facebook has agreed to pay a total of $650 million in a landmark class action lawsuit over the company’s unauthorized use of facial recognition, a new court filing shows.
The filing represents a revised settlement that increases the total payout by $100 million and comes after a federal judge balked at the original proposal on the grounds it did not adequately punish Facebook.
The settlement covers any Facebook user in Illinois whose picture appeared on the site after 2011. According to the new document, those users can each expect to receive between $200 and $400 depending on how many people file a claim.
The case represents one of the biggest payouts for privacy violations to date, and contrasts sharply with other settlements such as that for the notorious data breach at Equifax—for which victims are expected to receive almost nothing.
The Facebook lawsuit came about as a result of a unique state law in Illinois, which obliges companies to get permission before using facial recognition technology on their customers.
The law has ensnared not just Facebook, but also the likes of Google and photo service Shutterfly. The companies had insisted in court that the law did not apply to their activities, and lobbied the Illinois legislature to rule they were exempt, but these efforts fell short.
The final Facebook settlement is likely to be approved later this year, meaning Illinois residents will be poised to collect a payout in 2021.
The judge overseeing the settlement rejected the initial proposal in June on the grounds that the Illinois law provides penalties of up to $5,000 per violation, meaning Facebook could have been obliged to pay $47 billion—an amount far exceeding what the company agreed to pay under the settlement.
“We are focused on settling as it is in the best interest of our community and our shareholders to move past this matter,” said a Facebook spokesperson.
Edelson PC, the law firm representing the plaintiffs, declined to comment on the revised deal.
Amazon claims it reviews the software created by third-party developers for its Alexa voice assistant platform, yet US academics were able to create more than 200 policy-violating Alexa Skills and get them certified.
In a paper [PDF] presented at the US Federal Trade Commission’s PrivacyCon 2020 event this week, Clemson University researchers Long Cheng, Christin Wilson, Song Liao, Jeffrey Alan Young, Daniel Dong, and Hongxin Hu describe the ineffectiveness of Amazon’s Skills approval process.
The researchers have also set up a website to present their findings.
Like Android and iOS apps, Alexa Skills have to be submitted for review before they’re available to be used with Amazon’s Alexa service. Also like Android and iOS, Amazon’s review process sometimes misses rule-breaking code.
In the researchers’ test, sometimes was every time: The e-commerce giant’s review system granted approval for every one of 234 rule-flouting Skills submitted over a 12-month period.
“Surprisingly, the certification process is not implemented in a proper and effective manner, as opposed to what is claimed that ‘policy-violating skills will be rejected or suspended,'” the paper says. “Second, vulnerable skills exist in Amazon’s skills store, and thus users (children, in particular) are at risk when using [voice assistant] services.”
Amazon disputes some of the findings and suggests that the way the research was done skewed the results by removing rule-breaking Skills after certification, but before other systems like post-certification audits might have caught the offending voice assistant code.
The devil is in the details
Alexa hardware has been hijacked by security researchers for eavesdropping and the software on these devices poses similar security risks, but the research paper concerns itself specifically with content in Alexa Skills that violates Amazon’s rules.
Alexa content prohibitions include limitations on activities like collecting information from children, collecting health information, sexually explicit content, descriptions of graphic violence, self-harm instructions, references to Nazis or hate symbols, hate speech, the promotion of drugs, terrorism, or other illegal activities, and so on.
Getting around these rules involved tactics like adding a counter to Skill code, so the app only starts spewing hate speech after several sessions. The paper cites a range of problems with the way Amazon reviews Skills, including inconsistencies where rejected content gets accepted after resubmission, vetting tools that can’t recognize cloned code submitted by multiple developer accounts, excessive trust in developers, and negligence in spotting data harvesting even when the violations are made obvious.
Amazon also does not require developers to re-certify their Skills if the backend code – run on developers’ servers – changes. It’s thus possible for Skills to turn malicious if the developer alters the backend code or an attacker compromises a well-intentioned developer’s server.
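For a sense of how simple the counter trick described above is, here is a hypothetical, heavily simplified sketch of a Skill backend that behaves innocuously during its first few sessions and only changes its response later. It is not the researchers’ code and does not use the real Alexa Skills Kit API; it just shows the gating pattern, and why certifying a snapshot of backend behaviour is not enough (the developer can change this logic at any time without re-review).

```typescript
// Hypothetical sketch of the "counter" gating pattern the researchers describe:
// the backend counts sessions per user and only switches behaviour once the
// certification review (which only ever sees the first few sessions) is past.
const sessionCounts = new Map<string, number>(); // userId -> sessions seen

const BENIGN_SESSIONS = 5; // stay harmless long enough to pass review

function handleLaunchRequest(userId: string): string {
  const count = (sessionCounts.get(userId) ?? 0) + 1;
  sessionCounts.set(userId, count);

  if (count <= BENIGN_SESSIONS) {
    return "Welcome back! Here is today's harmless trivia fact.";
  }
  // After the threshold, the backend is free to return policy-violating
  // content or prompt for personal data; nothing forces re-certification.
  return "[policy-violating or data-harvesting response would go here]";
}
```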
As part of the project, the researchers also examined 825 published Skills for kids that either had a privacy policy or a negative review. Among these, 52 had policy violations. Negative comments by users mention unexpected advertisements, inappropriate language, and efforts to collect personal information.
Georgia Tech researcher Mark Riedl didn’t expect that his machine learning model “Weird A.I. Yankovic,” which generates new rhyming lyrics for existing songs, would cause any trouble. But it did.
On May 15, Riedl posted an AI-generated lyric video featuring the instrumental to Michael Jackson’s “Beat It.” It was taken down on July 14, Riedl tweeted, after Twitter received a Digital Millennium Copyright Act takedown notice for copyright infringement from the International Federation of the Phonographic Industry, which represents major and independent record companies.
“I am fairly convinced that my videos fall under fair use,” Riedl told Motherboard of his AI creation, which is obviously inspired by Weird Al’s parodies. Riedl said his other AI-generated lyric videos posted to Twitter have not been taken down.
Riedl has contested the takedown with Twitter but has not received a response. Twitter also did not respond to Motherboard’s request for comment.
The incident raises the question of what role machine learning plays when it comes to the already nuanced and complicated rules of fair use, which allows for the use of a copyrighted work in certain circumstances, including educational uses and as part of a “transformative” work. Fair use also protects parody in some circumstances.
Riedl, whose research focuses on the study of artificial intelligence and storytelling for entertainment, says the model was created as a personal project and outside his role at Georgia Tech. “Weird A.I. Yankovic generates alternative lyrics that match the rhyme and syllable schemes of existing songs. These alternative lyrics can then be sung to the original tune,” Riedl said. “Rhymes are chosen, and two neural networks, GPT-2 and XLNET, are then used to generate each line, word by word.”
Oddly enough, game publishers seem to be able to contest DMCA claims on YouTube in 20 minutes when they are at a convention. It’s like it’s not being applied fairly at all…
It wouldn’t be a virtual event without a few technical difficulties. Though I can’t imagine the media giants showcasing at San Diego Comic-Con’s online event were worried about copyright violations affecting their panels. Considering, you know, they’re the ones that own the copyright.
Of course, that’s exactly what happened.
On Thursday, ViacomCBS livestreamed an hour-long panel for this year’s virtual SDCC to showcase properties in its ever-expansive Star Trek universe such as Picard, Discovery, and the upcoming Star Trek: Lower Decks. The stream briefly went dark, however, after YouTube’s copyright bots flagged the stream and replaced it with a warning that read: “Video unavailable: This video contains content from CBS CID, who has blocked it on copyright grounds.”
The hiccup occurred as the cast and producers of Discovery performed an “enhanced” read-through of the show’s season 2 finale accompanied by sound effects and on-screen storyboards. Evidently, the video sounded enough like the real deal to trigger YouTube’s software, even if it was obvious from looking at the stream that it wasn’t pirated content.
It only took about 20 minutes for the feed to be restored, but the irony of CBS’s own panel running afoul of its copyright (even accidentally) was too good for audiences to gloss over. As noted by io9’s Beth Elderkin, a later Cartoon Network panel livestream was similarly pulled offline over a copyright claim from its parent company, Turner Broadcasting.
Mozilla says it’s working on fixing a bug in Firefox for Android that keeps the smartphone camera active even after users have moved the browser in the background or the phone screen was locked.
A Mozilla spokesperson told ZDNet in an email this week that a fix is expected later this year, in October.
The bug was first spotted and reported to Mozilla a year ago, in July 2019, by an employee of video delivery platform Appear TV.
The bug manifests when users choose to stream video from a website loaded in Firefox instead of from a native app.
Mobile users often choose to stream from a mobile browser for privacy reasons, such as not wanting to install an intrusive app and grant it unfettered access to their smartphone’s data. Mobile browsers are better because they prevent websites from accessing smartphone data, keeping their data collection to a minimum.
The Appear TV developer noticed that Firefox video streams kept going, even in situations when they should have normally stopped.
While this raises issues with streams continuing to consume the user’s bandwidth, the bug was also deemed a major privacy issue as Firefox would continue to stream from the user’s device in situations where the user expected privacy by switching to another app or locking the device.
“From our analysis, a website is allowed to retain access to your camera or microphone whilst you’re using other apps, or even if the phone is locked,” a spokesperson for Traced, a privacy app, told ZDNet, after alerting us to the issue.
“While there are times you might want the microphone or video to keep working in the background, your camera should never record you when your phone is locked,” Traced added.
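Until the fix ships, the most robust workaround is on the site side: a streaming page can release the camera itself when it loses visibility. A minimal sketch using the standard web media APIs (this is general web code, not Appear TV’s or Mozilla’s):

```typescript
// Minimal sketch: release the camera when the page is hidden (e.g. moved to
// the background), and re-request it when the user returns.
let stream: MediaStream | null = null;

async function startCamera(): Promise<void> {
  stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.querySelector("video");
  if (video instanceof HTMLVideoElement) video.srcObject = stream;
}

function stopCamera(): void {
  stream?.getTracks().forEach((track) => track.stop()); // actually turns the camera off
  stream = null;
}

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    stopCamera();
  } else {
    startCamera().catch(console.error);
  }
});
```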
Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows and Android devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.
The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is built on modern, lean technology: the WireGuard protocol’s 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.
You will also see an easy-to-use and simple interface for anyone who is new to VPN, or those who want to set it and get onto the web.
With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.
The European Union’s top court ruled Thursday that an agreement that allows big tech companies to transfer data to the United States is invalid, and that national regulators need to take tougher action to protect the privacy of users’ data.
The ruling does not mean an immediate halt to all data transfers outside the EU, as there is another legal mechanism that some companies can use. But it means that the scrutiny over data transfers will be ramped up and that the EU and U.S. may have to find a new system that guarantees that Europeans’ data is afforded the same privacy protection in the U.S. as it is in the EU.
The case began after former U.S. National Security Agency contractor Edward Snowden revealed in 2013 that the American government was snooping on people’s online data and communications. The revelations included detail on how Facebook gave U.S. security agencies access to the personal data of Europeans.
Austrian activist and law student Max Schrems that year filed a complaint against Facebook, which has its EU base in Ireland, arguing that personal data should not be sent to the U.S., as many companies do, because the data protection is not as strong as in Europe. The EU has some of the toughest data privacy rules under a system known as GDPR.
Google records what people are doing on hundreds of thousands of mobile apps even when they follow the company’s recommended settings for stopping such monitoring, a lawsuit seeking class action status alleged on Tuesday.
The data privacy lawsuit is the second filed in as many months against Google by the law firm Boies Schiller Flexner on behalf of a handful of individual consumers.
[…]
The new complaint in a U.S. district court in San Jose accuses Google of violating federal wiretap law and California privacy law by logging what users are looking at in news, ride-hailing and other types of apps despite them having turned off “Web & App Activity” tracking in their Google account settings.
The lawsuit alleges the data collection happens through Google’s Firebase, a set of software popular among app makers for storing data, delivering notifications and ads, and tracking glitches and clicks. Firebase typically operates inside apps invisibly to consumers.
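For context on how invisibly this happens, here is a minimal sketch of an app logging an event through Firebase’s analytics SDK. This uses the web flavour of the SDK for brevity (the lawsuit concerns mobile apps, where the equivalent calls ship inside the app), and the config values are placeholders:

```typescript
import { initializeApp } from "firebase/app";
import { getAnalytics, logEvent } from "firebase/analytics";

// Placeholder config; a real app ships its own project keys.
const app = initializeApp({
  apiKey: "placeholder",
  projectId: "placeholder",
  appId: "placeholder",
  measurementId: "placeholder",
});
const analytics = getAnalytics(app);

// A single line like this, sprinkled through an app's screens and buttons,
// is all it takes to report user activity back to Google's servers.
logEvent(analytics, "screen_view", { firebase_screen: "article_detail" });
```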
“Even when consumers follow Google’s own instructions and turn off ‘Web & App Activity’ tracking on their ‘Privacy Controls,’ Google nevertheless continues to intercept consumers’ app usage and app browsing communications and personal information,” the lawsuit contends.
Google uses some Firebase data to improve its products and personalize ads and other content for consumers, according to the lawsuit.
Reuters reported in March that U.S. antitrust investigators are looking into whether Google has unlawfully stifled competition in advertising and other businesses by effectively making Firebase unavoidable.
In its case last month, Boies Schiller Flexner accused Google of surreptitiously recording Chrome browser users’ activity even when they activated what Google calls Incognito mode. Google said it would fight the claim.
Most GDPR consent banner implementations are deliberately engineered to be difficult to use and are full of dark patterns that are illegal under the law itself.
I wanted to find out how many visitors would engage with a GDPR banner if it were implemented properly and how many would grant consent to their information being collected and shared.
[…]
If you implement a proper GDPR consent banner, the vast majority of visitors will most probably decline to give you consent: 91%, to be exact, out of 19,000 visitors in my study.
What’s a proper and legal implementation of a GDPR banner? (A minimal sketch follows the list.)
It’s a banner that doesn’t take up much space
It allows people to browse your site even while ignoring the banner
It allows visitors to say “no” just as easily as they can say “yes”
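As a rough illustration of those three points, here is a minimal, hypothetical sketch of such a banner in plain DOM code: it renders a small bar, never blocks the page, and gives “Decline” exactly the same prominence as “Accept”. It is a sketch of the idea, not a complete consent-management implementation.

```typescript
// Minimal, equal-prominence consent banner: small, ignorable, and "no" is as
// easy as "yes". Analytics only load after an explicit "Accept".
function showConsentBanner(onDecision: (consented: boolean) => void): void {
  const banner = document.createElement("div");
  banner.style.cssText =
    "position:fixed;bottom:0;left:0;right:0;padding:8px;background:#eee;" +
    "display:flex;gap:8px;align-items:center;justify-content:center;";
  banner.append("We'd like to collect anonymous usage data. Is that OK?");

  for (const [label, consented] of [["Accept", true], ["Decline", false]] as const) {
    const button = document.createElement("button");
    button.textContent = label; // identical styling: neither choice is nudged
    button.addEventListener("click", () => {
      banner.remove();
      onDecision(consented);
    });
    banner.append(button);
  }
  document.body.append(banner); // the rest of the page stays fully usable
}

showConsentBanner((consented) => {
  if (consented) {
    console.log("consent granted: loading analytics"); // load trackers only now
  } else {
    console.log("consent declined: no tracking");
  }
});
```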
I’ve seen a lot of people — including those who are supporting the publishers’ legal attack on the Internet Archive — insist that they “support libraries,” but that the Internet Archive’s Open Library and National Emergency Library are “not libraries.” First off, they’re wrong. But, more importantly, it’s good to see actual librarians now coming out in support of the Internet Archive as well. The Association of Research Libraries has put out a statement asking publishers to drop this counterproductive lawsuit, especially since the Internet Archive has shut down the National Emergency Library.
The Association of Research Libraries (ARL) urges an end to the lawsuit against the Internet Archive filed early this month by four major publishers in the United States District Court Southern District of New York, especially now that the National Emergency Library (NEL) has closed two weeks earlier than originally planned.
As the ARL points out, the Internet Archive has been an astounding “force for good” for the dissemination of knowledge and culture — and that includes introducing people to more books.
For nearly 25 years, the Internet Archive (IA) has been a force for good by capturing the world’s knowledge and providing barrier-free access for everyone, contributing services to higher education and the public, including the Wayback Machine that archives the World Wide Web, as well as a host of other services preserving software, audio files, special collections, and more. Over the past four weeks, IA’s Open Library has circulated more than 400,000 digital books without any user cost—including out-of-copyright works, university press titles, and recent works of academic interest—using controlled digital lending (CDL). CDL is a practice whereby libraries lend temporary digital copies of print books they own in a one-to-one ratio of “loaned to owned,” and where the print copy is removed from circulation while the digital copy is in use. CDL is a practice rooted in the fair use right of the US Copyright Act and recent judicial interpretations of that right. During the COVID-19 pandemic, many academic and research libraries have relied on CDL (including IA’s Open Library) to ensure academic and research continuity at a time when many physical collections have been inaccessible.
As ARL and our partner library associations acknowledge, many publishers (including some involved in the lawsuit) are contributing to academic continuity by opening more content during this crisis. As universities and libraries work to ensure scholars and students have the information they need, ARL looks forward to working with publishers to ensure open and equitable access to information. Continuing the litigation against IA for the purpose of recovering statutory damages and shuttering the Open Library would interfere with this shared mutual objective.
It would be nice if the publishers recognized this, but as we’ve said over and over again, these publishers would sue any library if libraries didn’t already exist. The fact that the Open Library looks just marginally different from a traditional library means they’re unlikely to let go of this stupid, counterproductive lawsuit.
As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought.
The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.
“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”
That which must not be said
Examples of words or word sequences that provide false triggers include
Alexa: “unacceptable,” “election,” and “a letter”
Google Home: “OK, cool,” and “Okay, who is reading”
Siri: “a city” and “hey jerry”
Microsoft Cortana: “Montana”
The two videos below show a GoT character saying “a letter” and a Modern Family character uttering “hey Jerry,” activating Alexa and Siri, respectively.
Accidental Trigger #1 – Alexa – Cloud
Accidental Trigger #3 – Hey Siri – Cloud
In both cases, the phrases activate the device locally, where algorithms analyze the phrases; after mistakenly concluding that these are likely a wake word, the devices then send the audio to remote servers where more robust checking mechanisms also mistake the words for wake terms. In other cases, the words or phrases trick only the local wake word detection but not algorithms in the cloud.
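The two-stage design the researchers describe, a permissive on-device detector followed by a stricter cloud-side check, is easy to sketch. This is a hypothetical illustration of the pipeline, not any vendor’s actual code:

```typescript
// Hypothetical two-stage wake-word pipeline, as described by the researchers:
// a cheap, permissive local detector decides whether to upload audio at all,
// and a stricter cloud model is supposed to catch the false alarms.
type AudioClip = Float32Array;

// Stage 1: on-device detector, tuned to err on the side of waking up.
function localWakeWordScore(clip: AudioClip): number {
  return Math.random(); // placeholder for a small on-device model
}

// Stage 2: heavier server-side check; audio has already left the device by now.
async function cloudConfirmsWakeWord(clip: AudioClip): Promise<boolean> {
  return Math.random() > 0.5; // placeholder for a request to the vendor's servers
}

const LOCAL_THRESHOLD = 0.4; // permissive, so phrases like "a letter" slip through

async function onAudioChunk(clip: AudioClip): Promise<void> {
  if (localWakeWordScore(clip) < LOCAL_THRESHOLD) return; // nothing leaves the device

  // Privacy-relevant moment: the clip is uploaded for the second check,
  // whether or not the cloud ultimately agrees it was a wake word.
  if (await cloudConfirmsWakeWord(clip)) {
    console.log("assistant activated; recording the request");
  } else {
    console.log("cloud rejected the trigger, but the audio was already sent");
  }
}
```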
Unacceptable privacy intrusion
When devices wake, the researchers said, they record a portion of what’s said and transmit it to the manufacturer. The audio may then be transcribed and checked by employees in an attempt to improve word recognition. The result: fragments of potentially private conversations can end up in the company logs.
The research paper, titled “Unacceptable, where is my privacy?,” is the product of Lea Schönherr, Maximilian Golla, Jan Wiele, Thorsten Eisenhofer, Dorothea Kolossa, and Thorsten Holz of Ruhr University Bochum and Max Planck Institute for Security and Privacy. In a brief write-up of the findings, they wrote:
Our setup was able to identify more than 1,000 sequences that incorrectly trigger smart speakers. For example, we found that depending on the pronunciation, «Alexa» reacts to the words “unacceptable” and “election,” while «Google» often triggers to “OK, cool.” «Siri» can be fooled by “a city,” «Cortana» by “Montana,” «Computer» by “Peter,” «Amazon» by “and the zone,” and «Echo» by “tobacco.” See videos with examples of such accidental triggers here.
In our paper, we analyze a diverse set of audio sources, explore gender and language biases, and measure the reproducibility of the identified triggers. To better understand accidental triggers, we describe a method to craft them artificially. By reverse-engineering the communication channel of an Amazon Echo, we are able to provide novel insights on how commercial companies deal with such problematic triggers in practice. Finally, we analyze the privacy implications of accidental triggers and discuss potential mechanisms to improve the privacy of smart speakers.
The researchers analyzed voice assistants from Amazon, Apple, Google, Microsoft, and Deutsche Telekom, as well as three Chinese models by Xiaomi, Baidu, and Tencent. Results published on Tuesday focused on the first four. Representatives from Apple, Google, and Microsoft didn’t immediately respond to a request for comment.
The full paper hasn’t yet been published, and the researchers declined to provide a copy ahead of schedule. The general findings, however, already provide further evidence that voice assistants can intrude on users’ privacy even when people don’t think their devices are listening. For those concerned about the issue, it may make sense to keep voice assistants unplugged, turned off, or blocked from listening except when needed—or to forgo using them at all.
How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.
The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.
It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.
In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has “made significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records or content.”
“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.
Transparency reports offer rare insights into the number of demands or requests a company gets from the government for user data. These reports are not mandatory, but are important to understand the scale and scope of government surveillance.
Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the accounts of two U.S.-based activists and one Hong Kong-based activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.