The Linkielist

Linking ideas with the world


Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant

Alias is a teachable “parasite” that gives you more control over your smart assistant’s customization and privacy. Through a simple app, you can train Alias to react to a self-chosen wake-word; once trained, Alias takes control over your home assistant by activating it for you. When you’re not using it, Alias makes sure the assistant is paralyzed and unable to listen to your conversations.

When placed on top of your home assistant, Alias uses two small speakers to interrupt the assistant’s listening with a constant low noise that feeds directly into the microphone of the assistant. When Alias recognizes your user-created wake-word (e.g., “Hey Alias” or “Jarvis” or whatever), it stops the noise and quietly activates the assistant by speaking the original wake-word (e.g., “Alexa” or “Hey Google”).

From here the assistant can be used as normal. Your wake-word is detected by a small neural network program that runs locally on Alias, so the sounds of your home are not uploaded to anyone’s cloud.
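The jam-then-wake behavior described above can be sketched as a small state machine. Everything below is hypothetical (Alias's actual firmware isn't shown in the article): the class, its method names, and the wake words are stand-ins for the real detector and speaker controls.

```python
# Hypothetical sketch of the Alias "parasite" control loop. The real
# device runs a local neural-network wake-word detector; here we assume
# some other component calls on_audio() with each recognized word.

class AliasParasite:
    def __init__(self, custom_wake_word, assistant_wake_word):
        self.custom_wake_word = custom_wake_word        # e.g. "jarvis"
        self.assistant_wake_word = assistant_wake_word  # e.g. "alexa"
        self.jamming = True  # noise feeds the assistant's mics by default

    def on_audio(self, recognized_word):
        """Called with the output of the local wake-word detector."""
        if self.jamming and recognized_word == self.custom_wake_word:
            self.jamming = False             # stop the masking noise...
            return self.assistant_wake_word  # ...and speak the real wake word
        return None  # anything else: keep the assistant deaf

    def on_interaction_done(self):
        self.jamming = True  # resume jamming once the assistant has replied
```

The point of the design is that nothing leaves the device until the user's own wake word is heard locally; only then is the assistant un-jammed and activated.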

Source: Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant | Make:

Top Streamers Are Leaving Twitch Amidst Big Money And Shady Deals

Let’s say you’re an up-and-coming streamer. You’ve done it for a while and you make decent money, although you’re no Tyler “Ninja” Blevins. But you’re on your way there, or so you hope. A while back, you got the opportunity to sign with an agency that promised to help you set up deals to advertise brands on your streams. Today, that’s finally paying off. The agency calls you to offer a $10,000 deal. You don’t think twice. That’s a handsome chunk of change. Time to pop a bottle of champagne and celebrate. There’s just one problem. Turns out the agency pocketed $90,000.

The above hypothetical scenario is based on a true story told by former CEO of esports organization CLG and current CMO of streaming company N3rdfusion Devin Nash, who opted to keep the streamer and agency’s identities anonymous. According to Nash’s story, which echoes others that Kotaku heard in the course of reporting, the initial deal was $100,000 for a single streamer to represent a big brand. But the agency was in full control of negotiations, so it just conveniently omitted the part about the remaining $90,000, because hey, $10,000 sounds pretty good in isolation, right? So the agency drew up a limited partnership agreement, and that was that. Nash went on to tell Kotaku that the streamer didn’t even get to keep the full $10,000.

“[The agency] also took the ten percent they had contractually,” Nash said in a Discord voice call. “So they took $1,000 and also pocketed the $90,000. They made $91,000, the streamer made $9,000, and nobody was the wiser.”

Streaming is big business now, and that means big money. But it also means that the world of streaming is transforming, and streamers are having to learn on the fly how to do more than just entertain. They’re having to strike deals with companies, agencies, and now entire platforms. Toward the end of last year, the deals grew bigger than ever, with blue-haired Fortnite megastar Tyler “Ninja” Blevins jumping ship from Twitch to Microsoft-owned streaming platform Mixer in a high-profile exclusivity deal that was soon followed by countless others. The business of video game streaming is rapidly evolving into something that echoes Hollywood, with agents and managers negotiating on behalf of streamers who are increasingly treated like actors or TV shows, and who wind up on platforms that stand in for more traditional networks.

Source: Top Streamers Are Leaving Twitch Amidst Big Money And Shady Deals

There is much, much more to this article under the link.

NSF’s newest solar telescope produces first images, most detailed images of the sun

These first images from NSF’s Inouye Solar Telescope show a close-up view of the sun’s surface, which can provide important detail for scientists. The image shows a pattern of turbulent “boiling” plasma that covers the entire sun. The cell-like structures—each about the size of Texas—are the signature of violent motions that transport heat from the inside of the sun to its surface. That hot solar plasma rises in the bright centers of “cells,” cools off and then sinks below the surface in dark lanes in a process known as convection. (See video available with this news release.)

Solar magnetic fields constantly get twisted and tangled by the motions of the sun’s plasma. Twisted magnetic fields can lead to solar storms that can negatively affect our technology-dependent modern lifestyles. During 2017’s Hurricane Irma, the National Oceanic and Atmospheric Administration reported that a simultaneous space weather event brought down radio communications used by first responders, aviation and maritime channels for eight hours on the day the hurricane made landfall.

Finally resolving these tiny magnetic features is central to what makes the Inouye Solar Telescope unique. It can measure and characterize the sun’s magnetic field in more detail than ever seen before and determine the causes of potentially harmful solar activity.

“It’s all about the magnetic field,” said Thomas Rimmele, director of the Inouye Solar Telescope. “To unravel the sun’s biggest mysteries, we have to not only be able to clearly see these tiny structures from 93 million miles away but very precisely measure their strength and direction near the surface and trace the field as it extends out into the million-degree corona, the outer atmosphere of the sun.”

Better understanding the origins of potential disasters will enable governments and utilities to better prepare for inevitable future space weather events. It is expected that notification of potential impacts could occur earlier—as much as 48 hours ahead of time instead of the current standard, which is about 48 minutes. This would allow for more time to secure power grids and critical infrastructure and to put satellites into safe mode.

The Inouye Solar Telescope combines a 13-foot (4-meter) mirror—the world’s largest for a solar telescope—with unparalleled viewing conditions at the 10,000-foot Haleakalā summit.

Focusing 13 kilowatts of solar power generates enormous amounts of heat—heat that must be contained or removed. A specialized cooling system provides crucial heat protection for the telescope and its optics. More than seven miles of piping distribute coolant throughout the observatory, partially chilled by ice created on site during the night.

The Daniel K. Inouye Solar Telescope has produced the highest resolution observations of the sun’s surface ever taken. In this movie, taken at a wavelength of 705 nanometers (nm) over a period of 10 minutes, we can see features as small as 30 km (18 miles) in size for the first time ever. The movie shows the turbulent “boiling” plasma that covers the entire sun. Credit: NSO/AURA/NSF

The dome enclosing the telescope is covered by thin cooling plates that stabilize the temperature around the telescope, helped by shutters within the dome that provide shade and air circulation. The “heat-stop” (a high-tech, liquid-cooled metal donut) blocks most of the sunlight’s energy from the main mirror, allowing scientists to study specific regions of the sun with unparalleled clarity.

[…]

“This image is just the beginning,” said David Boboltz, program director in NSF’s division of astronomical sciences, who oversees the facility’s construction and operations. “Over the next six months, the Inouye telescope’s team of scientists, engineers and technicians will continue testing and commissioning the telescope to make it ready for use by the international solar scientific community. The Inouye Solar Telescope will collect more information about our sun during the first 5 years of its lifetime than all the solar data gathered since Galileo first pointed a telescope at the sun in 1612.”

Source: NSF’s newest solar telescope produces first images, most detailed images of the sun

Don’t use online DNA tests! If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage – and so will your family’s

When it comes to ways to learn about your DNA, Promethease’s service seemed like one of the safest. They promised anonymity, and to delete your report after 45 days. But now that MyHeritage has bought the company, users are being notified that their DNA data is now on MyHeritage. Wait, what?

It turns out that even though Promethease deleted reports as promised after 45 days, if you created an account, the service held onto your raw data. You now have a MyHeritage account, which you can delete if you like. Check your email. That’s how I found out about mine.

What Promethease does

A while back, I downloaded my raw data from 23andme and gave it to Promethease to find out what interesting things might be in my DNA. Ever since 23andme stopped providing detailed health-related results in 2013, Promethease was a sensible alternative. They used to charge $5 (now up to $12, but that’s still a steal) and they didn’t attempt to explain your results to you. Instead, you could just see what SNPs you had—those are spots where your DNA differs from other people’s—and read on SNPedia, a sort of genetics Wikipedia, about what those SNPs might mean.

So this means Promethease had access to the raw file you gave it (which you would have gotten from 23andme, Ancestry, or another service), and to the report of SNPs that it created for you. You had the option of paying your fee, downloading your report, and never dealing with the company again; or you could create an account so that you could “regenerate” your report in the future without having to pay again. That means they stored your raw DNA file.
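A Promethease-style report is essentially a join of the rsids in your raw file against an annotation table like SNPedia. Raw files from 23andMe are tab-separated text, one SNP per line (rsid, chromosome, position, genotype), with `#` comment lines. A minimal sketch, with made-up annotation text (the real SNPedia content and Promethease pipeline are of course far richer):

```python
import io

def parse_raw_dna(fileobj):
    """Parse a 23andMe-style raw data file: tab-separated
    rsid / chromosome / position / genotype; '#' lines are comments."""
    snps = {}
    for line in fileobj:
        if line.startswith("#") or not line.strip():
            continue
        rsid, chromosome, position, genotype = line.rstrip("\n").split("\t")
        snps[rsid] = {"chrom": chromosome, "pos": int(position),
                      "genotype": genotype}
    return snps

def annotate(snps, annotations):
    """Join the user's SNPs against a SNPedia-style annotation table."""
    return {rsid: (snps[rsid]["genotype"], annotations[rsid])
            for rsid in snps if rsid in annotations}

raw = io.StringIO(
    "# rsid\tchromosome\tposition\tgenotype\n"
    "rs4988235\t2\t136608646\tAA\n"
    "rs12913832\t15\t28365618\tGG\n"
)
# Made-up annotation snippet, for illustration only:
notes = {"rs4988235": "associated with lactase persistence"}
report = annotate(parse_raw_dna(raw), notes)
```

Which is exactly why holding onto the raw file matters: the report is disposable, but the raw genotypes can be re-joined against anything, by anyone who acquires them.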

Source: If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage Now

Because your DNA contains information about your whole family, by uploading your DNA you also upload their DNA, making it a whole lot easier to de-anonymise their DNA. It’s a bit like uploading a picture of your family to Facebook with the public settings on and then tagging them, even though the other family members on your picture aren’t on Facebook.

UN didn’t patch SharePoint, got mega-hacked, covered it up, kept most staff in the dark, finally forced to admit it, accident waiting to happen

The United Nations’ European headquarters in Geneva and Vienna were hacked last summer, putting thousands of staff records at miscreants’ fingertips. Incredibly, the organization decided to cover it up without informing those affected nor the public.

[…]

A senior IT official dubbed the attack a “major meltdown,” in which personnel records – as well as contract data covering thousands of individuals and organizations – were accessed. The hackers were able to get into user-management systems and past firewalls, eventually compromising over 40 servers, with the vast majority at the European headquarters in Geneva.

But despite the size and extent of the hack, the UN decided to keep it secret. Only IT teams and the heads of the stations in question were informed.

[…]

Employees whose data was within reach of the hackers were told only that they needed to change their password, and were not informed that their personal details had been compromised. That decision not to disclose any details stems from a “cover-up culture,” the anonymous IT official who leaked the internal report told the publication.

The report notes that the extent of the damage could not be calculated, but one techie – it’s not clear whether this is the same person who leaked the report – estimated that 400GB of data had been pulled from United Nations servers.

Most worrying is the fact the UN Office of the High Commissioner for Human Rights (OHCHR) was one of those compromised. The OHCHR deals with highly sensitive information from people who put their lives at risk to uncover human rights abuses.

Making matters worse, IT specialists had warned the UN for years that it was at risk from hacking. An audit in 2012 identified an “unacceptable level of risk,” and resulted in a restructure that consolidated servers, websites, and typical services like email, and then outsourced them to commercial providers at a cost of $1.7bn.

But internal warnings about lax security continued, and an official audit in 2018 was full of red flags. “The performance management framework had not been implemented,” it stated, adding that there were “policy gaps in areas of emerging concern, such as the outsourcing of ICT services, end-user device usage, information-sharing, open data and the reuse and safe disposal of decommissioned ICT equipment.”

There were lengthy delays in security projects, and, internally, departments were ignoring compliance efforts. The audit “noted with concern” that 28 of the 37 internal groups hadn’t responded at all, and that of the nearly 1,500 websites and web apps identified, only a single one had carried out a security assessment.

The audit also found that less than half of the 38,105 staff had done a compulsory course in basic IT security that had been designed to help reduce overall security risks. In short, this was an accident waiting to happen, especially given the UN’s high-profile status.

As to the miscreants’ entry point, it was a known flaw in Microsoft SharePoint (CVE-2019-0604) for which a software patch had been available for months yet the UN had failed to apply it.

The hole can be exploited by a remote attacker to bypass logins and issue system-level commands – in other words, a big problem from a security standpoint. The hackers broke into a vulnerable SharePoint deployment in Vienna and then, with admin access, moved within the organization’s networks to access the Geneva headquarters and then the OHCHR.

[…]

Source: UN didn’t patch SharePoint, got mega-hacked, covered it up, kept most staff in the dark, finally forced to admit it • The Register

Lab-Grown Heart Muscles Have Been Transplanted Into a Human For The First Time

On Monday, researchers from Japan’s Osaka University announced the successful completion of a first-of-its-kind heart transplant.

Rather than replacing their patient’s entire heart with a new organ, these researchers placed degradable sheets containing heart muscle cells onto the heart’s damaged areas – and if the procedure has the desired effect, it could eventually eliminate the need for full heart transplants in some cases.

To grow the heart muscle cells, the team started with induced pluripotent stem (iPS) cells. These are stem cells that researchers create by taking an adult’s cells – often from their skin or blood – and reprogramming them back into their embryonic-like pluripotent state.

At that point, researchers can coax the iPS cells into becoming whatever kind of cell they’d like. In the case of this Japanese study, the researchers created heart muscle cells from the iPS cells before placing them on small sheets.

The patient who received the transplant suffers from ischemic cardiomyopathy, a condition in which a person’s heart has trouble pumping because its muscles don’t receive enough blood.

In severe cases, the condition can require a heart transplant, but the team from Osaka University hopes that the muscle cells on the sheet will secrete a protein that helps regenerate blood vessels, thereby improving the patient’s heart function.

The researchers plan to monitor the patient for the next year, and they hope to conduct the same procedure on nine other people suffering from the same condition within the next three years.

If all goes well, the procedure could become a much-needed alternative to heart transplants – not only is sourcing iPS cells far easier than finding a suitable donor heart, but a recipient’s immune system is more likely to tolerate the cells than a new organ.

Source: Lab-Grown Heart Muscles Have Been Transplanted Into a Human For The First Time

Swarm Drones Demonstrate Tactics to Conduct Urban Raid

In its third field experiment, DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program deployed swarms of autonomous air and ground vehicles to demonstrate a raid in an urban area. The OFFSET program envisions swarms of up to 250 collaborative autonomous systems providing critical insights to small ground units in urban areas where limited sight lines and tight spaces can obscure hazards, as well as constrain mobility and communications.

In an interactive urban raid scenario, Swarm Systems Integrator teams deployed their assets in the air and on the ground to conduct the DARPA-designed mission, seeking multiple simulated items of interest located in the buildings at the Combined Arms Collective Training Facility (CACTF) at the Camp Shelby Joint Forces Training Center in Mississippi.

The initial phase of the OFFSET swarm’s mission is to gather intelligence about the urban area of operations. In the field experiment scenario, AprilTags – a type of 2D bar code often used in robotics – were placed on and in buildings and throughout the urban environment to represent items of interest requiring further investigation and/or hazards to avoid or render safe. As the swarm relayed information acquired from the tags, human swarm tacticians adaptively employed various swarm tactics their teams had developed to isolate and secure the building(s) containing the identified items. Concurrently, separate subswarms were also often tasked to maintain situational awareness and continue observation of the surrounding environment. The complex scenario is designed to inspire and incentivize such dynamic employment of large-scale heterogeneous robotic teams to carry out these diverse tasks.

OFFSET includes two main performer types: Swarm Systems Integrators and Swarm Sprinters. The integrators, Northrop Grumman and Raytheon BBN, create OFFSET architectures, interfaces, and their respective Swarm Tactics Exchanges, which house tools to help performers design tactics by composing collective behaviors, algorithms, and existing swarm tactics. The sprinters perform focused tasks and deliver additional technologies to merge with system integrators.

In the Camp Shelby experiment, Swarm Sprinters Charles River Analytics, Inc., Case Western University, and Northwestern University demonstrated the ability to integrate novel interactions and interface modalities for enhanced human-swarm teaming, which allows the human operator to use interactions such as gestures or haptic touch to direct the swarm. Carnegie Mellon University and Soar Technology incorporated their developments in operational swarm tactics, such as providing the swarm the capability to search and map a building or automate resource allocation.
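The “automate resource allocation” capability mentioned above can be illustrated with a toy allocator. This is not Carnegie Mellon’s or Soar Technology’s actual method, and all the drone and building names are made up; it simply shows the core idea of splitting a heterogeneous swarm into subswarms, one per objective:

```python
import math

def allocate_subswarms(drones, targets):
    """Greedy nearest-target allocation: each drone (id, (x, y)) is
    assigned to the closest target (name, (x, y)), producing one
    subswarm per target. A toy stand-in for the swarm resource-
    allocation tactics described in the article."""
    subswarms = {name: [] for name, _ in targets}
    for drone_id, (dx, dy) in drones:
        name, _ = min(targets,
                      key=lambda t: math.hypot(t[1][0] - dx, t[1][1] - dy))
        subswarms[name].append(drone_id)
    return subswarms

drones = [("uav1", (0, 0)), ("uav2", (9, 9)), ("ugv1", (1, 2))]
targets = [("building_A", (0, 1)), ("building_B", (10, 10))]
# → {'building_A': ['uav1', 'ugv1'], 'building_B': ['uav2']}
```

A real system would weigh much more than distance (sensor payload, battery, comms reachability), but the shape of the problem, many assets to a few objectives with leftover assets kept on overwatch, is the same.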

“It has been fascinating to watch the Swarm Sprinters, who may not have been previously exposed to realistic operational settings, begin to understand why it’s so difficult to operate in dense, urban environments,” says Timothy Chung, the OFFSET program manager in DARPA’s Tactical Technology Office (TTO). “The Swarm Sprinters brought a number of novel technologies they have developed over the last 6-9 months and successfully integrated and tested their developments on physical platforms in real-world environments, which was exciting to see.”

Previous field experiments took place at the U.S. Army’s Camp Roberts in Paso Robles, California, and the Selby Combined Arms Collective Training Facility in Fort Benning, Georgia. Additional field experiments are targeted at six-month intervals.

More information about OFFSET and swarm sprint thrust areas is available on DARPA’s YouTube channel and website: https://youtu.be/c7KPBHPEMM0 and http://www.darpa.mil/work-with-us/offensive-swarm-enabled-tactics.

Source: OFFSET Swarm Systems Integrators Demonstrate Tactics to Conduct Urban Raid

In ‘Sophisticated’ Incident, Dozens of U.N. Servers Hacked including their active directory server

An internal confidential document from the United Nations, leaked to The New Humanitarian and seen by The Associated Press, says that dozens of servers were “compromised” at offices in Geneva and Vienna.

Those include the U.N. human rights office, which has often been a lightning rod of criticism from autocratic governments for its calling-out of rights abuses.

One U.N. official told the AP that the hack, which was first detected over the summer, appeared “sophisticated” and that the extent of the damage remains unclear, especially in terms of personal, secret or compromising information that may have been stolen. The official, who spoke only on condition of anonymity to speak freely about the episode, said systems have since been reinforced.

The level of sophistication was so high that it was possible a state-backed actor might have been behind it, the official said.

There were conflicting accounts about the significance of the incursion.

“We were hacked,” said U.N. human rights office spokesman Rupert Colville. “We face daily attempts to get into our computer systems. This time, they managed, but it did not get very far. Nothing confidential was compromised.”

The breach, at least at the human rights office, appears to have been limited to the so-called active directory – including a staff list and details like e-mail addresses – but not access to passwords. No domain administrator’s account was compromised, officials said.

The United Nations headquarters in New York as well as the U.N.’s sprawling Palais des Nations compound in Geneva, its European headquarters, did not immediately respond to questions from the AP about the incident.

Sensitive information at the human rights office about possible war criminals in the Syrian conflict and perpetrators of Myanmar’s crackdown against Rohingya Muslims was not compromised, because it is held in extremely secure conditions, the official said.

The internal document from the U.N. Office of Information and Technology said 42 servers were “compromised” and another 25 were deemed “suspicious,” nearly all at the sprawling United Nations offices in Geneva and Vienna. Three of the “compromised” servers belonged to the Office of the High Commissioner for Human Rights, which is located across town from the main U.N. office in Geneva, and two were used by the U.N. Economic Commission for Europe.

Technicians at the United Nations office in Geneva, the world body’s European hub, on at least two occasions worked through weekends in recent months to isolate the local U.N. data center from the Internet, re-write passwords and ensure the systems were clean.

The hack comes amid rising concerns about computer or mobile phone vulnerabilities, both for large organizations like governments and the U.N. as well as for individuals and businesses.

Source: In ‘Sophisticated’ Incident, Dozens of U.N. Servers Hacked | Time

They are downplaying the importance of an Active Directory server – it contains all the users and their details, so it’s a pretty big deal.

Social media scrapers Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies

A very questionable facial recognition tool being offered to law enforcement was recently exposed by Kashmir Hill for the New York Times. Clearview — created by a developer previously best known for an app that let people put Trump’s “hair” on their own photos — is being pitched to law enforcement agencies as a better AI solution for all their “who TF is this guy” problems.

Clearview doesn’t limit itself to law enforcement databases — ones (partially) filled with known criminals and arrestees. Instead of using known quantities, Clearview scrapes the internet for people’s photos. With the click of an app button, officers are connected to Clearview’s stash of 3 billion photos pulled from public feeds on Twitter, LinkedIn, and Facebook.

Most of the scraped sites have already objected to the scraping. While it may violate their terms of service, it’s not completely settled that scraping content from public feeds is actually illegal. However, peeved companies can attempt to shut off their firehoses, which is what Twitter is in the process of doing.

Clearview has made some bold statements about its effectiveness — statements that haven’t been independently confirmed. Clearview did not submit its software to NIST’s recent roundup of facial recognition AI, but it most likely would not have fared well. Even more established software performed poorly, misidentifying minorities almost 100 times more often than it did white males.

The company claims it finds matches 75% of the time. That doesn’t actually mean it finds the right person 75% of the time. It only means the software finds someone that matches submitted photos three-quarters of the time. Clearview has provided no stats on its false positive rate. That hasn’t stopped it from lying about its software and its use by law enforcement agencies.
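The gap between “returns a match 75% of the time” and “identifies the right person” comes down to precision, which depends on the false-positive rate Clearview hasn’t published and on how often the queried face is actually in the database. A quick illustration, with every number hypothetical:

```python
def precision(prevalence, hit_rate, false_positive_rate):
    """Fraction of returned matches that are actually the right person.
    prevalence: share of queries where the subject really is in the database.
    hit_rate: P(match returned | subject in database).
    false_positive_rate: P(match returned | subject NOT in database)."""
    true_pos = prevalence * hit_rate
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: half the queried faces really are in the database,
# a 75% hit rate, and a 10% false-positive rate.
p = precision(prevalence=0.5, hit_rate=0.75, false_positive_rate=0.10)
# p ≈ 0.88 here, but it collapses when true matches are rare:
p_rare = precision(prevalence=0.05, hit_rate=0.75, false_positive_rate=0.10)
# p_rare ≈ 0.28: most returned "matches" would be the wrong person.
```

This is why a headline match rate, without a false-positive rate, says almost nothing about how often an officer acting on a Clearview hit would be pointing at the right person.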

A BuzzFeed report based on public records requests and conversations with the law enforcement agencies says the company’s sales pitches are about 75% bullshit.

Clearview AI, a facial recognition company that says it’s amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds. “How a Terrorism Suspect Was Instantly Identified With Clearview,” read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest.

Here’s what the NYPD had to say about Clearview’s claims in its marketing materials:

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

The NYPD also said it had no “institutional relationship” with Clearview, contradicting the company’s sales pitch insinuations. The NYPD was not alone in its rejection of Clearview’s claims.

Clearview also claimed to be instrumental in apprehending a suspect wanted for assault. In reality, the suspect turned himself in to the NYPD. The PD again pointed out Clearview played no role in this investigation. It also had nothing to do with solving a subway groping case (the tip that resulted in an arrest was provided to the NYPD by the Guardian Angels) or an alleged “40 cold cases solved” by the NYPD.

The company says it is “working with” over 600 police departments. But BuzzFeed’s investigation has uncovered at least two cases where “working with” simply meant submitting a lead to a PD tip line. Most likely, this is only the tip of the iceberg. As more requested documents roll in, there’s a very good chance this “working with” BS won’t just be a two-off.

Clearview’s background appears to be as shady as its public claims. In addition to its founder’s links to far-right groups (first uncovered by Kashmir Hill), the company pumped up its reputation by deploying a bunch of sock puppets.

Ton-That set up fake LinkedIn profiles to run ads about Clearview, boasting that police officers could search over 1 billion faces in less than a second.

These are definitely not the ethics you want to see from a company pitching dubious facial recognition software to law enforcement agencies. Some agencies may perform enough due diligence to move forward with a more trustworthy company, but others will be impressed with the lower cost and the massive amount of photos in Clearview’s database and move forward with unproven software created by a company that appears to be willing to exaggerate its ability to help cops catch crooks.

If it can’t tell the truth about its contribution to law enforcement agencies, it’s probably not telling the truth about the software’s effectiveness. If cops buy into Clearview’s PR pitches, the collateral damage will be innocent people’s freedom.

Source: Facial Recognition Company Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies | Techdirt

MIDI 2.0 overhauls the music interface for the first time in 35 years

About 35 years after the MIDI 1.0 Detailed Specification was established, instrument manufacturers voted unanimously on January 18th to adopt the new MIDI 2.0 spec. So what’s changing for audio interfaces? The “biggest advance in music technology in decades” brings two-way communication, among many other new features while remaining backwards compatible with the old spec.

Companies like Roland, Native Instruments, Korg and Yamaha are part of the MIDI Manufacturers Association behind the update, and we’ve already seen Roland’s A-88MKII keyboard that will be ready for the spec when it goes on sale in March.


And it’s about time for a new standard. While the 5-pin DIN cables used since the 1980s couldn’t handle high-resolution data, the MIDI 2.0 spec is ready for any digital connector you’d like to use, and will start by targeting USB ports. That allows for far more accurate timing, and far greater resolution by upgrading messages from 7 bits to as much as 32 bits.
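To get a feel for that resolution jump: a MIDI 1.0 controller value has 128 steps, a MIDI 2.0 one has over four billion. One common way to widen a 7-bit value without losing the full range is bit repetition, so 0 stays 0 and 127 maps to the 32-bit maximum. This is only an illustration; the MIDI 2.0 specification defines its own default translation rules:

```python
def widen_7_to_32(value):
    """Widen a 7-bit MIDI 1.0 value (0-127) to 32 bits by repeating its
    bit pattern, so 0 -> 0 and 127 -> 0xFFFFFFFF. Illustrative only:
    the MIDI 2.0 spec defines its own translation rules."""
    assert 0 <= value <= 127
    return ((value << 25) | (value << 18) | (value << 11)
            | (value << 4) | (value >> 3)) & 0xFFFFFFFF

assert widen_7_to_32(0) == 0
assert widen_7_to_32(127) == 0xFFFFFFFF  # full scale maps to full scale
```

Going the other way (32 bits down to 7) is just taking the top 7 bits, which is what backwards compatibility with MIDI 1.0 gear relies on.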

It should also make instruments easier to use, with profiles that will automatically set up gear for its intended use and a feature called Property Exchange that uses JSON (JavaScript Object Notation) to send over more detailed configuration info. You’ll spend less time shuffling through presets and more time simply making music, plus some of these features can be used even on older MIDI 1.0-spec hardware. As Reverb.com notes, there’s still room for improvement on things like networking multiple devices, but it represents a massive upgrade over the old standard, and will be useful for anyone trying to make a Grammy-winning album, whether it’s in their bedroom or a fully-kitted studio.
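Since Property Exchange is described as JSON-based, a device’s self-description is just structured data any client can parse. The payload below is purely illustrative, in the spirit of Property Exchange but not the actual schema from the MIDI 2.0 specification:

```python
import json

# Illustrative only: a made-up device-info blob in the spirit of
# Property Exchange, NOT the real schema from the MIDI 2.0 spec.
device_info = json.dumps({
    "manufacturer": "ExampleCo",
    "model": "SynthMark II",
    "channels": 16,
    "programs": [{"bank": 0, "program": 5, "name": "Warm Pad"}],
})

# A DAW or controller could read this and configure itself:
parsed = json.loads(device_info)
```

The practical upshot is what the article describes: instead of shuffling through anonymous preset numbers, software can ask the instrument what it is and what its patches are called.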

Source: MIDI 2.0 overhauls the music interface for the first time in 35 years | Engadget

Mozilla moves to monetize Thunderbird, transfers project to new subsidiary

The Mozilla Foundation announced today that it was moving the Thunderbird email client to a new subsidiary named the MZLA Technologies Corporation.

Mozilla said that Thunderbird will remain free and open source, but that moving the project from the foundation into a corporate entity will allow it to monetize the product and pay for its development more easily than before.

Currently, Thunderbird is primarily being kept alive through charitable donations from the product’s userbase.

“Moving to MZLA Technologies Corporation will not only allow the Thunderbird project more flexibility and agility, but will also allow us to explore offering our users products and services that were not possible under the Mozilla Foundation,” said Philipp Kewisch, Mozilla Product Manager.

“The move will allow the project to collect revenue through partnerships and non-charitable donations, which in turn can be used to cover the costs of new products and services,” Kewisch added.

Source: Mozilla moves to monetize Thunderbird, transfers project to new subsidiary | ZDNet

Google to translate and transcribe conversations in real time

Google on Tuesday unveiled a feature that’ll let people use their phones to transcribe a conversation and translate it, in real time, into another language. The tool will be available for the Google Translate app in the coming months, said Bryan Lin, an engineer on the Translate team.

Right now the feature is being tested in several languages, including Spanish, German and French. Lin said the computing will take place on Google’s servers and not on people’s devices.

Source: Google to translate and transcribe conversations in real time – CNET

Clearview AI Told Cops To “Run Wild” With Its Creepy Face database, access given away without checks and sold to private firms despite claiming otherwise

Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites, is scrambling to deal with calls for bans from advocacy groups and legal threats. These troubles come after news reports exposed its questionable data practices and misleading statements about working with law enforcement.

Following stories published in the New York Times and BuzzFeed News, the Manhattan-based startup received cease-and-desist letters from Twitter and the New Jersey attorney general. It was also sued in Illinois in a case seeking class-action status.

Despite its legal woes, Clearview continues to contradict itself, according to documents obtained by BuzzFeed News that are inconsistent with what the company has told the public. In one example, the company, whose code of conduct states that law enforcement should only use its software for criminal investigations, encouraged officers to use it on their friends and family members.

“To have these technologies rolled out by police departments without civilian oversight really raises fundamental questions about democratic accountability,” Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News.

In the aftermath of revelations about its technology, Clearview has tried to clean up its image by posting informational webpages, creating a blog, and trotting out surrogates for media interviews, including one in which an investor claimed Clearview was working with “over a thousand independent law enforcement agencies.” Previously, Clearview had stated that the number was around 600.

Clearview has also tried to allay concerns that its technology could be abused or used outside the scope of police investigations. In a code of conduct that the company published on its site earlier this month, it said its users should “only use the Services for law enforcement or security purposes that are authorized by their employer and conducted pursuant to their employment.”

It bolstered that idea with a blog post on Jan. 23, which stated, “While many people have advised us that a public version would be more profitable, we have rejected the idea.”

“Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only,” the post stated.

But in a November email to a police lieutenant in Green Bay, Wisconsin, a company representative encouraged a police officer to use the software on himself and his acquaintances.

“Have you tried taking a selfie with Clearview yet?” the email read. “It’s the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney.

“Your Clearview account has unlimited searches. So feel free to run wild with your searches,” the email continued. The city of Green Bay would later agree on a $3,000 license with Clearview.

An email from Clearview to an officer in Green Bay, Wisconsin, from November 2019. Obtained by BuzzFeed News.

Hoan Ton-That, the CEO of Clearview, claimed in an email that the company has safeguards on its product.

“As as [sic] safeguard we have an administrative tool for Law Enforcement supervisors and administrators to monitor the searches of a particular department,” Ton-That said. “An administrator can revoke access to an account at any time for any inappropriate use.”

Clearview’s previous correspondence with Green Bay police appeared to contradict what Ton-That told BuzzFeed News. In emails obtained by BuzzFeed News, the company told officers that searches “are always private and never stored in our proprietary database, which is totally separate from the photos you search.”

“So feel free to run wild with your searches.”

“It’s certainly inconsistent to, on the one hand, claim that this is a law enforcement tool and that there are safeguards — and then to, on the other hand, recommend it being used on friends and family,” Clare Garvie, a senior associate at the Georgetown Law’s Center on Privacy and Technology, told BuzzFeed News.

Clearview has also previously instructed police to act in direct violation of the company’s code of conduct, which was outlined in a blog post on Monday. The post stated that law enforcement agencies were “required” to receive permission from a supervisor before creating accounts.

But in a September email sent to police in Green Bay, the company said there was an “Invite User” button in the Clearview app that can be used to give any officer access to the software. The email encouraged police officers to invite as many people as possible, noting that Clearview would give them a demo account “immediately.”

“Feel free to refer as many officers and investigators as you want,” the email said. “No limits. The more people searching, the more successes.”

“Rewarding loyal customers”

Despite its claim last week that it “exists to help law enforcement agencies,” Clearview has also been working with entities outside of law enforcement. Ton-That told BuzzFeed News on Jan. 23 that Clearview was working with “a handful of private companies who use it for security purposes.” Marketing emails from late last year obtained by BuzzFeed News via a public records request showed the startup aided a Georgia-based bank in a case involving the cashing of fraudulent checks.

Earlier this year, a company representative was slated to speak at a Las Vegas gambling conference about casinos’ use of facial recognition as a way of “rewarding loyal customers and enforcing necessary bans.” Initially, Jessica Medeiros Garrison, whose title was stated on the conference website as Clearview’s vice president of public affairs, was listed on a panel that included the head of surveillance for Las Vegas’ Cosmopolitan hotel. Later versions of the conference schedule and Garrison’s bio removed all mentions of Clearview AI. It is unclear if she actually appeared on the panel.

A company spokesperson said Garrison is “a valued member of the Clearview team” but declined to answer questions on any possible work with casinos.

Cease and desist

Clearview has also faced legal threats from private and government entities. Last week, Twitter sent the company a cease-and-desist letter, noting that its claim to have collected photos from its site was in violation of the social network’s terms of service.

“This type of use (scraping Twitter for people’s images/likeness) is not allowed,” a company spokesperson told BuzzFeed News. The company, which asked Clearview to cease scraping and delete all data collected from Twitter, pointed BuzzFeed News to a part of its developer policy, which states it does not allow its data to be used for facial recognition.

On Friday, Clearview received a similar note from the New Jersey attorney general, who called on state law enforcement agencies to stop using the software. The letter also told Clearview to stop using clips of New Jersey Attorney General Gurbir Grewal in a promotional video on its site that claimed that a New Jersey police department used the software in a child predator sting late last year.

[…]

Clearview declined to provide a list of law enforcement agencies that were on free trials or paid contracts, stating only that the number was more than 600.

“We do not have to be hidden”

That number is lower than what one of Clearview’s investors bragged about on Saturday. David Scalzo, an early investor in Clearview through his firm, Kirenaga Partners, claimed in an interview with Dilbert creator and podcaster Scott Adams that “over a thousand independent law enforcement agencies” were using the software. The investor went on to contradict the company’s public statement that it would not make its tool available to the public, stating “it is inevitable that this digital information will be out there” and “the best thing we can do is get this technology out to everyone.”

[…]

EPIC’s letter came after an Illinois resident sued Clearview in a state district court last Wednesday, alleging the software violated the Illinois Biometric Information Privacy Act by collecting the “identifiers and information” — like facial data gathered from photos accumulated from social media — without permission. Under the law, private companies are not allowed to “collect, capture, purchase,” or receive biometric information about a person without their consent.

The complaint, which also alleged that Clearview violated the constitutional rights of all Americans, asked for class-action recognition on behalf of all US citizens, as well as all Illinois residents whose biometric information was collected. When asked, Ton-That did not comment on the lawsuit.

In legal documents given to police, obtained by BuzzFeed News through a public records request, Clearview argued that it was not subject to states’ biometric data laws including those in Illinois. In a memo to the Atlanta Police Department, a lawyer for Clearview argued that because the company’s clients are public agencies, the use of the startup’s technology could not be regulated by state law, which only governs private entities.

Cahn, the executive director of the Surveillance Technology Oversight Project, said that it was “problematic” for Clearview AI to argue it wasn’t beholden to state biometric laws.

“Those laws regulate the commercial use of these sorts of tools, and the idea that somehow this isn’t a commercial application, simply because the customer is the government, makes no sense,” he said. “This is a company with private funders that will be profiting from the use of our information.”

Amid the attention, Clearview added explanations to its site to address privacy concerns. It added an email link for people to ask questions about its privacy policy, saying that all requests will go to its data protection officer. When asked by BuzzFeed News, the company declined to name that official.

To process a request, however, Clearview is requesting more personal information: “Please submit name, a headshot and a photo of a government-issued ID to facilitate the processing of your request.” The company declined to say how it would use that information.

Source: Clearview AI Once Told Cops To “Run Wild” With Its Facial Recognition Tool

Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it. Only FF and Brave will give you some.

At the USENIX Enigma conference on Tuesday, representatives of four browser makers, Brave, Google, Microsoft, and Mozilla, gathered to banter about their respective approaches to online privacy, while urging people not to ask for too much of it.

Apple, which has advanced browser privacy standards but was recently informed that its tracking defenses can themselves be used for, er, tracking, was conspicuously absent, though it did have a tongue-tied representative at the show recruiting for privacy-oriented jobs.

The browser-focused back-and-forth was mostly cordial as the software engineers representing their companies discussed notable privacy features in the various web browsers they worked on. They stressed the benefit of collaboration on web standards and the mutually beneficial effects of competition.

Eric Lawrence, program manager on the Microsoft Edge team, touched on how Microsoft has just jettisoned 25 years of Internet Explorer code to replatform Edge on the open source Chromium project, now the common foundation for 20 or so browsers.

Beside a slide that declared “Microsoft loves the Web,” Lawrence made the case for the new Edge as a modern browser with some well-designed privacy features, including Microsoft’s take on tracking protection, which blocks most trackers in its default setting and can be made more strict, at the potential cost of site compatibility.

A slide at Enigma 2020 declaring “Microsoft loves the Web”.

Edge comes across as a reliable alternative to Chrome and should become more distinct as it evolves. It occupies a difficult space on the privacy continuum, in that it has some nice privacy features but not as many as Brave or Firefox. But Edge may find fans on the strength of the Microsoft brand since, as Lawrence emphasized, Microsoft is not new to privacy concerns.

That said, Microsoft is not far from Google in advocating not biting the hand that feeds the web ecosystem – advertising.

“The web doesn’t exist in a vacuum,” Lawrence warned. “People who are building sites and services have choices for what platforms they target. They can build a mobile application. They can take their content off the open web and put it into a walled garden. And so if we do things with privacy that hurt the open web, we could end up pushing people to less privacy for certain ecosystems.”

Lawrence pointed to a recent report about a popular Android app found to be leaking data. It took time to figure that out, he said, because mobile platforms are less transparent than the web, where it’s easier to scour source code and analyze network behavior.

Justin Schuh, engineering director on Google Chrome for trust and safety, reprised an argument he’s made previously that too much privacy would be harmful to ad-supported businesses.

“Most of the media that we consume is actually funded by advertising today,” Schuh explained. “It has been for a very long time. Now, I’m not here to make the argument that advertising is the best or only way to fund these things. But the truth is that print, radio, and TV – all these are funded primarily through advertising.”

And so too is the web, he insisted, arguing that advertising is what has made so much online content available to people who otherwise wouldn’t have access to it across the globe.

Schuh said in the context of the web, two trends concern him. One, he claimed, is that content is leaving because it’s easier to monetize in apps – but he didn’t cite a basis for that assertion.

The other is the rise of covert tracking, which arose, as Schuh tells it, because advertisers wanted to track people across multiple devices. So they turned to looking at IP-based fingerprinting and metadata tracking, and the joining of data sets to identify people as they shift between phone, computer, and tablet.

Covert tracking also became more popular, he said, because advertisers wanted to bypass anti-tracking mechanisms. Thus, we have privacy-invading practices like CNAME cloaking, site fingerprinting, hostname rotation, and the like because browser users sought privacy.
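CNAME cloaking, one of the practices listed there, works by pointing a first-party subdomain at the tracker’s servers so the tracker’s requests look like first-party traffic and slip past blocklists keyed on third-party domains. A sketch of the DNS record involved (hostnames are hypothetical):

```
; The publisher delegates one of its own subdomains to the tracker:
metrics.news-site.example.  300  IN  CNAME  collect.trackerco.example.
; Browsers now treat cookies and requests for metrics.news-site.example
; as first-party, even though trackerco ultimately receives them.
```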

Schuh made the case for Google’s Privacy Sandbox proposal, a set of controversial specs being developed ostensibly to enhance privacy by reducing data available for tracking and browser fingerprinting while also giving advertisers the ability to target ads.

“Broadly speaking, advertisers don’t actually need your data,” said Schuh. “All that they really want is to monetize efficiently.”

But given the willingness of advertisers to circumvent user privacy choices, the ad industry’s consistent failure to police bad behavior, and the persistence of ad fraud and malicious ads, it’s difficult to accept that advertisers can be trusted to behave.

Tanvi Vyas, principal engineer at Mozilla, focused on the consequences of the current web ecosystem, where data is gathered to target and manipulate people. She reeled off a list of social harms arising from the status quo.

“Democracies are compromised and elections around the world are being tampered with,” she said. “Populations are manipulated and micro-targeted. Fake news is delivered to just the right audience at the right time. Discrimination flourishes, and emotional harm is inflicted on specific individuals when our algorithms go wrong.”

Thanks, Facebook, Google, and Twitter.

Worse still, Vyas said, the hostile ecosystem has a chilling effect on sophisticated users who understand online tracking and prevents them from taking action. “At Mozilla, we think this is an unacceptable cost for society to pay,” she said.

Vyas described various pro-privacy technologies implemented in Firefox, including Facebook Container, which sandboxes Facebook trackers so they can’t follow users onto third-party websites. She also argued for legislation to improve online privacy, though Lawrence, recalling his days working on Internet Explorer, noted that privacy rules tied to a scheme known as P3P had proved ineffective two decades ago.

Speaking for Brave, CISO Yan Zhu argued for a slightly different approach, though one that still involves engaging with the ad industry to some extent.

“The main goal of Brave is we want to repair the privacy problems in the existing ad ecosystem in a way that no other browser has really tried, while giving publishers a revenue stream,” she said. “Basically, we have options to set micropayments to publishers, and also an option to see privacy preserving ads.”

Micropayments have been tried before but they’ve largely failed, assuming you don’t consider in-app payments to be micropayments.

Faced with a plea from an attendee for more of the browser makers to support micropayments instead of relying on ads, Schuh said, “I would absolutely love to see micropayments succeed. I know there have been a bunch of efforts at Google and various other companies to do it. It turns out that the payment industry itself is really, really complicated. And there are players in there that expect a fairly large cut. And so long as that exists, I don’t know if there’s a path forward.”

It now falls to Brave to prove otherwise.

Shortly thereafter, Gabriel DeWitt, VP of product at global ad marketplace Index Exchange, took a turn at the audience mic, introduced himself, and then lightheartedly asked other attendees not to throw anything at him.

Insisting that his company also cares about user privacy, despite opinions to the contrary, he asked the panelists how he could better collaborate with them.

It’s worth noting that next week, when Chrome 80 debuts, Google intends to introduce changes in the way it handles cookies that will affect advertisers. What’s more, the company has said it plans to phase out cookies entirely in a few years.

Schuh, from Google, elicited a laugh when he said, “I guess I can take this one, because that’s what everyone is expecting.”

We were expecting privacy. We got surveillance capitalism instead.

Source: Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it • The Register

Ubiquiti says UniFi routers will beam performance data back to the mothership automatically – no consent asked, no opt-out.

Ubiquiti Networks is once again under fire, this time for quietly rewriting its telemetry policy after changing how its UniFi routers collect data without telling anyone.

The changes were identified in a new help document published on the US manufacturer’s website. The document differentiates between “personal data”, which includes everything that identifies a specific individual, and “other data”, which is everything else.

The document says that while users can continue to opt out of having their “personal data” collected, their “other data” – anonymous performance and crash information – will be “automatically reported”. In other words, you ain’t got no choice.

This is a shift from Ubiquiti’s last statement on data collection three months ago, which promised an opt-out button for all data collection in upcoming versions of its firmware.

A Ubiquiti representative confirmed in a forum post that the changes will automatically affect all firmware beyond 4.1.0, and that users can stop “other data” being collected by manually editing the software’s config file.

“Yes, it should be updated when we go to public release, it’s on our radar,” the rep wrote. “But I can’t guarantee it will be updated in time.”

The drama unfolded when netizens grabbed their pitchforks and headed for the company’s forums to air their grievances. “Come on UBNT,” said user leonardogyn. “PLEASE do not insist on making it hard (or impossible) to fully and easily disable sending of Analytics data. I understand it’s a great tool for you, but PLEASE consider that’s [sic] ultimately us, the users, that *must* have the option to choose to participate on it.”

The same user also pointed out that, even when the “Analytics” opt-out button is selected in the 5.13.9 beta controller software, Ubiquiti is still collecting some data. The person called the opt-out option “a misleading one, not to say a complete lie”.

Other users were similarly outraged. “This was pretty much the straw that broke the camel’s back, to be honest,” said elcid89. “I only use Unifi here at the house, but between the ongoing development instability, frenetic product range, and lack of responsiveness from staff, I’ve been considering junking it for a while now. This made the decision for me – switching over to Cisco.”

One user said that the firmware was still sending their data to two addresses even after they modified the config file.

Source: You spoke, we didn’t listen: Ubiquiti says UniFi routers will beam performance data back to mothership automatically • The Register

New NZXT Liquid CPU Cooler Plays Animated GIFs, Because Awesome!

PC hardware maker NZXT has just announced the latest additions to its line of liquid CPU coolers, the Kraken X-3 and Z-3. The X-3 has a bright LED ring and rotates so the logo can be repositioned. The Z-3 comes with a 2.36-inch, 24-bit color LCD screen capable of displaying images, computer data, or animated GIFs, because maybe that is a thing people want.

The animated GIF of the CPU cooler displaying animated GIFs atop this post? With the Kraken Z-3 installed on my PC, I could display that GIF of a CPU cooler displaying GIFs as a GIF on my CPU cooler. I could put some anime there. Or maybe some looping pornography. Then I would turn my computer to the side with the glass window facing away from me and never see it again. I need a better way to display the glowing and flashing things inside of my PC. Maybe a mirror or something.

I’ve found NZXT liquid cooling quite reliable in the past. The idea of that reliability combined with this frivolity tickles me to no end. Look, they’ve even made a little trailer showing it off.

The Kraken X-3 and Z-3 are available for purchase in the U.S. starting today. The X-3 is available in 240mm, 280mm, and 360mm sizes for $130, $150, and $180. The Z-3, AKA the one with the GIFs, costs $250 for the 280mm size and $280 for the 360mm. That means the ability to have an animated GIF on your CPU cooler costs $100.


Worth it.

Source: New Liquid CPU Cooler Plays Animated GIFs, Because Why Not

Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool, which won’t stop any tracking whatsoever

In a blog post earlier today, the famously privacy-conscious Mark Zuckerberg announced the official rollout of the long-awaited Off-Facebook Activity tool, in honor of Data Privacy Day (which is apparently a thing). The tool lets Facebook users monitor and manage the connections between their Facebook profiles and their off-platform activity.

“To help shed more light on these practices that are common yet not always well understood, today we’re introducing a new way to view and control your off-Facebook activity,” Zuckerberg said in the post. “Off-Facebook Activity lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to.”

Zuck’s use of the phrases “control your off-Facebook activity” and “clear this information from your account” is kinda misleading—you’re not really controlling or clearing much of anything. By using this tool, you’re just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop shop to opt out of any ties between your on-platform activity on Facebook or Instagram and the sites and services you peruse daily that have some sort of Facebook software installed.

The only thing you’re clearing is a connection Facebook made between its data and the data it gets from third parties, not the data itself.


As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn’t’ve been surprised that Facebook hoovered data from more than 520 partners across the internet—either sites I’d visited or apps I’d downloaded. For Gizmodo alone, Facebook tracked “252 interactions” drawn from the handful of plug-ins our blog has installed. (To be clear, you’re going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e.—not just on our site.)

These plug-ins—or “business tools,” as Facebook describes them—are the pipeline that the company uses to ascertain your off-platform activity and tie it to your on-platform identity. As Facebook describes it:

– Jane buys a pair of shoes from an online clothing and shoe store.

– The store shares Jane’s activity with us using our business tools.

– We receive Jane’s off-Facebook activity and we save it with her Facebook account. The activity is saved as “visited the Clothes and Shoes website” and “made a purchase”.

– Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store.
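The “business tools” in step two are typically small snippets embedded in the store’s pages, the best known being the Facebook pixel. A minimal sketch of the markup a store like Jane’s might carry (the pixel ID is a placeholder, and the sketch assumes Facebook’s standard pixel loader script has already been included on the page):

```
<script>
  /* Assumes Facebook's standard loader has defined the global fbq(). */
  fbq('init', '000000000000000');  /* placeholder pixel ID for the store */
  fbq('track', 'PageView');        /* "visited the Clothes and Shoes website" */
  fbq('track', 'Purchase', {value: 59.99, currency: 'USD'});  /* "made a purchase" */
</script>
```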

Here’s the catch, though: When I hit the handy “clear history” button that Facebook now provides, it won’t do jack shit to stop a given shoe store from sharing my data with Facebook—which explicitly laid this out for me when I hit that button:

Your activity history will be disconnected from your account. We’ll continue to receive your activity from the businesses and organizations you visit in the future.

Yes, it’s confusing. Baffling, really. But basically, Facebook has profiles on users and non-users alike. Those of you who have Facebook profiles can use the new tool to disconnect your Facebook data from the data the company receives from third parties. Facebook will still have that third-party-collected data and it will continue to collect more data, but that bucket of data won’t be connected to your Facebook identity.


The data third parties collect about you technically isn’t Facebook’s responsibility to begin with. If I buy a pair of new sneakers from Steve Madden, where that purchase or browsing data goes is ultimately in Steve Madden’s metaphorical hands. And thanks to the wonders of targeted advertising, even the sneakers I’m purchasing in-store aren’t safe from being added as a data point tied to the profile Facebook has gathered on me as a consumer. Naturally, it behooves whoever runs marketing at Steve Madden—or anywhere, really—to plug in as many of those data points as they possibly can.

For the record, I also tried toggling my off-Facebook activity to keep it from being linked to my account, but was told that, while the company would still be getting this information from third parties, it would just be “disconnected from [my] account.”

Put another way: The way I browse any number of sites and apps will ultimately still make its way to Facebook, and still be used for targeted advertising across… those sites and apps. Only now, my on-Facebook life—the cat groups I join, the statuses I comment on, the concerts I’m “interested” in (but never actually attend)—won’t be a part of that profile.

Or put another way: Facebook just announced that it still has its tentacles in every part of your life in a way that’s impossible to untangle yourself from. Now, it just doesn’t need the social network to do it.

Source: Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool

Google releases new dataset search

You can now filter the results based on the types of dataset that you want (e.g., tables, images, text), or whether the dataset is available for free from the provider. If a dataset is about a geographic area, you can see the map. Plus, the product is now available on mobile and we’ve significantly improved the quality of dataset descriptions. One thing hasn’t changed however: anybody who publishes data can make their datasets discoverable in Dataset Search by using an open standard (schema.org) to describe the properties of their dataset on their own web page.
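The open standard in question is ordinary schema.org markup embedded in the dataset’s own web page. A minimal sketch of the JSON-LD form (the dataset name, description, and URLs are placeholders):

```
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Example hourly air-quality readings",
  "description": "Placeholder description of the dataset.",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "isAccessibleForFree": true,
  "distribution": [{
    "@type": "DataDownload",
    "encodingFormat": "text/csv",
    "contentUrl": "https://example.com/air-quality.csv"
  }]
}
</script>
```

Once the page is crawled, the dataset becomes discoverable in Dataset Search with no registration step.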

Source: Discovering millions of datasets on the web

Find it here

Leaked AVAST Documents Expose the Secretive Market for Your Web Browsing Data: Google, MS, Pepsi, they all buy it – Really, uninstall it now!

An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show that the sale of this data is both highly sensitive and, in many cases, supposed to remain confidential between the company selling the data and the clients purchasing it.

The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.

Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.

The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies’ LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.
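A sense of why “anonymized” data at this precision is risky: a persistent device ID plus exact timestamps can be re-identified by joining against any other dataset that knows even one of the user’s actions (say, a retailer’s own order log). A toy sketch in Python, with a record layout invented purely for illustration:

```python
from datetime import datetime

# Toy "anonymized" clickstream rows: (device_id, timestamp, url).
# The layout is invented for illustration; real feeds are far larger.
clicks = [
    ("device-7f3a", datetime(2019, 12, 1, 12, 3), "https://www.amazon.com/order/123"),
    ("device-7f3a", datetime(2019, 12, 1, 12, 9), "https://www.youporn.com/watch/456"),
    ("device-91bc", datetime(2019, 12, 1, 14, 0), "https://www.example.com/"),
]

def reidentify(clicks, known_url, known_time):
    """Device IDs matching one externally known action, e.g. from an order log."""
    return {dev for dev, ts, url in clicks if url == known_url and ts == known_time}

def history(clicks, device_id):
    """Everything else that 'anonymous' device did, in order."""
    return [url for dev, _, url in clicks if dev == device_id]
```

One known purchase pins the device ID, and the device ID then links every other visit in the feed to the same person.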

[…]

Until recently, Avast was collecting the browsing data of its customers who had installed the company’s browser plugin, which is designed to warn users of suspicious websites. Security researcher and Adblock Plus creator Wladimir Palant published a blog post in October showing that Avast harvests user data with that plugin. Shortly after, browser makers Mozilla, Opera, and Google removed Avast’s and subsidiary AVG’s extensions from their respective browser extension stores. Avast had previously explained this data collection and sharing in a blog and forum post in 2015. Avast has since stopped sending browsing data collected by these extensions to Jumpshot, Avast said in a statement to Motherboard and PCMag.

[…]

However, the data collection is ongoing, the source and documents indicate. Instead of harvesting information through software attached to the browser, Avast is doing it through the anti-virus software itself. Last week, months after it was spotted using its browser extensions to send data to Jumpshot, Avast began asking its existing free antivirus consumers to opt-in to data collection, according to an internal document.

“If they opt-in, that device becomes part of the Jumpshot Panel and all browser-based internet activity will be reported to Jumpshot,” an internal product handbook reads. “What URLs did these devices visit, in what order and when?” it adds, summarising what questions the product may be able to answer.

Senator Ron Wyden, who in December asked Avast why it was selling users’ browsing data, said in a statement, “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”

[…]

On its website and in press releases, Jumpshot names Pepsi, and consulting giants Bain & Company and McKinsey as clients.

As well as Expedia, Intuit, and L’Oréal, other companies not already mentioned in public Jumpshot announcements include coffee company Keurig, YouTube promotion service vidIQ, and consumer insights firm Hitwise. None of those companies responded to a request for comment.

On its website, Jumpshot lists some previous case studies for using its browsing data. Magazine and digital media giant Condé Nast, for example, used Jumpshot’s products to see whether the media company’s advertisements resulted in more purchases on Amazon and elsewhere. Condé Nast did not respond to a request for comment.

ALL THE CLICKS

Jumpshot sells a variety of products based on data collected by Avast’s antivirus software installed on users’ computers. Clients in the institutional finance sector often buy a feed of the top 10,000 domains that Avast users are visiting to try to spot trends, the product handbook reads.

Another Jumpshot product is the company’s so-called “All Clicks Feed.” It allows a client to buy information on all of the clicks Jumpshot has seen on a particular domain, like Amazon.com, Walmart.com, Target.com, BestBuy.com, or Ebay.com.

In a tweet sent last month intended to entice new clients, Jumpshot noted that it collects “Every search. Every click. Every buy. On every site” [emphasis Jumpshot’s].

[…]

One company that purchased the All Clicks Feed is New York-based marketing firm Omnicom Media Group, according to a copy of its contract with Jumpshot. Omnicom paid Jumpshot $2,075,000 for access to data in 2019, the contract shows. It also included another product called “Insight Feed” for 20 different domains. The fee for data in 2020 and then 2021 is listed as $2,225,000 and $2,275,000 respectively, the document adds.

[…]

The internal product handbook says that device IDs do not change for each user, “unless a user completely uninstalls and reinstalls the security software.”
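That persistence is exactly what makes “anonymized” click data re-identifiable. A minimal sketch with made-up records (none of the IDs or URLs below come from the leak): once a single URL carries an identity, every click from the same device ID is exposed with it.

```python
# Hypothetical click records; a persistent device ID ties every row together.
clicks = [
    {"device_id": "abc123", "ts": "2019-12-01 09:14",
     "url": "shop.example.com/account?user=jdoe"},
    {"device_id": "abc123", "ts": "2019-12-01 21:02",
     "url": "adult-site.example.com/watch/4921"},
    {"device_id": "def456", "ts": "2019-12-01 10:30",
     "url": "news.example.com/front-page"},
]

# One URL that embeds an identity (a username, an order number, a profile
# link) binds the device ID to a real person...
identified_ids = {c["device_id"] for c in clicks if "user=" in c["url"]}

# ...and every other click from that ID is de-anonymized along with it.
exposed = [c for c in clicks if c["device_id"] in identified_ids]
```

Because the ID only changes on a full uninstall/reinstall, the exposed history can stretch back months.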

Source: Leaked Documents Expose the Secretive Market for Your Web Browsing Data – VICE

Ring Doorbell App Gives Away your data to 3rd parties, without your knowledge or consent

An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.

The danger in sending even small bits of information is that analytics and tracking companies are able to combine these bits into a unique picture of the user’s device. This cohesive whole represents a fingerprint that follows the user as they interact with other apps and use their device, in essence giving trackers the ability to spy on what a user is doing in their digital life and when they are doing it. All this takes place without meaningful user notification or consent and, in most cases, with no way to mitigate the damage done. Even when this information is not misused and is employed precisely for its stated purpose (in most cases marketing), it can lead to a whole host of social ills.
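A rough sketch of how that combination works (illustrative Python, not any tracker’s actual code): hashing a handful of individually innocuous attributes yields an identifier that is stable for one device and near-unique across devices.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    # Canonical key ordering so the same device always hashes identically.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = fingerprint({"model": "Pixel 3", "screen": "1080x2160",
                        "tz": "America/New_York", "lang": "en-US"})
device_b = fingerprint({"model": "Pixel 3", "screen": "1080x2160",
                        "tz": "Europe/London", "lang": "en-GB"})
# Changing even one attribute yields a different fingerprint, while the
# same device produces the same value in every app that reports it.
```

None of these attributes is identifying on its own; it is the combination, reported consistently across apps, that acts like a tracking cookie the user never agreed to.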

[…]

Our testing, using Ring for Android version 3.21.1, revealed PII delivery to branch.io, mixpanel.com, appsflyer.com and facebook.com. Facebook, via its Graph API, is alerted when the app is opened and upon device actions such as app deactivation after screen lock due to inactivity. Information delivered to Facebook (even if you don’t have a Facebook account) includes time zone, device model, language preferences, screen resolution, and a unique identifier (anon_id), which persists even when you reset the OS-level advertiser ID.

Branch, which describes itself as a “deep linking” platform, receives a number of unique identifiers (device_fingerprint_id, hardware_id, identity_id) as well as your device’s local IP address, model, screen resolution, and DPI.

AppsFlyer, a big data company focused on the mobile platform, is given a wide array of information upon app launch as well as certain user actions, such as interacting with the “Neighbors” section of the app. This information includes your mobile carrier, when Ring was installed and first launched, a number of unique identifiers, the app you installed from, and whether AppsFlyer tracking came preinstalled on the device. This last bit of information is presumably to determine whether AppsFlyer tracking was included as bloatware on a low-end Android device. Manufacturers often offset the costs of device production by selling consumer data, a practice that disproportionately affects low-income earners and was the subject of a recent petition to Google initiated by Privacy International and co-signed by EFF.

Most alarmingly, AppsFlyer also receives a list of the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) along with their current calibration settings.

Ring gives MixPanel the most information by far. Users’ full names, email addresses, device information such as OS version and model, whether Bluetooth is enabled, and app settings such as the number of locations where a user has Ring devices installed are all collected and reported to MixPanel. MixPanel is briefly mentioned in Ring’s list of third-party services, but the extent of its data collection is not. None of the other trackers listed in this post are mentioned at all on that page.

Ring also sends information to Crashlytics, the Google-owned crash logging service. The exact extent of data sharing with this service is yet to be determined.

Source: Ring Doorbell App Packed with Third-Party Trackers | Electronic Frontier Foundation

Electric Vehicle Battery Degradation Graph with 6 years data

These guys have six years of battery data on a range of electric cars. Each model degrades differently, but on average you lose around 12% of your battery capacity over six years, roughly 2% per year. This means that if your car could originally drive, say, 523 km (Tesla Model X), after six years you can expect it to have a range of about 460 km. If the trend on the graph continues, after 12 years you’d have a 397 km range.
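The arithmetic behind those figures, assuming the roughly linear ~2%-of-original-capacity loss per year that the Geotab data suggests:

```python
def projected_range(initial_km: float, years: float,
                    loss_per_year: float = 0.02) -> float:
    """Linear degradation: each year removes a fixed share of the
    ORIGINAL capacity (not compounding on the remaining capacity)."""
    return initial_km * (1 - loss_per_year * years)

print(round(projected_range(523, 6)))   # ~460 km after 6 years (12% lost)
print(round(projected_range(523, 12)))  # ~397 km after 12 years, if the trend holds
```

Whether the loss really stays linear past six years is exactly the part the data can’t yet confirm; many batteries degrade fast early, then plateau.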

electric vehicle battery degradation

Source: Geotab – EV Battery Degradation

Class-action lawsuit filed against creepy Clearview AI startup which scraped everyone’s social media profiles

A lawsuit — seeking class-action status — was filed this week in Illinois against Clearview AI, a New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world.

The secretive startup was exposed last week in an explosive New York Times report which revealed how Clearview was selling access to “faceprints” and facial recognition software to law enforcement agencies across the US. The startup claimed it could identify a person based on a single photo, revealing their real name, general location, and other identifiers.

The report sparked outrage among US citizens, who had photos collected and added to the Clearview AI database without their consent. The Times reported that the company collected more than three billion photos, from sites such as Facebook, Twitter, YouTube, Venmo, and others.

This week, the company was hit with the first lawsuit in the aftermath of the New York Times exposé.

Lawsuit claims Clearview AI broke BIPA

According to a copy of the complaint obtained by ZDNet, plaintiffs claim Clearview AI broke Illinois privacy laws.

Namely, the New York startup broke the Illinois Biometric Information Privacy Act (BIPA), a law that safeguards state residents from having their biometrics data used without consent.

According to BIPA, companies must obtain explicit consent from Illinois residents before collecting or using any of their biometric information — such as the facial scans Clearview collected from people’s social media photos.

“Plaintiff and the Illinois Class retain a significant interest in ensuring that their biometric identifiers and information, which remain in Defendant Clearview’s possession, are protected from hacks and further unlawful sales and use,” the lawsuit reads.

“Plaintiff therefore seeks to remedy the harms Clearview and the individually-named defendants have already caused, to prevent further damage, and to eliminate the risks to citizens in Illinois and throughout the United States created by Clearview’s business misuse of millions of citizen’s biometric identifiers and information.”

The plaintiffs are asking the court for an injunction against Clearview to stop it from selling the biometric data of Illinois residents, a court order forcing the company to delete any Illinois residents’ data, and punitive damages, to be decided by the court at a later date.

“Defendants’ violation of BIPA was intentional or reckless or, pleaded in the alternative, negligent,” the complaint reads.

Clearview AI did not return a request for comment.

Earlier this week, US lawmakers also sought answers from the company, while Twitter sent a cease-and-desist letter demanding the startup stop collecting user photos from their site and delete any existing images.

Source: Class-action lawsuit filed against controversial Clearview AI startup | ZDNet

London Police Will Start Using Live Facial Recognition Tech Now, Big Brother becomes a computer watching you

The dystopian nightmare begins. Today, London’s Metropolitan Police Service announced it will begin deploying Live Facial Recognition (LFR) tech across the capital in the hopes of locating and arresting wanted people.

[…]

The way the system is supposed to work, according to the Metropolitan Police, is the LFR cameras will first be installed in areas where ‘intelligence’ suggests the agency is most likely to locate ‘serious offenders.’ Each deployment will supposedly have a ‘bespoke’ watch list comprising images of wanted suspects for serious and violent offenses. The London police also note the cameras will focus on small, targeted areas to scan folks passing by. According to BBC News, previous trials had taken place in areas such as Stratford’s Westfield shopping mall and the West End area of London. It seems likely the agency is also anticipating some unease, as the cameras will be ‘clearly signposted’ and officers are slated to hand out informational leaflets.

The agency’s statement also emphasizes that the facial recognition tech is not meant to replace policing—just ‘prompt’ officers by suggesting a person in the area may be a fishy individual…based solely on their face. “It is always the decision of an officer whether or not to engage with someone,” the statement reads. On Twitter, the agency also noted in a short video that images that don’t trigger alerts will be immediately deleted.

As with any police-related, Minority Report-esque tech, accuracy is a major concern. While the Metropolitan Police Service claims that 70 percent of suspects were successfully identified and that only one in 1,000 people triggered a false alert, not everyone agrees the LFR tech is rock-solid. An independent review from July 2019 found that in six of the trial deployments, only eight of 42 matches were correct, an abysmal 19 percent accuracy rate. Other problems found by the review included inaccurate watch list information (e.g., people were stopped over cases that had already been resolved) and unclear criteria for who got included on the watch list.

Privacy groups aren’t particularly happy with the development. Big Brother Watch, a privacy campaign group that’s been particularly vocal against facial recognition tech, took to Twitter, telling the Metropolitan Police Service they’d “see them in court.”

“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK,” said Silkie Carlo, Big Brother Watch’s director, in a statement. “This is a breath-taking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary.”

Meanwhile, another privacy group, Liberty, has also voiced resistance to the measure. “Rejected by democracies. Embraced by oppressive regimes. Rolling out facial recognition surveillance tech is a dangerous and sinister step in giving the State unprecedented power to track and monitor any one of us. No thanks,” the group tweeted.

Source: London Police Will Start Using Live Facial Recognition Tech

GE Fridges Won’t Dispense Ice Or Water Unless Their Rip-Off, Expensive Water Filter ‘Authenticates’ Via RFID Chip

Count GE in on the “screw your customers” bandwagon. Twitter user @ShaneMorris tweeted: “My fridge has an RFID chip in the water filter, which means the generic water filter I ordered for $19 doesn’t work. My fridge will literally not dispense ice, or water. I have to pay General Electric $55 for a water filter from them.” Fortunately, there appears to be a way to hack them to work: How to Hack RWPFE Water Filters for Your GE Fridge. Hacks aside, count me out of ever buying another GE product if it includes anti-customer “features” like these.

“The difference between RWPF and RPWFE is that the RPWFE has a freaking RFID chip on it,” writes Jack Busch from groovyPost. “The fridge reads the RFID chip off your filter, and if your filter is either older than 6 months or not a genuine GE RPWFE filter, it’s all ‘I’m sorry, Dave, I’m afraid I can’t dispense any water for you right now.’ Now, to be fair, GE does give you a bypass cartridge that lets you get unfiltered water for free (you didn’t throw that thing away, did you?). But come on…”

Jack proceeds to explain how you can pop off the filter bypass and “try taping the thing directly into your fridge where it would normally meet up when the filter is installed.” If you’re able to get it in just the right spot, “you’re set for life,” says Jack. Alternatively, “you can tape it onto the front of an expired RPWFE GE water filter, install it backward, and then keep using it (again, not recommended for too much longer than six months). Or, you can tape it to the corresponding spot on a generic filter and reinstall it.”
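As described, the fridge’s gating logic amounts to a simple check. The sketch below is a guess at its shape (the field names and the exact six-month cutoff are assumptions; GE’s firmware is not public):

```python
from datetime import date, timedelta
from typing import Optional

FILTER_LIFETIME = timedelta(days=183)  # roughly six months (assumed)

def will_dispense(tag: Optional[dict], today: date) -> bool:
    """True if the fridge would accept the filter and dispense water/ice."""
    if tag is None:                 # generic filter: no RFID chip to read
        return False
    if tag["model"] != "RPWFE":     # chip present but not a genuine RPWFE
        return False
    return today - tag["installed"] <= FILTER_LIFETIME  # not expired

# A $19 generic filter has no chip at all, so it fails before filtration
# quality is ever considered; a fresh genuine filter passes.
generic_ok = will_dispense(None, date(2020, 1, 25))
genuine_ok = will_dispense({"model": "RPWFE", "installed": date(2020, 1, 1)},
                           date(2020, 1, 25))
```

The taping hacks work precisely because the check only cares about what the reader sees: park a valid chip (from the bypass cartridge or an expired filter) in front of the reader and any filter in the slot is accepted.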

Source: GE Fridges Won’t Dispense Ice Or Water Unless Your Water Filter ‘Authenticates’ Via RFID Chip – Slashdot

Sonos CEO apologizes for confusion, says legacy products will work ‘as long as possible’ – however long that is

Sonos CEO Patrick Spence just published a statement on the company’s website to try to clear up an announcement made earlier this week: on Tuesday, Sonos announced that it will cease delivering software updates and new features to its oldest products in May. The company said those devices should continue functioning properly in the near term, but it wasn’t enough to prevent an uproar from longtime customers, with many blasting Sonos for what they perceive as planned obsolescence. That frustration is what Spence is responding to today. “We heard you,” is how Spence begins the letter to customers. “We did not get this right from the start.”

Spence apologizes for any confusion and reiterates that the so-called legacy products will “continue to work as they do today.” Legacy products include the original Sonos Play:5, Zone Players, and Connect / Connect:Amp devices manufactured between 2011 and 2015.

“Many of you have invested heavily in your Sonos systems, and we intend to honor that investment for as long as possible.” Similarly, Spence pledges that Sonos will deliver bug fixes and security patches to legacy products “for as long as possible” — without any hard timeline. Most interesting, he says “if we run into something core to the experience that can’t be addressed, we’ll work to offer an alternative solution and let you know about any changes you’ll see in your experience.”

The letter from Sonos’ CEO doesn’t retract anything that the company announced earlier this week; Spence is just trying to be as clear as possible about what’s happening come May. Sonos has insisted that these products, some of which are a decade old, have been taken to their technological limits.

Spence again confirms that Sonos is planning a way for customers to fork any legacy devices they might own off of their main Sonos system with more modern speakers. (Sonos architected its system so that all devices share the same software. Once one product is no longer eligible for updates, the whole setup stops receiving them. This workaround is designed to avoid that problem.)

Source: Sonos CEO apologizes for confusion, says legacy products will work ‘as long as possible’ – The Verge