The Linkielist

Linking ideas with the world

In ‘Sophisticated’ Incident, Dozens of U.N. Servers Hacked, Including Their Active Directory Server

An internal confidential document from the United Nations, leaked to The New Humanitarian and seen by The Associated Press, says that dozens of servers were “compromised” at offices in Geneva and Vienna.

Those include the U.N. human rights office, which has often been a lightning rod of criticism from autocratic governments for its calling-out of rights abuses.

One U.N. official told the AP that the hack, which was first detected over the summer, appeared “sophisticated” and that the extent of the damage remains unclear, especially in terms of personal, secret or compromising information that may have been stolen. The official, who spoke only on condition of anonymity to speak freely about the episode, said systems have since been reinforced.

The level of sophistication was so high that it was possible a state-backed actor might have been behind it, the official said.

There were conflicting accounts about the significance of the incursion.

“We were hacked,” said U.N. human rights office spokesman Rupert Colville. “We face daily attempts to get into our computer systems. This time, they managed, but it did not get very far. Nothing confidential was compromised.”

The breach, at least at the human rights office, appears to have been limited to the so-called Active Directory – including a staff list and details like e-mail addresses – but not access to passwords. No domain administrator’s account was compromised, officials said.

The United Nations headquarters in New York as well as the U.N.’s sprawling Palais des Nations compound in Geneva, its European headquarters, did not immediately respond to questions from the AP about the incident.

Sensitive information at the human rights office about possible war criminals in the Syrian conflict and perpetrators of Myanmar’s crackdown against Rohingya Muslims was not compromised, because it is held under extremely secure conditions, the official said.

The internal document from the U.N. Office of Information and Technology said 42 servers were “compromised” and another 25 were deemed “suspicious,” nearly all at the sprawling United Nations offices in Geneva and Vienna. Three of the “compromised” servers belonged to the Office of the High Commissioner for Human Rights, which is located across town from the main U.N. office in Geneva, and two were used by the U.N. Economic Commission for Europe.

Technicians at the United Nations office in Geneva, the world body’s European hub, on at least two occasions worked through weekends in recent months to isolate the local U.N. data center from the Internet, re-write passwords and ensure the systems were clean.

The hack comes amid rising concerns about computer or mobile phone vulnerabilities, both for large organizations like governments and the U.N. as well as for individuals and businesses.

Source: In ‘Sophisticated’ Incident, Dozens of U.N. Servers Hacked | Time

They are downplaying the importance of an Active Directory server – it contains all the users and their details, so it’s a pretty big deal.
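To see why a directory dump is a big deal, consider what even a “limited” export gives an attacker. The sketch below parses a tiny, invented LDIF-style export into a role-ranked target list; the attribute names mimic common directory fields, but the data and scenario are entirely hypothetical:

```python
# Illustrative only: why an Active Directory dump matters to an attacker.
# This is a made-up LDIF-style export; the field names mirror common
# directory attributes, but the data and scenario are hypothetical.

SAMPLE_EXPORT = """\
dn: CN=Alice Example,OU=Staff,DC=example,DC=org
mail: alice@example.org
title: Finance Officer

dn: CN=Bob Example,OU=Staff,DC=example,DC=org
mail: bob@example.org
title: IT Administrator
"""

def extract_targets(ldif_text):
    """Collect (email, title) pairs from an LDIF-style dump."""
    targets, current = [], {}
    for line in ldif_text.splitlines():
        if not line.strip():               # blank line ends an entry
            if "mail" in current:
                targets.append((current["mail"], current.get("title", "")))
            current = {}
            continue
        key, _, value = line.partition(": ")
        current[key] = value
    if "mail" in current:                  # flush the final entry
        targets.append((current["mail"], current.get("title", "")))
    return targets

# A staff list plus job titles is a ready-made, prioritized phishing list:
admins = [t for t in extract_targets(SAMPLE_EXPORT) if "Administrator" in t[1]]
```

Even without passwords, names, addresses, and roles are exactly what a spear-phishing campaign needs.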

Social media scraper Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies

A very questionable facial recognition tool being offered to law enforcement was recently exposed by Kashmir Hill for the New York Times. Clearview — created by a developer previously best known for an app that let people put Trump’s “hair” on their own photos — is being pitched to law enforcement agencies as a better AI solution for all their “who TF is this guy” problems.

Clearview doesn’t limit itself to law enforcement databases — ones (partially) filled with known criminals and arrestees. Instead of using known quantities, Clearview scrapes the internet for people’s photos. With the click of an app button, officers are connected to Clearview’s stash of 3 billion photos pulled from public feeds on Twitter, LinkedIn, and Facebook.

Most of the scrapees have already objected to being scraped. While this may violate terms of service, it’s not completely settled that scraping content from public feeds is actually illegal. However, peeved companies can attempt to shut off their firehoses, which is what Twitter is in the process of doing.

Clearview has made some bold statements about its effectiveness — statements that haven’t been independently confirmed. Clearview did not submit its software to NIST’s recent roundup of facial recognition AI, but it most likely would not have fared well. Even more established software performed poorly, misidentifying minorities almost 100 times more often than it did white males.

The company claims it finds matches 75% of the time. That doesn’t actually mean it finds the right person 75% of the time. It only means the software finds someone that matches submitted photos three-quarters of the time. Clearview has provided no stats on its false positive rate. That hasn’t stopped it from lying about its software and its use by law enforcement agencies.
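A toy base-rate calculation (all numbers below are hypothetical, not Clearview’s) shows why a match rate without a published false-positive rate is meaningless:

```python
# Toy base-rate arithmetic (hypothetical numbers) showing why "finds a
# match 75% of the time" says nothing about how often the match is the
# right person.

match_rate = 0.75            # software returns *someone* 75% of the time
false_match_share = 0.10     # suppose 1 in 10 of those hits is wrong

searches = 10_000
matches = searches * match_rate              # "hits" returned to officers
wrong_matches = matches * false_match_share  # innocent people flagged

# Without a published false-positive rate, police have no way to know
# whether that hypothetical 10% is closer to 1% -- or to 50%.
```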

A BuzzFeed report based on public records requests and conversations with the law enforcement agencies says the company’s sales pitches are about 75% bullshit.

Clearview AI, a facial recognition company that says it’s amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds. “How a Terrorism Suspect Was Instantly Identified With Clearview,” read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest.

Here’s what the NYPD had to say about Clearview’s claims in its marketing materials:

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

The NYPD also said it had no “institutional relationship” with Clearview, contradicting the company’s sales pitch insinuations. The NYPD was not alone in its rejection of Clearview’s claims.

Clearview also claimed to be instrumental in apprehending a suspect wanted for assault. In reality, the suspect turned himself in to the NYPD. The PD again pointed out Clearview played no role in this investigation. It also had nothing to do with solving a subway groping case (the tip that resulted in an arrest was provided to the NYPD by the Guardian Angels) or an alleged “40 cold cases solved” by the NYPD.

The company says it is “working with” over 600 police departments. But BuzzFeed’s investigation has uncovered at least two cases where “working with” simply meant submitting a lead to a PD tip line. Most likely, this is only the tip of the iceberg. As more requested documents roll in, there’s a very good chance this “working with” BS won’t just be a two-off.

Clearview’s background appears to be as shady as its public claims. In addition to its founder’s links to far right groups (first uncovered by Kashmir Hill), its founder pumped up the company’s reputation by deploying a bunch of sock puppets.

Ton-That set up fake LinkedIn profiles to run ads about Clearview, boasting that police officers could search over 1 billion faces in less than a second.

These are definitely not the ethics you want to see from a company pitching dubious facial recognition software to law enforcement agencies. Some agencies may perform enough due diligence to move forward with a more trustworthy company, but others will be impressed with the lower cost and the massive amount of photos in Clearview’s database and move forward with unproven software created by a company that appears to be willing to exaggerate its ability to help cops catch crooks.

If it can’t tell the truth about its contribution to law enforcement agencies, it’s probably not telling the truth about the software’s effectiveness. If cops buy into Clearview’s PR pitches, the collateral damage will be innocent people’s freedom.

Source: Facial Recognition Company Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies | Techdirt

MIDI 2.0 overhauls the music interface for the first time in 35 years

About 35 years after the MIDI 1.0 Detailed Specification was established, instrument manufacturers voted unanimously on January 18th to adopt the new MIDI 2.0 spec. So what’s changing for audio interfaces? The “biggest advance in music technology in decades” brings two-way communication, among many other new features, while remaining backwards compatible with the old spec.

Companies like Roland, Native Instruments, Korg and Yamaha are part of the MIDI Manufacturers Association behind the update, and we’ve already seen Roland’s A-88MKII keyboard that will be ready for the spec when it goes on sale in March.

And it’s about time for a new standard: while the 5-pin DIN cables used since the 1980s couldn’t handle high-resolution data, the MIDI 2.0 spec is ready for any digital connector you’d like to use, and will start by targeting USB ports. That allows for far more accurate timing and far more resolution, upgrading messages from 7-bit to as much as 32-bit.
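To get a feel for the resolution jump, here’s a rough sketch. The actual spec defines its own translation rules between 1.0 and 2.0 values, so the simple bit-shift below is only an illustration of scale:

```python
# Rough illustration of the MIDI 1.0 -> 2.0 resolution jump.
# (The real spec defines its own value-translation rules; this naive
# bit-shift is just to show the scale of the change.)

def upscale_7_to_32(value7):
    """Naively map a 7-bit controller value (0-127) into a 32-bit range."""
    if not 0 <= value7 <= 127:
        raise ValueError("MIDI 1.0 values are 7-bit (0-127)")
    return value7 << 25          # shift into the top 7 bits of a 32-bit word

steps_midi1 = 2 ** 7             # 128 discrete levels per controller
steps_midi2 = 2 ** 32            # 4,294,967,296 levels
```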

It should also make instruments easier to use, with profiles that will automatically set up gear for its intended use and a feature called Property Exchange that uses JSON (JavaScript Object Notation) to send over more detailed configuration info. You’ll spend less time shuffling through presets and more time simply making music, plus some of these features can be used even on older MIDI 1.0-spec hardware. As Reverb.com notes, there’s still room for improvement on things like networking multiple devices, but it represents a massive upgrade over the old standard, and will be useful for anyone trying to make a Grammy-winning album, whether it’s in their bedroom or a fully-kitted studio.
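As a sketch of what a JSON-based Property Exchange payload could look like — the field names below are invented for illustration; the real spec defines its own resource schemas:

```python
import json

# Hypothetical device-description payload of the kind MIDI 2.0's Property
# Exchange could carry. Field names are invented for illustration only.

device_info = {
    "manufacturer": "ExampleCorp",
    "model": "Example-88",
    "supportedProfiles": ["drawbar-organ", "mpe"],
    "channels": 16,
}

payload = json.dumps(device_info)    # what would go over the wire
restored = json.loads(payload)       # what the receiving device reconstructs
```

Because the payload is plain JSON, a DAW or controller can read a device’s capabilities directly instead of making the user dig through presets.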

Source: MIDI 2.0 overhauls the music interface for the first time in 35 years | Engadget

Mozilla moves to monetize Thunderbird, transfers project to new subsidiary

The Mozilla Foundation announced today that it was moving the Thunderbird email client to a new subsidiary named the MZLA Technologies Corporation.

Mozilla said that Thunderbird will continue to remain free and open source, but by moving the project away from its foundation into a corporate entity they will be able to monetize the product and pay for its development more easily than before.

Currently, Thunderbird is primarily being kept alive through charitable donations from the product’s userbase.

“Moving to MZLA Technologies Corporation will not only allow the Thunderbird project more flexibility and agility, but will also allow us to explore offering our users products and services that were not possible under the Mozilla Foundation,” said Philipp Kewisch, Mozilla Product Manager.

“The move will allow the project to collect revenue through partnerships and non-charitable donations, which in turn can be used to cover the costs of new products and services,” Kewisch added.

Source: Mozilla moves to monetize Thunderbird, transfers project to new subsidiary | ZDNet

Google to translate and transcribe conversations in real time

Google on Tuesday unveiled a feature that’ll let people use their phones to both transcribe and translate a conversation in real time into a language that isn’t being spoken. The tool will be available for the Google Translate app in the coming months, said Bryan Lin, an engineer on the Translate team.

Right now the feature is being tested in several languages, including Spanish, German and French. Lin said the computing will take place on Google’s servers and not on people’s devices.

Source: Google to translate and transcribe conversations in real time – CNET

Clearview AI Told Cops To “Run Wild” With Its Creepy Face Database – Access Given Away Without Checks and Sold to Private Firms Despite Claims Otherwise

Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites, is scrambling to deal with calls for bans from advocacy groups and legal threats. These troubles come after news reports exposed its questionable data practices and misleading statements about working with law enforcement.

Following stories published in the New York Times and BuzzFeed News, the Manhattan-based startup received cease-and-desist letters from Twitter and the New Jersey attorney general. It was also sued in Illinois in a case seeking class-action status.

Despite its legal woes, Clearview continues to contradict itself, according to documents obtained by BuzzFeed News that are inconsistent with what the company has told the public. In one example, the company, whose code of conduct states that law enforcement should only use its software for criminal investigations, encouraged officers to use it on their friends and family members.

“To have these technologies rolled out by police departments without civilian oversight really raises fundamental questions about democratic accountability,” Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News.

In the aftermath of revelations about its technology, Clearview has tried to clean up its image by posting informational webpages, creating a blog, and trotting out surrogates for media interviews, including one in which an investor claimed Clearview was working with “over a thousand independent law enforcement agencies.” Previously, Clearview had stated that the number was around 600.

Clearview has also tried to allay concerns that its technology could be abused or used outside the scope of police investigations. In a code of conduct that the company published on its site earlier this month, it said its users should “only use the Services for law enforcement or security purposes that are authorized by their employer and conducted pursuant to their employment.”

It bolstered that idea with a blog post on Jan. 23, which stated, “While many people have advised us that a public version would be more profitable, we have rejected the idea.”

“Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only,” the post stated.

But in a November email to a police lieutenant in Green Bay, Wisconsin, a company representative encouraged a police officer to use the software on himself and his acquaintances.

“Have you tried taking a selfie with Clearview yet?” the email read. “It’s the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney.

“Your Clearview account has unlimited searches. So feel free to run wild with your searches,” the email continued. The city of Green Bay would later agree to a $3,000 license with Clearview.

An email from Clearview to an officer in Green Bay, Wisconsin, from November 2019. (Obtained by BuzzFeed News)

Hoan Ton-That, the CEO of Clearview, claimed in an email that the company has safeguards on its product.

“As as [sic] safeguard we have an administrative tool for Law Enforcement supervisors and administrators to monitor the searches of a particular department,” Ton-That said. “An administrator can revoke access to an account at any time for any inappropriate use.”

Clearview’s previous correspondence with Green Bay police appeared to contradict what Ton-That told BuzzFeed News. In emails obtained by BuzzFeed News, the company told officers that searches “are always private and never stored in our proprietary database, which is totally separate from the photos you search.”

“So feel free to run wild with your searches.”

“It’s certainly inconsistent to, on the one hand, claim that this is a law enforcement tool and that there are safeguards — and then to, on the other hand, recommend it being used on friends and family,” Clare Garvie, a senior associate at the Georgetown Law’s Center on Privacy and Technology, told BuzzFeed News.

Clearview has also previously instructed police to act in direct violation of the company’s code of conduct, which was outlined in a blog post on Monday. The post stated that law enforcement agencies were “required” to receive permission from a supervisor before creating accounts.

But in a September email sent to police in Green Bay, the company said there was an “Invite User” button in the Clearview app that can be used to give any officer access to the software. The email encouraged police officers to invite as many people as possible, noting that Clearview would give them a demo account “immediately.”

“Feel free to refer as many officers and investigators as you want,” the email said. “No limits. The more people searching, the more successes.”

“Rewarding loyal customers”

Despite its claim last week that it “exists to help law enforcement agencies,” Clearview has also been working with entities outside of law enforcement. Ton-That told BuzzFeed News on Jan. 23 that Clearview was working with “a handful of private companies who use it for security purposes.” Marketing emails from late last year obtained by BuzzFeed News via a public records request showed the startup aided a Georgia-based bank in a case involving the cashing of fraudulent checks.

Earlier this year, a company representative was slated to speak at a Las Vegas gambling conference about casinos’ use of facial recognition as a way of “rewarding loyal customers and enforcing necessary bans.” Initially, Jessica Medeiros Garrison, whose title was stated on the conference website as Clearview’s vice president of public affairs, was listed on a panel that included the head of surveillance for Las Vegas’ Cosmopolitan hotel. Later versions of the conference schedule and Garrison’s bio removed all mentions of Clearview AI. It is unclear if she actually appeared on the panel.

A company spokesperson said Garrison is “a valued member of the Clearview team” but declined to answer questions on any possible work with casinos.

Cease and desist

Clearview has also faced legal threats from private and government entities. Last week, Twitter sent the company a cease-and-desist letter, noting that its claim to have collected photos from its site was in violation of the social network’s terms of service.

“This type of use (scraping Twitter for people’s images/likeness) is not allowed,” a company spokesperson told BuzzFeed News. The company, which asked Clearview to cease scraping and delete all data collected from Twitter, pointed BuzzFeed News to a part of its developer policy, which states it does not allow its data to be used for facial recognition.

On Friday, Clearview received a similar note from the New Jersey attorney general, who called on state law enforcement agencies to stop using the software. The letter also told Clearview to stop using clips of New Jersey Attorney General Gurbir Grewal in a promotional video on its site that claimed that a New Jersey police department used the software in a child predator sting late last year.

[…]

Clearview declined to provide a list of law enforcement agencies that were on free trials or paid contracts, saying only that the number was more than 600.

“We do not have to be hidden”

That number is lower than what one of Clearview’s investors bragged about on Saturday. David Scalzo, an early investor in Clearview through his firm, Kirenaga Partners, claimed in an interview with Dilbert creator and podcaster Scott Adams that “over a thousand independent law enforcement agencies” were using the software. The investor went on to contradict the company’s public statement that it would not make its tool available to the public, stating “it is inevitable that this digital information will be out there” and “the best thing we can do is get this technology out to everyone.”

[…]

EPIC’s letter came after an Illinois resident sued Clearview in a state district court last Wednesday, alleging the software violated the Illinois Biometric Information Privacy Act by collecting the “identifiers and information” — like facial data gathered from photos accumulated from social media — without permission. Under the law, private companies are not allowed to “collect, capture, purchase,” or receive biometric information about a person without their consent.

The complaint, which also alleged that Clearview violated the constitutional rights of all Americans, asked for class-action recognition on behalf of all US citizens, as well as all Illinois residents whose biometric information was collected. When asked, Ton-That did not comment on the lawsuit.

In legal documents given to police, obtained by BuzzFeed News through a public records request, Clearview argued that it was not subject to states’ biometric data laws including those in Illinois. In a memo to the Atlanta Police Department, a lawyer for Clearview argued that because the company’s clients are public agencies, the use of the startup’s technology could not be regulated by state law, which only governs private entities.

Cahn, the executive director of the Surveillance Technology Oversight Project, said that it was “problematic” for Clearview AI to argue it wasn’t beholden to state biometric laws.

“Those laws regulate the commercial use of these sorts of tools, and the idea that somehow this isn’t a commercial application, simply because the customer is the government, makes no sense,” he said. “This is a company with private funders that will be profiting from the use of our information.”

Under the attention, Clearview added explanations to its site to deal with privacy concerns. It added an email link for people to ask questions about its privacy policy, saying that all requests will go to its data protection officer. When asked by BuzzFeed News, the company declined to name that official.

To process a request, however, Clearview is requesting more personal information: “Please submit name, a headshot and a photo of a government-issued ID to facilitate the processing of your request.” The company declined to say how it would use that information.

Source: Clearview AI Once Told Cops To “Run Wild” With Its Facial Recognition Tool

Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it. Only FF and Brave will give you some.

At the USENIX Enigma conference on Tuesday, representatives of four browser makers, Brave, Google, Microsoft, and Mozilla, gathered to banter about their respective approaches to online privacy, while urging people not to ask for too much of it.

Apple, which has advanced browser privacy standards but was recently informed that its tracking defenses can themselves be used for, er, tracking, was conspicuously absent, though it had a tongue-tied representative recruiting for privacy-oriented job positions at the show.

The browser-focused back-and-forth was mostly cordial as the software engineers representing their companies discussed notable privacy features in the various web browsers they worked on. They stressed the benefit of collaboration on web standards and the mutually beneficial effects of competition.

Eric Lawrence, program manager on the Microsoft Edge team, touched on how Microsoft has just jettisoned 25 years of Internet Explorer code to replatform Edge on the open source Chromium project, now the common foundation for 20 or so browsers.

Beside a slide that declared “Microsoft loves the Web,” Lawrence made the case for the new Edge as a modern browser with some well-designed privacy features, including Microsoft’s take on tracking protection, which blocks most trackers in its default setting and can be made more strict, at the potential cost of site compatibility.

A slide at Enigma 2020 saying “Microsoft loves the Web”.

Edge comes across as a reliable alternative to Chrome and should become more distinct as it evolves. It occupies a difficult space on the privacy continuum, in that it has some nice privacy features but not as many as Brave or Firefox. But Edge may find fans on the strength of the Microsoft brand since, as Lawrence emphasized, Microsoft is not new to privacy concerns.

That said, Microsoft is not far from Google in advocating not biting the hand that feeds the web ecosystem – advertising.

“The web doesn’t exist in a vacuum,” Lawrence warned. “People who are building sites and services have choices for what platforms they target. They can build a mobile application. They can take their content off the open web and put it into a walled garden. And so if we do things with privacy that hurt the open web, we could end up pushing people to less privacy for certain ecosystems.”

Lawrence pointed to a recent report about a popular Android app found to be leaking data. It took time to figure that out, he said, because mobile platforms are less transparent than the web, where it’s easier to scour source code and analyze network behavior.

Justin Schuh, engineering director on Google Chrome for trust and safety, reprised an argument he’s made previously that too much privacy would be harmful to ad-supported businesses.

“Most of the media that we consume is actually funded by advertising today,” Schuh explained. “It has been for a very long time. Now, I’m not here to make the argument that advertising is the best or only way to fund these things. But the truth is that print, radio, and TV – all these are funded primarily through advertising.”

And so too is the web, he insisted, arguing that advertising is what has made so much online content available to people who otherwise wouldn’t have access to it across the globe.

Schuh said in the context of the web, two trends concern him. One, he claimed, is that content is leaving because it’s easier to monetize in apps – but he didn’t cite a basis for that assertion.

The other is the rise of covert tracking, which arose, as Schuh tells it, because advertisers wanted to track people across multiple devices. So they turned to looking at IP-based fingerprinting and metadata tracking, and the joining of data sets to identify people as they shift between phone, computer, and tablet.

Covert tracking also became more popular, he said, because advertisers wanted to bypass anti-tracking mechanisms. Thus, we have privacy-invading practices like CNAME cloaking, site fingerprinting, hostname rotation, and the like because browser users sought privacy.

Schuh made the case for Google’s Privacy Sandbox proposal, a set of controversial specs being developed ostensibly to enhance privacy by reducing data available for tracking and browser fingerprinting while also giving advertisers the ability to target ads.

“Broadly speaking, advertisers don’t actually need your data,” said Schuh. “All that they really want is to monetize efficiently.”

But given the willingness of advertisers to circumvent user privacy choices, the ad industry’s consistent failure to police bad behavior, and the persistence of ad fraud and malicious ads, it’s difficult to accept that advertisers can be trusted to behave.

Tanvi Vyas, principal engineer at Mozilla, focused on the consequences of the current web ecosystem, where data is gathered to target and manipulate people. She reeled off a list of social harms arising from the status quo.

“Democracies are compromised and elections around the world are being tampered with,” she said. “Populations are manipulated and micro-targeted. Fake news is delivered to just the right audience at the right time. Discrimination flourishes, and emotional harm is inflicted on specific individuals when our algorithms go wrong.”

Thanks, Facebook, Google, and Twitter.

Worse still, Vyas said, the hostile ecosystem has a chilling effect on sophisticated users who understand online tracking and prevents them from taking action. “At Mozilla, we think this is an unacceptable cost for society to pay,” she said.

Vyas described various pro-privacy technologies implemented in Firefox, including Facebook Container, which sandboxes Facebook trackers so they can’t track users on third-party websites. She also argued for legislation to improve online privacy, though Lawrence from his days working on Internet Explorer recalled how privacy rules tied to a privacy scheme known as P3P two decades ago had proved ineffective.

Speaking for Brave, CISO Yan Zhu argued a slightly different approach, though it still involves engaging with the ad industry to some extent.

“The main goal of Brave is we want to repair the privacy problems in the existing ad ecosystem in a way that no other browser has really tried, while giving publishers a revenue stream,” she said. “Basically, we have options to set micropayments to publishers, and also an option to see privacy preserving ads.”

Micropayments have been tried before but they’ve largely failed, assuming you don’t consider in-app payments to be micropayments.

Faced with a plea from an attendee for more of the browser makers to support micropayments instead of relying on ads, Schuh said, “I would absolutely love to see micropayments succeed. I know there have been a bunch of efforts at Google and various other companies to do it. It turns out that the payment industry itself is really, really complicated. And there are players in there that expect a fairly large cut. And so long as that exists, I don’t know if there’s a path forward.”

It now falls to Brave to prove otherwise.

Shortly thereafter, Gabriel DeWitt, VP of product at global ad marketplace Index Exchange, took a turn at the audience mic, introducing himself and then lightheartedly asking other attendees not to throw anything at him.

Insisting that his company also cares about user privacy, despite opinions to the contrary, he asked the panelists how he could better collaborate with them.

It’s worth noting that next week, when Chrome 80 debuts, Google intends to introduce changes in the way it handles cookies that will affect advertisers. What’s more, the company has said it plans to phase out cookies entirely in a few years.
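The Chrome 80 change in question is the SameSite cookie rollout: cookies that declare no SameSite attribute are treated as SameSite=Lax, and cross-site cookies must explicitly declare SameSite=None and be marked Secure or Chrome will reject them. Two illustrative Set-Cookie values (names and values made up):

```python
# Illustrative Set-Cookie header values under Chrome 80's SameSite rollout.
# Cookie names and values here are invented for the example.

# A first-party session cookie: Lax is now the default behavior anyway,
# but stating it explicitly avoids surprises.
first_party_cookie = "session=abc123; SameSite=Lax; Secure; HttpOnly"

# A third-party (e.g. ad/tracking) cookie must now opt in explicitly --
# and Chrome 80 drops it entirely if it is not also marked Secure.
cross_site_cookie = "adid=xyz789; SameSite=None; Secure"
```

This is the mechanism that “will affect advertisers”: any tracker that never updated its cookies to `SameSite=None; Secure` simply stops working cross-site.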

Schuh, from Google, elicited a laugh when he said, “I guess I can take this one, because that’s what everyone is expecting.”

We were expecting privacy. We got surveillance capitalism instead.

Source: Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it • The Register

Ubiquiti says UniFi routers will beam performance data back to the mothership automatically, with no consent and no opt-out.

Ubiquiti Networks is once again under fire, this time for quietly rewriting its telemetry policy after changing how its UniFi routers collect data without telling anyone.

The changes were identified in a new help document published on the US manufacturer’s website. The document differentiates between “personal data”, which includes everything that identifies a specific individual, and “other data”, which is everything else.

The document says that while users can continue to opt out of having their “personal data” collected, their “other data” – anonymous performance and crash information – will be “automatically reported”. In other words, you ain’t got no choice.

This is a shift from Ubiquiti’s last statement on data collection three months ago, which promised an opt-out button for all data collection in upcoming versions of its firmware.

A Ubiquiti representative confirmed in a forum post that the changes will automatically affect all firmware beyond 4.1.0, and that users can stop “other data” being collected by manually editing the software’s config file.
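As a sketch of what that manual edit might look like — the property key below is a hypothetical placeholder, not Ubiquiti’s actual setting, so check the forum post for the real key before touching anything:

```python
# Sketch of the manual opt-out the Ubiquiti rep describes: appending an
# override to a properties-style config file. The key name used here
# ("system.analytics.enabled") is a HYPOTHETICAL placeholder -- the real
# setting must be taken from Ubiquiti's own documentation/forum post.

from pathlib import Path

def disable_telemetry(config_path, key="system.analytics.enabled"):
    """Append key=false to a properties-style config if it isn't already set."""
    path = Path(config_path)
    text = path.read_text() if path.exists() else ""
    if key in text:
        return False                 # already configured; leave it alone
    with path.open("a") as fh:
        fh.write(f"\n{key}=false\n")
    return True
```

Note that per the user report further down, even this may not fully stop the firmware from phoning home.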

“Yes, it should be updated when we go to public release, it’s on our radar,” the rep wrote. “But I can’t guarantee it will be updated in time.”
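For what it's worth, a workaround like the one the rep describes might look like the following sketch. The article does not name the actual file or setting, so `config.properties` and the key `unifi.analytics.enabled` here are hypothetical placeholders only:

```python
# Hypothetical sketch of the manual config edit the Ubiquiti rep describes.
# The file name "config.properties" and the key "unifi.analytics.enabled"
# are illustrative placeholders; the article names neither.
from pathlib import Path

def disable_analytics(config_path, key="unifi.analytics.enabled"):
    path = Path(config_path)
    lines = path.read_text().splitlines() if path.exists() else []
    # Drop any existing assignment for the key, then pin it to false.
    lines = [line for line in lines if not line.startswith(key + "=")]
    lines.append(f"{key}=false")
    path.write_text("\n".join(lines) + "\n")

disable_analytics("config.properties")
```

Of course, as one user's report below suggests, a config-file edit only helps if the firmware actually honours it.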

The drama unfolded when netizens grabbed their pitchforks and headed for the company’s forums to air their grievances. “Come on UBNT,” said user leonardogyn. “PLEASE do not insist on making it hard (or impossible) to fully and easily disable sending of Analytics data. I understand it’s a great tool for you, but PLEASE consider that’s [sic] ultimately us, the users, that *must* have the option to choose to participate on it.”

The same user also pointed out that, even when the “Analytics” opt-out button is selected in the 5.13.9 beta controller software, Ubiquiti is still collecting some data. The person called the opt-out option “a misleading one, not to say a complete lie”.

Other users were similarly outraged. “This was pretty much the straw that broke the camel’s back, to be honest,” said elcid89. “I only use Unifi here at the house, but between the ongoing development instability, frenetic product range, and lack of responsiveness from staff, I’ve been considering junking it for a while now. This made the decision for me – switching over to Cisco.”

One user said that the firmware was still sending their data to two addresses even after they modified the config file.

Source: You spoke, we didn’t listen: Ubiquiti says UniFi routers will beam performance data back to mothership automatically • The Register

New NZXT Liquid CPU Cooler Plays Animated GIFs, Because Awesome!

PC hardware maker NZXT has just announced the latest additions to its line of liquid CPU coolers, the Kraken X-3 and Z-3. The X-3 has a bright LED ring and rotates so the logo can be repositioned. The Z-3 comes with a 2.36-inch, 24-bit color LCD screen capable of displaying images, computer data, or animated GIFs, because maybe that is a thing people want.

The animated GIF of the CPU cooler displaying animated GIFs atop this post? With the Kraken Z-3 installed on my PC, I could display that GIF of a CPU cooler displaying GIFs as a GIF on my CPU cooler. I could put some anime there. Or maybe some looping pornography. Then I would turn my computer to the side with the glass window facing away from me and never see it again. I need a better way to display the glowing and flashing things inside of my PC. Maybe a mirror or something.

I’ve found NZXT liquid cooling quite reliable in the past. The idea of that reliability combined with this frivolity tickles me to no end. Look, they’ve even made a little trailer showing it off.

The Kraken X-3 and Z-3 are available for purchase in the U.S. starting today. The X-3 is available in 240mm, 280mm, and 360mm sizes for $130, $150, and $180. The Z-3, AKA the one with the GIFs, costs $250 for the 280mm and $280 for the 360mm size. That means the ability to have an animated GIF on your CPU cooler costs $100.


Worth it.

Source: New Liquid CPU Cooler Plays Animated GIFs, Because Why Not

Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool, which won’t stop any tracking whatsoever

In a blog post earlier today, the famously privacy-conscious Mark Zuckerberg announced—in honor of Data Privacy Day, which is apparently a thing—the official rollout of a long-awaited Off-Facebook Activity tool that allows Facebook users to monitor and manage the connections between Facebook profiles and their off-platform activity.

“To help shed more light on these practices that are common yet not always well understood, today we’re introducing a new way to view and control your off-Facebook activity,” Zuckerberg said in the post. “Off-Facebook Activity lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to.”

Zuck’s use of the phrases “control your off-Facebook activity” and “clear this information from your account” is kinda misleading—you’re not really controlling or clearing much of anything. By using this tool, you’re just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop shop to opt out of any ties between the sites and services you peruse daily that have some sort of Facebook software installed and your on-platform activity on Facebook or Instagram.

The only thing you’re clearing is a connection Facebook made between its data and the data it gets from third parties, not the data itself.


As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn’t’ve been surprised that Facebook hoovered data from more than 520 partners across the internet—either sites I’d visited or apps I’d downloaded. For Gizmodo alone, Facebook tracked “252 interactions” drawn from the handful of plug-ins our blog has installed. (To be clear, you’re going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e.—not just on our site.)

These plug-ins—or “business tools,” as Facebook describes them—are the pipeline that the company uses to ascertain your off-platform activity and tie it to your on-platform identity. As Facebook describes it:

– Jane buys a pair of shoes from an online clothing and shoe store.

– The store shares Jane’s activity with us using our business tools.

– We receive Jane’s off-Facebook activity and we save it with her Facebook account. The activity is saved as “visited the Clothes and Shoes website” and “made a purchase”.

– Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store.

Here’s the catch, though: When I hit the handy “clear history” button that Facebook now provides, it won’t do jack shit to stop a given shoe store from sharing my data with Facebook—which explicitly laid this out for me when I hit that button:

Your activity history will be disconnected from your account. We’ll continue to receive your activity from the businesses and organizations you visit in the future.

Yes, it’s confusing. Baffling, really. But basically, Facebook has profiles on users and non-users alike. Those of you who have Facebook profiles can use the new tool to disconnect your Facebook data from the data the company receives from third parties. Facebook will still have that third-party-collected data and it will continue to collect more data, but that bucket of data won’t be connected to your Facebook identity.


The data third parties collect about you technically isn’t Facebook’s responsibility, to begin with. If I buy a pair of new sneakers from Steve Madden, where that purchase or browsing data goes is ultimately in Steve Madden’s metaphorical hands. And thanks to the wonders of targeted advertising, even the sneakers I’m purchasing in-store aren’t safe from being added as a data point that can be tied to the collective profile Facebook’s gathered on me as a consumer. Naturally, it behooves whoever runs marketing at Steve Madden—or anywhere, really—to plug in as many of those data points as they possibly can.

For the record, I also tried toggling my off-Facebook activity to keep it from being linked to my account, but was told that, while the company would still be getting this information from third parties, it would just be “disconnected from [my] account.”

Put another way: The way I browse any number of sites and apps will ultimately still make its way to Facebook, and still be used for targeted advertising across… those sites and apps. Only now, my on-Facebook life—the cat groups I join, the statuses I comment on, the concerts I’m “interested” in (but never actually attend)—won’t be a part of that profile.

Or put another way: Facebook just announced that it still has its tentacles in every part of your life in a way that’s impossible to untangle yourself from. Now, it just doesn’t need the social network to do it.

Source: Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool

Google releases new dataset search

You can now filter the results based on the types of dataset that you want (e.g., tables, images, text), or whether the dataset is available for free from the provider. If a dataset is about a geographic area, you can see the map. Plus, the product is now available on mobile and we’ve significantly improved the quality of dataset descriptions. One thing hasn’t changed however: anybody who publishes data can make their datasets discoverable in Dataset Search by using an open standard (schema.org) to describe the properties of their dataset on their own web page.
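The schema.org route mentioned above amounts to embedding a JSON-LD description of the dataset in the landing page's HTML, which Dataset Search then crawls. A minimal sketch (all field values below are invented examples):

```python
# Build a schema.org Dataset description as JSON-LD, the open-standard
# markup Google Dataset Search crawls. All values here are invented.
import json

dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example city air-quality readings",
    "description": "Hourly PM2.5 readings, 2015-2019.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "isAccessibleForFree": True,   # lets the new free/paid filter work
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",   # feeds the dataset-type filter
        "contentUrl": "https://example.org/air-quality.csv",
    }],
}

# The block publishers embed in their page's <head> or <body>:
markup = f'<script type="application/ld+json">{json.dumps(dataset)}</script>'
print(markup)
```

Properties like `encodingFormat` and `isAccessibleForFree` are what make the new type and free/paid filters possible.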

Source: Discovering millions of datasets on the web

Find it here

Leaked AVAST Documents Expose the Secretive Market for Your Web Browsing Data: Google, MS, Pepsi, they all buy it – Really, uninstall it now!

An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show the sale of this data is both highly sensitive and is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it.

The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.

Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.

The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies’ LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.
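To see why “anonymized” data with persistent device IDs and precise timestamps is so dangerous, consider this toy sketch: a single timestamp shared with any identified dataset links the entire click history to a name. All records below are invented illustrations, not Jumpshot data:

```python
# Toy re-identification sketch: a persistent device ID plus one timestamp
# that also appears in an identified dataset de-anonymizes every click.

anonymous_clicks = [
    {"device_id": "abc123", "time": "2019-12-01T14:03:22", "url": "pornhub.com/..."},
    {"device_id": "abc123", "time": "2019-12-01T14:10:05", "url": "shoestore.example/checkout"},
]

# e.g. a retailer's own order log, which records the same visit time
identified_orders = [
    {"name": "Jane Doe", "time": "2019-12-01T14:10:05"},
]

times = {order["time"]: order["name"] for order in identified_orders}
linked = {click["device_id"]: times[click["time"]]
          for click in anonymous_clicks if click["time"] in times}
print(linked)  # → {'abc123': 'Jane Doe'}
```

Once one click is matched, every other record carrying the same device ID belongs to that person too, which is why stripping names alone does little.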

[…]

Until recently, Avast was collecting the browsing data of its customers who had installed the company’s browser plugin, which is designed to warn users of suspicious websites. Security researcher and AdBlock Plus creator Wladimir Palant published a blog post in October showing that Avast harvested user data with that plugin. Shortly after, browser makers Mozilla, Opera, and Google removed Avast’s and subsidiary AVG’s extensions from their respective browser extension stores. Avast had previously explained this data collection and sharing in a blog and forum post in 2015. Avast has since stopped sending browsing data collected by these extensions to Jumpshot, Avast said in a statement to Motherboard and PCMag.

[…]

However, the data collection is ongoing, the source and documents indicate. Instead of harvesting information through software attached to the browser, Avast is doing it through the anti-virus software itself. Last week, months after it was spotted using its browser extensions to send data to Jumpshot, Avast began asking its existing free antivirus consumers to opt-in to data collection, according to an internal document.

“If they opt-in, that device becomes part of the Jumpshot Panel and all browser-based internet activity will be reported to Jumpshot,” an internal product handbook reads. “What URLs did these devices visit, in what order and when?” it adds, summarising what questions the product may be able to answer.

Senator Ron Wyden, who in December asked Avast why it was selling users’ browsing data, said in a statement, “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”

[…]

On its website and in press releases, Jumpshot names Pepsi, and consulting giants Bain & Company and McKinsey as clients.

As well as Expedia, Intuit, and L’Oréal, other companies which are not already mentioned in public Jumpshot announcements include coffee company Keurig, YouTube promotion service vidIQ, and consumer insights firm Hitwise. None of those companies responded to a request for comment.

On its website, Jumpshot lists some previous case studies for using its browsing data. Magazine and digital media giant Condé Nast, for example, used Jumpshot’s products to see whether the media company’s advertisements resulted in more purchases on Amazon and elsewhere. Condé Nast did not respond to a request for comment.

ALL THE CLICKS

Jumpshot sells a variety of different products based on data collected by Avast’s antivirus software installed on users’ computers. Clients in the institutional finance sector often buy a feed of the top 10,000 domains that Avast users are visiting to try and spot trends, the product handbook reads.

Another Jumpshot product is the company’s so-called “All Click Feed.” It allows a client to buy information on all of the clicks Jumpshot has seen on a particular domain, like Amazon.com, Walmart.com, Target.com, BestBuy.com, or Ebay.com.

In a tweet sent last month intended to entice new clients, Jumpshot noted that it collects “Every search. Every click. Every buy. On every site” [emphasis Jumpshot’s.]

[…]

One company that purchased the All Clicks Feed is New York-based marketing firm Omnicom Media Group, according to a copy of its contract with Jumpshot. Omnicom paid Jumpshot $2,075,000 for access to data in 2019, the contract shows. It also included another product called “Insight Feed” for 20 different domains. The fee for data in 2020 and then 2021 is listed as $2,225,000 and $2,275,000 respectively, the document adds.

[…]

The internal product handbook says that device IDs do not change for each user, “unless a user completely uninstalls and reinstalls the security software.”

Source: Leaked Documents Expose the Secretive Market for Your Web Browsing Data – VICE

Ring Doorbell App Gives Away your data to 3rd parties, without your knowledge or consent

An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.

The danger in sending even small bits of information is that analytics and tracking companies are able to combine these bits together to form a unique picture of the user’s device. This cohesive whole represents a fingerprint that follows the user as they interact with other apps and use their device, in essence providing trackers the ability to spy on what a user is doing in their digital lives and when they are doing it. All this takes place without meaningful user notification or consent and, in most cases, no way to mitigate the damage done. Even when this information is not misused and employed for precisely its stated purpose (in most cases marketing), this can lead to a whole host of social ills.
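The fingerprinting described above works roughly like this sketch: individually innocuous attributes, combined and hashed, yield an identifier that is stable across every app collecting the same bits. The values are invented and the hashing scheme is illustrative, not what any of these trackers actually uses:

```python
# Illustrative device-fingerprinting sketch: weak signals combine into
# a stable cross-app identifier. Values and scheme are invented.
import hashlib

def fingerprint(device):
    # Concatenate attributes in a stable order, then hash.
    parts = "|".join(f"{k}={v}" for k, v in sorted(device.items()))
    return hashlib.sha256(parts.encode()).hexdigest()[:16]

device = {
    "model": "Pixel 3",
    "carrier": "T-Mobile",
    "timezone": "America/New_York",
    "screen": "1080x2160",
}

# The same device yields the same ID in every app that collects these
# bits, so a tracker can follow it across apps without ever receiving
# a single "personal" identifier like a name or email address.
print(fingerprint(device))
```

This is also why resetting an OS-level advertiser ID does little against trackers that derive their own identifiers from hardware attributes.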

[…]

Our testing, using Ring for Android version 3.21.1, revealed PII delivery to branch.io, mixpanel.com, appsflyer.com and facebook.com. Facebook, via its Graph API, is alerted when the app is opened and upon device actions such as app deactivation after screen lock due to inactivity. Information delivered to Facebook (even if you don’t have a Facebook account) includes time zone, device model, language preferences, screen resolution, and a unique identifier (anon_id), which persists even when you reset the OS-level advertiser ID.

Branch, which describes itself as a “deep linking” platform, receives a number of unique identifiers (device_fingerprint_id, hardware_id, identity_id) as well as your device’s local IP address, model, screen resolution, and DPI.

AppsFlyer, a big data company focused on the mobile platform, is given a wide array of information upon app launch as well as certain user actions, such as interacting with the “Neighbors” section of the app. This information includes your mobile carrier, when Ring was installed and first launched, a number of unique identifiers, the app you installed from, and whether AppsFlyer tracking came preinstalled on the device. This last bit of information is presumably to determine whether AppsFlyer tracking was included as bloatware on a low-end Android device. Manufacturers often offset the costs of device production by selling consumer data, a practice that disproportionately affects low-income earners and was the subject of a recent petition to Google initiated by Privacy International and co-signed by EFF.

Most alarmingly, AppsFlyer also receives a list of the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) and their current calibration settings.

Ring gives MixPanel the most information by far. Users’ full names, email addresses, device information such as OS version and model, whether bluetooth is enabled, and app settings such as the number of locations a user has Ring devices installed in, are all collected and reported to MixPanel. MixPanel is briefly mentioned in Ring’s list of third party services, but the extent of their data collection is not. None of the other trackers listed in this post are mentioned at all on this page.

Ring also sends information to the Google-owned crash logging service Crashlytics. The exact extent of data sharing with this service is yet to be determined.

Source: Ring Doorbell App Packed with Third-Party Trackers | Electronic Frontier Foundation

Electric Vehicle Battery Degradation Graph with 6 years data

These guys have 6 years of battery data on a range of electric cars. Each model is different in terms of degradation, but it seems that over six years time you lose around 12% of your battery capacity. This means that if your car was able to drive, say 523 km (Tesla Model X), after 6 years you can expect it to have a range of 460km. So long as the graph continues, after 12 years you have a 397km range.
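The arithmetic above, assuming the loss really does stay linear at roughly 2% of original capacity per year, can be sketched as:

```python
# Linear battery-degradation sketch using the figures quoted above:
# ~12% capacity loss over 6 years (~2% of original capacity per year),
# applied to the 523 km Tesla Model X range.

def projected_range(initial_km, years, loss_per_year=0.12 / 6):
    """Remaining range after `years`, assuming linear capacity loss."""
    return initial_km * (1 - loss_per_year * years)

print(round(projected_range(523, 6)))   # → 460 km after 6 years
print(round(projected_range(523, 12)))  # → 397 km after 12 years
```

Note that extrapolating a linear fit to 12 years is exactly the "so long as the graph continues" caveat: real degradation curves may flatten or steepen beyond the measured window.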

electric vehicle battery degradation

Source: Geotab – EV Battery Degradation

Class-action lawsuit filed against creepy Clearview AI startup which scraped everyone’s social media profiles

A lawsuit — seeking class-action status — was filed this week in Illinois against Clearview AI, a New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world.

The secretive startup was exposed last week in an explosive New York Times report which revealed how Clearview was selling access to “faceprints” and facial recognition software to law enforcement agencies across the US. The startup claimed it could identify a person based on a single photo, revealing their real name, general location, and other identifiers.

The report sparked outrage among US citizens, who had photos collected and added to the Clearview AI database without their consent. The Times reported that the company collected more than three billion photos, from sites such as Facebook, Twitter, YouTube, Venmo, and others.

This week, the company was hit with the first lawsuit in the aftermath of the New York Times exposé.

Lawsuit claims Clearview AI broke BIPA

According to a copy of the complaint obtained by ZDNet, plaintiffs claim Clearview AI broke Illinois privacy laws.

Namely, the New York startup broke the Illinois Biometric Information Privacy Act (BIPA), a law that safeguards state residents from having their biometrics data used without consent.

According to BIPA, companies must obtain explicit consent from Illinois residents before collecting or using any of their biometric information — such as the facial scans Clearview collected from people’s social media photos.

“Plaintiff and the Illinois Class retain a significant interest in ensuring that their biometric identifiers and information, which remain in Defendant Clearview’s possession, are protected from hacks and further unlawful sales and use,” the lawsuit reads.

“Plaintiff therefore seeks to remedy the harms Clearview and the individually-named defendants have already caused, to prevent further damage, and to eliminate the risks to citizens in Illinois and throughout the United States created by Clearview’s business misuse of millions of citizen’s biometric identifiers and information.”

The plaintiffs are asking the court for an injunction against Clearview to stop it from selling the biometric data of Illinois residents, a court order forcing the company to delete any Illinois residents’ data, and punitive damages, to be decided by the court at a later date.

“Defendants’ violation of BIPA was intentional or reckless or, pleaded in the alternative, negligent,” the complaint reads.

Clearview AI did not return a request for comment.

Earlier this week, US lawmakers also sought answers from the company, while Twitter sent a cease-and-desist letter demanding the startup stop collecting user photos from its site and delete any existing images.

Source: Class-action lawsuit filed against controversial Clearview AI startup | ZDNet

London Police Will Start Using Live Facial Recognition Tech Now, Big Brother becomes a computer watching you

The dystopian nightmare begins. Today, London’s Metropolitan Police Service announced it will begin deploying Live Facial Recognition (LFR) tech across the capital in the hopes of locating and arresting wanted people.

[…]

The way the system is supposed to work, according to the Metropolitan Police, is the LFR cameras will first be installed in areas where ‘intelligence’ suggests the agency is most likely to locate ‘serious offenders.’ Each deployment will supposedly have a ‘bespoke’ watch list comprising images of wanted suspects for serious and violent offenses. The London police also note the cameras will focus on small, targeted areas to scan folks passing by. According to BBC News, previous trials had taken place in areas such as Stratford’s Westfield shopping mall and the West End area of London. It seems likely the agency is also anticipating some unease, as the cameras will be ‘clearly signposted’ and officers are slated to hand out informational leaflets.

The agency’s statement also emphasizes that the facial recognition tech is not meant to replace policing—just ‘prompt’ officers by suggesting a person in the area may be a fishy individual…based solely on their face. “It is always the decision of an officer whether or not to engage with someone,” the statement reads. On Twitter, the agency also noted in a short video that images that don’t trigger alerts will be immediately deleted.

As with any police-related, Minority Report-esque tech, accuracy is a major concern. While the Metropolitan Police Service claims that 70 percent of suspects were successfully identified and that only one in 1,000 people triggered a false alert, not everyone agrees the LFR tech is rock-solid. An independent review from July 2019 found that in six of the trial deployments, only eight of 42 matches were correct for an abysmal 19 percent accuracy. Other problems found by the review included inaccurate watch list information (e.g., people were stopped for cases that had already been resolved), and the criteria for people being included on the watch list weren’t clearly defined.
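A quick sketch of the arithmetic behind those two seemingly contradictory figures, using the review's numbers:

```python
# The two quoted accuracy figures measure different things.

correct, alerts = 8, 42
precision = correct / alerts      # share of raised alerts that were right
print(f"{precision:.0%}")         # → 19%, the independent review's figure

false_alert_rate = 1 / 1000       # the Met's "one in 1,000 people" claim
# Both can be true at once: this is a base-rate effect. If very few
# wanted faces ever pass the camera, then even a tiny per-person
# false-alert rate yields alerts that are mostly wrong.
```

In other words, the Met's one-in-1,000 figure describes how often innocent passers-by trigger alerts, while the review's 19 percent describes how often an alert actually points at the right person; neither implies the other.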

Privacy groups aren’t particularly happy with the development. Big Brother Watch, a privacy campaign group that’s been particularly vocal against facial recognition tech, took to Twitter, telling the Metropolitan Police Service they’d “see them in court.”

“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK,” said Silkie Carlo, Big Brother Watch’s director, in a statement. “This is a breath-taking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary.”

Meanwhile, another privacy group, Liberty, has also voiced resistance to the measure. “Rejected by democracies. Embraced by oppressive regimes. Rolling out facial recognition surveillance tech is a dangerous and sinister step in giving the State unprecedented power to track and monitor any one of us. No thanks,” the group tweeted.

Source: London Police Will Start Using Live Facial Recognition Tech

GE Fridges Won’t Dispense Ice Or Water Unless Your Water Filter ‘Authenticates’ Via RFID Chip on their rip-off expensive water filter

Count GE in on the “screw your customers” bandwagon. Twitter user @ShaneMorris tweeted: “My fridge has an RFID chip in the water filter, which means the generic water filter I ordered for $19 doesn’t work. My fridge will literally not dispense ice, or water. I have to pay General Electric $55 for a water filter from them.” Fortunately, there appears to be a way to hack them to work: How to Hack RWPFE Water Filters for Your GE Fridge. Hacks aside, count me out from ever buying another GE product if it includes anti-customer “features” like these. “The difference between RWPF and RPWFE is that the RPWFE has a freaking RFID chip on it,” writes Jack Busch from groovyPost. “The fridge reads the RFID chip off your filter, and if your filter is either older than 6 months or not a genuine GE RPWFE filter, it’s all ‘I’m sorry, Dave, I’m afraid I can’t dispense any water for you right now.’ Now, to be fair, GE does give you a bypass cartridge that lets you get unfiltered water for free (you didn’t throw that thing away, did you?). But come on…”

Jack proceeds to explain how you can pop off the filter bypass and “try taping the thing directly into your fridge where it would normally meet up when the filter is installed.” If you’re able to get it in just the right spot, “you’re set for life,” says Jack. Alternatively, “you can tape it onto the front of an expired RPWFE GE water filter, install it backward, and then keep using it (again, not recommended for too much longer than six months). Or, you can tape it to the corresponding spot on a generic filter and reinstall it.”

Source: GE Fridges Won’t Dispense Ice Or Water Unless Your Water Filter ‘Authenticates’ Via RFID Chip – Slashdot

Sonos CEO apologizes for confusion, says legacy products will work ‘as long as possible’ – however long that is

Sonos CEO Patrick Spence just published a statement on the company’s website to try to clear up an announcement made earlier this week: on Tuesday, Sonos announced that it will cease delivering software updates and new features to its oldest products in May. The company said those devices should continue functioning properly in the near term, but it wasn’t enough to prevent an uproar from longtime customers, with many blasting Sonos for what they perceive as planned obsolescence. That frustration is what Spence is responding to today. “We heard you,” is how Spence begins the letter to customers. “We did not get this right from the start.”

Spence apologizes for any confusion and reiterates that the so-called legacy products will “continue to work as they do today.” Legacy products include the original Sonos Play:5, Zone Players, and Connect / Connect:Amp devices manufactured between 2011 and 2015.

“Many of you have invested heavily in your Sonos systems, and we intend to honor that investment for as long as possible.” Similarly, Spence pledges that Sonos will deliver bug fixes and security patches to legacy products “for as long as possible” — without any hard timeline. Most interesting, he says “if we run into something core to the experience that can’t be addressed, we’ll work to offer an alternative solution and let you know about any changes you’ll see in your experience.”

The letter from Sonos’ CEO doesn’t retract anything that the company announced earlier this week; Spence is just trying to be as clear as possible about what’s happening come May. Sonos has insisted that these products, some of which are a decade old, have been taken to their technological limits.

Spence again confirms that Sonos is planning a way for customers to fork any legacy devices they might own off of their main Sonos system with more modern speakers. (Sonos architected its system so that all devices share the same software. Once one product is no longer eligible for updates, the whole setup stops receiving them. This workaround is designed to avoid that problem.)

Source: Sonos CEO apologizes for confusion, says legacy products will work ‘as long as possible’ – The Verge

An Open Source eReader That’s Free of Corporate Restrictions Is Exactly What I Want Right Now

The Open Book Project was born from a contest held by Hackaday that encouraged hardware hackers to find innovative and practical uses for the Arduino-based Adafruit Feather development board ecosystem. The winner of that contest was the Open Book Project, which has been designed and engineered from the ground up to be everything devices like the Amazon Kindle or Rakuten Kobo are not. There are no secrets inside the Open Book, no hidden chips designed to track and share your reading habits and preferences with a faceless corporation. With enough know-how, you could theoretically build and program your own Open Book from scratch, but as a result of winning the Take Flight With Feather contest, Digi-Key will be producing a small manufacturing run of the ereader, with pricing and availability still to be revealed.

The raw hardware isn’t as sleek or pretty as devices like the Kindle, but at the same time there’s a certain appeal to the exposed circuit board which features brief descriptions of various components, ports, and connections etched right onto the board itself for those looking to tinker or upgrade the hardware. Users are encouraged to design their own enclosures for the Open Book if they prefer, either through 3D-printed cases made of plastic, or rustic wooden enclosures created using laser cutting machines.

Text will look a little aliased on the Open Book’s E Ink display. (Photo: Hackaday.io)

With a resolution of just 400×300 pixels on its monochromatic E Ink display, text on the Open Book won’t look as pretty as it does on the Amazon Kindle Oasis which boasts a resolution of 1,680×1,264 pixels, but it should barely sip power from its built-in lithium-polymer rechargeable battery—a key benefit of using electronic paper.

The open source ereader—powered by an ARM Cortex M4 processor—will also include a headphone jack for listening to audio books, a dedicated flash chip for storing language files with specific character sets, and even a microphone that leverages a TensorFlow-trained AI model to intelligently process voice commands so you can quietly mutter “next!” to turn the page instead of reaching for one of the ereader’s physical buttons like a neanderthal. It can also be upgraded with additional functionality such as Bluetooth or wifi using Adafruit Feather expansion boards, but the most important feature is simply a microSD card slot allowing users to load whatever electronic text and ebook files they want. They won’t have to be limited by what a giant corporation approves for its online book store, or be subject to price-fixing schemes which, for some reason, have still resulted in electronic files costing more than printed books.

What remains to be seen is whether or not the Open Book Project can deliver an ereader that’s significantly cheaper than what Amazon or Rakuten has delivered to consumers. Both of those companies benefit from economies of scale, having sold millions of devices to date, and are able to throw their weight around when it comes to manufacturing costs and sourcing hardware. If the Open Book can be churned out for less than $50, it could potentially provide some solid competition to the limited ereader options currently out there.

Source: An Open Source eReader That’s Free of Corporate Restrictions Is Exactly What I Want Right Now

Body movement is achieved by molecular motors. A new ‘molecular nano-patterning’ technique allows researchers to study these motors and reveals that some of them coordinate differently

Body movement, from the muscles in your arms to the neurons transporting those signals to your brain, relies on a massive collection of proteins called molecular motors.

Fundamentally, molecular motors are proteins that convert chemical energy into mechanical movement, and they have different functions depending on their task. However, because they are so small, the exact mechanisms by which these molecules coordinate with each other are poorly understood.

Publishing in Science Advances, Kyoto University’s School of Engineering has found that two types of kinesin molecular motors have different properties of coordination. The findings, made in collaboration with the National Institute of Information and Communications Technology, or NICT, were possible thanks to a new tool the team developed that parks individual motors on nanometer-scale platforms.

“Kinesin is a protein that is involved in actions such as cell division, muscle contraction, and flagellar movement. Kinesins move along long protein filaments called microtubules,” explains first author Taikopaul Kaneko. “In the body, kinesins work as a team to transport cargo inside a cell, or allow the cell itself to move.”

To observe the coordination closely, the team constructed a device consisting of an array of gold nano-pillars 50 nanometers in diameter and spaced 200 to 1000 nanometers apart. For reference, a skin cell is about 30 micrometers, or 30,000 nanometers, in diameter.
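To put those dimensions in perspective, here is a quick back-of-the-envelope sketch using only the figures quoted above (pillar diameter, pitch range, and cell size); no other specifications of the device are assumed:

```python
# Back-of-the-envelope: how many nano-pillars span one skin cell?
# All figures come from the article itself; nothing else is assumed.

PILLAR_DIAMETER_NM = 50        # diameter of each gold nano-pillar
CELL_DIAMETER_NM = 30_000      # a skin cell is ~30 micrometers across

def pillars_across_cell(pitch_nm: int) -> int:
    """Number of pillars fitting across one cell diameter at a given
    center-to-center spacing (pitch)."""
    return CELL_DIAMETER_NM // pitch_nm

# The array used pitches between 200 and 1000 nanometers:
for pitch in (200, 500, 1000):
    print(f"pitch {pitch:>4} nm -> {pillars_across_cell(pitch)} pillars per cell diameter")
```

At the tightest 200 nm pitch, 150 pillars (each holding a single kinesin) would fit across the width of one skin cell, which is why the technique gives such fine control over motor number and spacing.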

“We then combined this array with self-assembled monolayers, or SAM, that immobilized a single kinesin molecule on each nano-pillar,” continues Kaneko. “This ‘nano-patterning’ method of motor proteins gives us control of the number and spacing of kinesins, allowing us to accurately calculate how they transport microtubules.”

The team evaluated two kinesins: kinesin-1 and kinesin-14, which are involved in intracellular transport and cell division, respectively. Their results showed that for kinesin-1, neither the number nor the spacing of the molecules changed the transport velocity of microtubules.

In contrast, kinesin-14’s transport velocity decreased as the number of motors on a filament increased, but increased as the spacing between motors increased. The results indicate that while kinesin-1 molecules work independently, kinesin-14 motors interact with one another to tune the speed of transport.

Ryuji Yokokawa, who led the team, was surprised by the results: “Before we started this study, we thought that more motors led to faster transport and more force. But like most things in biology, it’s rarely that simple.”

The team will be using their new nano-patterning method to study the mechanics of other kinesins and different molecular motors.

“Humans have over 40 kinesins along with two other types of molecular motors called myosin and dynein. We can even modify our array to study how these motors act in a density gradient. Our results and this new tool are sure to expand our understanding of the various basic cellular processes fundamental to all life,” concludes Yokokawa.

Source: A new ‘molecular nano-patterning’ technique reveals that some molecular motors coordinate differently

Turns out that RNA affects DNA in multiple ways. Genes don’t just send messages to RNA, which then directs proteins to do stuff.

Rather than directions going one way from DNA to RNA to proteins, the latest study shows that RNA itself modulates how DNA is transcribed, using a chemical process that is increasingly understood to be vital to biology. The discovery has significant implications for our understanding of human disease and drug design.

[…]

The picture many of us remember learning in school is an orderly progression: DNA is transcribed into RNA, which then makes proteins that carry out the actual work of living cells. But it turns out there are a lot of wrinkles.

He’s team found that messenger RNAs, previously regarded as simple couriers that carry instructions from DNA to proteins, were actually making their own impact on protein production. They do so through a chemical reaction called methylation; He’s key breakthrough was showing that this methylation is reversible. It isn’t a one-time, one-way transaction; it can be erased and reversed.

“That discovery launched us into a modern era of RNA modification research, which has really exploded in the last few years,” said He. “This is how so much of gene expression is critically affected. It impacts a wide range of biological processes—learning and memory, circadian rhythms, even something so fundamental as how a cell differentiates itself into, say, a blood cell versus a neuron.”

[…]

they began to see that messenger RNA methylation could not fully explain everything they observed.

This was mirrored in other experiments. “The data coming out of the community was saying there’s something else out there, something extremely important that we’re missing—that critically impacts many early development events, as well as human diseases such as cancer,” he said.

He’s team discovered that a group of RNAs called chromosome-associated regulatory RNAs, or carRNAs, uses the same methylation process, but these RNAs do not code for proteins and are not directly involved in translation. Instead, they control how DNA itself is stored and transcribed.

“This has major implications in basic biology,” He said. “It directly affects gene transcriptions, and not just a few of them. It could induce global chromatin change and affects transcription of 6,000 genes in the cell line we studied.”

He sees major implications in biology, especially in human health—everything from identifying the genetic basis of disease to better treating patients.

“There are several biotech companies actively developing small molecule inhibitors of RNA methylation, but right now, even if we successfully develop therapies, we don’t have a full mechanical picture for what’s going on,” he said. “This provides an enormous opportunity to help guide disease indication for testing inhibitors and suggest new opportunities for pharmaceuticals.”

Source: Surprise discovery shakes up our understanding of gene expression

Sorry to be blunt about this… Open AWS S3 storage bucket just made 30,000 potheads’ privacy go up in smoke

Personal records, including scans of ID cards and purchase details, for more than 30,000 people were exposed to the public internet from this unsecured cloud silo, we’re told. In addition to full names and pictures of customer ID cards, the 85,000-file collection is said to include email and mailing addresses, phone numbers, dates of birth, and the maximum amount of cannabis an individual is allowed to purchase. All of it was available to download, unencrypted, if you knew where to look.

Because many US states have strict record-keeping requirements written into their marijuana legalization laws, dispensaries have to manage a certain amount of customer and inventory information. In the case of THSuite, those records were put into an S3 bucket that was left accessible to the open internet – including the Shodan.io search engine.

The bucket was taken offline last week after it was discovered on December 24, and its insecure configuration was reported to THSuite on December 26 and Amazon on January 7, according to vpnMentor. The S3 bucket’s data belonged to dispensaries in Maryland, Ohio, and Colorado, we’re told.
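This class of leak is exactly what AWS’s “Block Public Access” feature exists to prevent. A minimal sketch of the four flags involved (the bucket name is a hypothetical placeholder; applying the configuration for real would use boto3’s `put_public_access_block` call, shown only in a comment here so the snippet runs without AWS credentials):

```python
# Sketch: the four S3 "Block Public Access" flags that keep a bucket
# off the open internet. The bucket name below is hypothetical.
# With boto3 and credentials configured, you would apply this via:
#   boto3.client("s3").put_public_access_block(
#       Bucket=BUCKET, PublicAccessBlockConfiguration=BLOCK_ALL_PUBLIC)
import json

BUCKET = "example-dispensary-records"  # placeholder, not the real bucket

BLOCK_ALL_PUBLIC = {
    "BlockPublicAcls": True,        # reject new public ACLs on objects
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # cut off cross-account public access
}

print(json.dumps(BLOCK_ALL_PUBLIC, indent=2))
```

With all four flags set, the bucket would not have been crawlable by Shodan.io or downloadable by anonymous visitors, regardless of any misconfigured object ACLs.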

Source: Sorry to be blunt about this… Open AWS S3 storage bucket just made 30,000 potheads’ privacy go up in smoke • The Register

These VIPs May Want to Make Sure Mohammed bin Salman Didn’t Hack Them

In early 2018, Saudi Crown Prince Mohammed bin Salman took a sweeping tour of the U.S. as part of a strategy to rebrand Saudi Arabia’s ruling monarchy as a modernizing force and pull off his “Vision 2030” plan—hobnobbing with a list of corporate execs and politicians that reads like a who’s who of the U.S. elite.

[…]

Bezos was one of the individuals that bin Salman met with during his trip to the U.S., and at the time, Amazon was considering investments in Saudi Arabia. Those plans went south after the Khashoggi murder, but a quick scan of the crown prince’s 2018 itinerary reveals other corporate leaders and politicians eager to get into his good graces.

These people may want to have their phones examined.

According to the New York Times, the crown prince started off with a meeting in D.C. with Donald Trump and his son-in-law Jared Kushner (the latter of whom may have real reason to worry due to his WhatsApp conversations with bin Salman). Politicians who met with him include Vice President Mike Pence, then-International Monetary Fund chief Christine Lagarde, and United Nations Secretary-General António Guterres, the Guardian reported. He also met with former Senator John Kerry and former President Bill Clinton, as well as the two former President Bushes.

While touting the importance of investment in Saudi Arabian projects including Neom, bin Salman’s plans for some kind of wonder city, the crown prince met with 40 U.S. business leaders. He also met with Goldman Sachs CEO Lloyd Blankfein and former New York mayor Michael Bloomberg, a 2020 presidential candidate, in New York.

One-on-one meetings included hanging out with Microsoft CEO Satya Nadella during the Seattle wing of the crown prince’s trip, as well as Microsoft co-founder Bill Gates.

[…]

Rupert Murdoch, as well as a bevy of prominent Hollywood personalities including Disney CEO Bob Iger, Universal film chairman Jeff Shell, Fox executive Peter Rice and film studio chief Stacey Snider, according to the Hollywood Reporter. Also present were Warner Bros. CEO Kevin Tsujihara, Nat Geo CEO Courtney Monroe, filmmakers James Cameron and Ridley Scott, and actors Morgan Freeman, Michael Douglas, and Dwayne “The Rock” Johnson.

During another leg of his trip in San Francisco, bin Salman met with Apple CEO Tim Cook as well as chief operating officer Jeff Williams, head of environment, policy, and social initiatives Lisa Jackson, and former retail chief Angela Ahrendts.

But to be fair, he also met Google co-founders Larry Page and Sergey Brin as well as current CEO Sundar Pichai.

[…]

ominous data analytics firm Palantir and met with its founder, venture capitalist Peter Thiel.

[…]

venture capitalists, including Andreessen Horowitz co-founder Marc Andreessen, Y Combinator chairman Sam Altman, and Sun Microsystems co-founder Vinod Khosla, according to Business Insider. Photos and New York Times reporting show that LinkedIn co-founder Reid Hoffman was also present.

Finally, bin Salman also met with Virgin Group founder Richard Branson and Magic Leap CEO Rony Abovitz.

During an earlier visit to the States in June 2016, bin Salman met with President Barack Obama before he traveled to San Francisco. At that time the crown prince visited Facebook and met CEO Mark Zuckerberg.

[…]

At that time, the crown prince also met with Khan Academy CEO Salman Khan and then-Uber CEO Travis Kalanick,

[…]

then-SeaWorld CEO Joel Manby

Source: These VIPs May Want to Make Sure Mohammed bin Salman Didn’t Hack Them

Clearview has scraped all the big social media sites, illegally and against their terms of service. It has all your pictures in a massive database (who knows how secure that is?) along with a face recognition AI, and it is selling access to cops, and who knows who else.

What if a stranger could snap your picture on the sidewalk, then use an app to quickly discover your name, address and other details? A startup called Clearview AI has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a Saturday report in The New York Times.

The app, says the Times, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it’s scraped off Facebook, Venmo, YouTube and other sites. It then serves up matches, along with links to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.
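Under the hood, face-matching systems like this typically reduce each photo to a numeric “embedding” vector and return the database entries whose vectors lie closest to the query’s. A toy sketch of that nearest-neighbor step follows; the three-element vectors and site labels are made-up illustrations, not the output of any real face-recognition model:

```python
# Toy nearest-neighbor face matching over "embedding" vectors.
# The vectors below are invented for illustration; a real system
# would produce high-dimensional embeddings from a trained model.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_matches(query, database, top_k=2):
    """Return the top_k (label, similarity) pairs, most similar first."""
    scored = [(label, cosine_similarity(query, vec))
              for label, vec in database.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

database = {
    "photo_on_site_A": [0.90, 0.10, 0.30],
    "photo_on_site_B": [0.10, 0.80, 0.40],
    "photo_on_site_C": [0.85, 0.15, 0.35],
}
query = [0.88, 0.12, 0.31]  # embedding of the uploaded snapshot
print(best_matches(query, database))
```

The returned labels stand in for the links back to the originating sites that the Times describes; scaling this lookup to 3 billion entries requires approximate nearest-neighbor indexing rather than the brute-force scan shown here.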

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t currently available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.”

Source: Clearview app lets strangers find your name, info with snap of a photo, report says – CNET

Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at the photos that are uploaded, the company appeared to be aware that Kashmir Hill (the Times journalist who reported the piece) was having police search for her face as part of her reporting:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.

One expert quoted by The Times said that the amount of money involved with these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

Source: The Verge

So Clearview has you, even if it violates TOS. How to stop the next guy from getting you in FB – maybe.

It should come as little surprise that any content you offer to the web for public consumption has the potential to be scraped and misused by anyone clever enough to do it. And while that doesn’t make this weekend’s report from The New York Times any less damning, it’s a great reminder about how important it is to really go through the settings for your various social networks and limit how your content is, or can be, accessed by anyone.

I won’t get too deep into the Times’ report; it’s worth reading on its own, since it involves a company (Clearview AI) scraping more than three billion images from millions of websites, including Facebook, and creating a facial-recognition app that does a pretty solid job of identifying people using images from this massive database.

Even though Clearview’s scraping techniques technically violate the terms of service on a number of websites, that hasn’t stopped the company from acquiring images en masse. And it keeps whatever it finds, which means that turning all your online data private isn’t going to help if Clearview has already scanned and grabbed your photos.
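Terms of service and robots.txt directives are only advisory signals; a scraper has to choose to honor them. Python’s standard library shows what that choice looks like for a well-behaved crawler. The robots.txt rules below are a made-up example, not any real site’s file:

```python
# How a well-behaved scraper consults robots.txt before fetching a page.
# The rules here are hypothetical; a scraper that skips this check faces
# no technical barrier at all, only a policy one.
from urllib.robotparser import RobotFileParser

EXAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /photos/
Allow: /public/
"""

rp = RobotFileParser()
rp.parse(EXAMPLE_ROBOTS_TXT.splitlines())

print(rp.can_fetch("MyBot", "https://example.com/photos/user123.jpg"))
print(rp.can_fetch("MyBot", "https://example.com/public/about.html"))
```

Nothing enforces the `False` answer; that asymmetry is why scraping bans written into terms of service did not slow Clearview’s collection.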

Still, something is better than nothing. On Facebook, likely the largest stash of your images, you’re going to want to visit Settings > Privacy and look for the option described: “Do you want search engines outside of Facebook to link to your profile?”

Turn that off, and Clearview won’t be able to grab your images. That’s not the setting I would have expected to use, I confess, which makes me want to go through all of my social networks and rethink how the information I share with them flows out to the greater web.

Lock down your Facebook even more with these settings

Since we’re already here, it’s worth spending a few minutes wading through Facebook’s settings and making sure as much of your content is set to friends-only as possible. That includes changing “Who can see your future posts” to “friends,” using the “Limit Past Posts” option to change everything you’ve previously posted to friends-only, and making sure that only you can see your friends list—to prevent any potential scraping and linking that some third-party might attempt. Similarly, make sure only your friends (or friends of friends) can look you up via your email address or phone number. (You never know!)

You should then visit the Timeline and Tagging settings page and make a few more changes. That includes only allowing friends to see what other people post on your timeline, as well as posts you’re tagged in. And because I’m a bit sensitive about all the crap people tag me in on Facebook, I’d turn on the “Review” options, too. That won’t keep your account from being scraped, but it’s a great way to exert more control over your timeline.

Screenshot: David Murphy

Finally, even though it also doesn’t prevent companies from scraping your account, pull up the Public posts section of Facebook’s settings page and limit who is allowed to follow you (if you desire). You should also restrict who can comment on or like your public information, like posts or other details about your life you share openly on the service.


Once I fix Facebook, then what?

Here’s the annoying part. Were I you, I’d take an afternoon or evening and write out all the different places I typically share snippets of my life online. For most people, that’s probably a handful of social services: Facebook, Instagram, Twitter, YouTube, Flickr, et cetera.

Once you’ve created your list, I’d dig deep into the settings of each service and see what options you have, if any, for limiting the availability of your content. This might run contrary to how you use the service—if you’re trying to gain lots of Instagram followers, for example, locking your profile to “private” and requiring potential followers to request access might slow your attempts to become the next big Insta-star. However, it should also prevent anyone with a crafty scraping utility from mass-downloading your photos (and associating them with you, either through some fancy facial-recognition tech, or by linking them to your account).

Source: Change These Facebook Settings to Protect Your Photos From Facial Recognition Software