Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it. Only FF and Brave will give you some.

At the USENIX Enigma conference on Tuesday, representatives of four browser makers, Brave, Google, Microsoft, and Mozilla, gathered to banter about their respective approaches to online privacy, while urging people not to ask for too much of it.

Apple, which has advanced browser privacy standards but was recently informed that its tracking defenses can be used for, er, tracking, was conspicuously absent, though it had a tongue-tied representative recruiting for privacy-oriented job positions at the show.

The browser-focused back-and-forth was mostly cordial as the software engineers representing their companies discussed notable privacy features in the various web browsers they worked on. They stressed the benefit of collaboration on web standards and the mutually beneficial effects of competition.

Eric Lawrence, program manager on the Microsoft Edge team, touched on how Microsoft has just jettisoned 25 years of Internet Explorer code to replatform Edge on the open source Chromium project, now the common foundation for 20 or so browsers.

Beside a slide that declared “Microsoft loves the Web,” Lawrence made the case for the new Edge as a modern browser with some well-designed privacy features, including Microsoft’s take on tracking protection, which blocks most trackers in its default setting and can be made more strict, at the potential cost of site compatibility.

Edge comes across as a reliable alternative to Chrome and should become more distinct as it evolves. It occupies a difficult space on the privacy continuum, in that it has some nice privacy features but not as many as Brave or Firefox. But Edge may find fans on the strength of the Microsoft brand since, as Lawrence emphasized, Microsoft is not new to privacy concerns.

That said, Microsoft is not far from Google in advocating not biting the hand that feeds the web ecosystem – advertising.

“The web doesn’t exist in a vacuum,” Lawrence warned. “People who are building sites and services have choices for what platforms they target. They can build a mobile application. They can take their content off the open web and put it into a walled garden. And so if we do things with privacy that hurt the open web, we could end up pushing people to less privacy for certain ecosystems.”

Lawrence pointed to a recent report about a popular Android app found to be leaking data. It took time to figure that out, he said, because mobile platforms are less transparent than the web, where it’s easier to scour source code and analyze network behavior.

Justin Schuh, engineering director on Google Chrome for trust and safety, reprised an argument he’s made previously that too much privacy would be harmful to ad-supported businesses.

“Most of the media that we consume is actually funded by advertising today,” Schuh explained. “It has been for a very long time. Now, I’m not here to make the argument that advertising is the best or only way to fund these things. But the truth is that print, radio, and TV – all these are funded primarily through advertising.”

And so too is the web, he insisted, arguing that advertising is what has made so much online content available to people who otherwise wouldn’t have access to it across the globe.

Schuh said in the context of the web, two trends concern him. One, he claimed, is that content is leaving because it’s easier to monetize in apps – but he didn’t cite a basis for that assertion.

The other is the rise of covert tracking, which arose, as Schuh tells it, because advertisers wanted to track people across multiple devices. So they turned to looking at IP-based fingerprinting and metadata tracking, and the joining of data sets to identify people as they shift between phone, computer, and tablet.

Covert tracking also became more popular, he said, because advertisers wanted to bypass anti-tracking mechanisms. Thus, we have privacy-invading practices like CNAME cloaking, site fingerprinting, hostname rotation, and the like because browser users sought privacy.
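CNAME cloaking, one of the practices Schuh mentions, works by pointing a first-party subdomain at a tracker's domain via a DNS alias, so the tracker's requests and cookies appear first-party to the browser. A minimal detection sketch (hypothetical domains, with DNS resolution faked and the registrable domain crudely approximated; real detectors resolve DNS live and consult the Public Suffix List):

```python
# Illustrative sketch of CNAME-cloaking detection. Domains are made up;
# the DNS lookup is stubbed with a dict, and eTLD+1 is approximated as
# the last two labels (good enough for .com/.net examples only).

FAKE_DNS = {
    # A "first-party" subdomain that secretly aliases a tracker.
    "metrics.example-news.com": "collector.tracker-corp.net",
    # A legitimate first-party alias.
    "cdn.example-news.com": "origin.example-news.com",
}

def registrable_domain(host: str) -> str:
    """Crude eTLD+1: keep only the last two DNS labels."""
    return ".".join(host.split(".")[-2:])

def is_cname_cloaked(subdomain: str, page_domain: str) -> bool:
    """Flag a subdomain whose CNAME target lives under a different registrable domain."""
    target = FAKE_DNS.get(subdomain)
    if target is None:
        return False  # no CNAME record: nothing to cloak
    return registrable_domain(target) != registrable_domain(page_domain)

print(is_cname_cloaked("metrics.example-news.com", "example-news.com"))  # True
print(is_cname_cloaked("cdn.example-news.com", "example-news.com"))      # False
```

Because the cloaked subdomain sits under the site's own domain, cookie-blocking rules keyed on third-party domains miss it, which is why browsers have had to add CNAME-aware defenses.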

Schuh made the case for Google’s Privacy Sandbox proposal, a set of controversial specs being developed ostensibly to enhance privacy by reducing data available for tracking and browser fingerprinting while also giving advertisers the ability to target ads.

“Broadly speaking, advertisers don’t actually need your data,” said Schuh. “All that they really want is to monetize efficiently.”

But given the willingness of advertisers to circumvent user privacy choices, the ad industry’s consistent failure to police bad behavior, and the persistence of ad fraud and malicious ads, it’s difficult to accept that advertisers can be trusted to behave.

Tanvi Vyas, principal engineer at Mozilla, focused on the consequences of the current web ecosystem, where data is gathered to target and manipulate people. She reeled off a list of social harms arising from the status quo.

“Democracies are compromised and elections around the world are being tampered with,” she said. “Populations are manipulated and micro-targeted. Fake news is delivered to just the right audience at the right time. Discrimination flourishes, and emotional harm is inflicted on specific individuals when our algorithms go wrong.”

Thanks, Facebook, Google, and Twitter.

Worse still, Vyas said, the hostile ecosystem has a chilling effect on sophisticated users who understand online tracking and prevents them from taking action. “At Mozilla, we think this is an unacceptable cost for society to pay,” she said.

Vyas described various pro-privacy technologies implemented in Firefox, including Facebook Container, which sandboxes Facebook trackers so they can’t track users on third-party websites. She also argued for legislation to improve online privacy, though Lawrence, recalling his days working on Internet Explorer, noted that privacy rules tied to a scheme known as P3P had proved ineffective two decades ago.

Speaking for Brave, CISO Yan Zhu argued for a slightly different approach, though it still involves engaging with the ad industry to some extent.

“The main goal of Brave is we want to repair the privacy problems in the existing ad ecosystem in a way that no other browser has really tried, while giving publishers a revenue stream,” she said. “Basically, we have options to set micropayments to publishers, and also an option to see privacy preserving ads.”

Micropayments have been tried before, but they’ve largely failed – assuming you don’t consider in-app payments to be micropayments.

Faced with a plea from an attendee for more of the browser makers to support micropayments instead of relying on ads, Schuh said, “I would absolutely love to see micropayments succeed. I know there have been a bunch of efforts at Google and various other companies to do it. It turns out that the payment industry itself is really, really complicated. And there are players in there that expect a fairly large cut. And so long as that exists, I don’t know if there’s a path forward.”

It now falls to Brave to prove otherwise.

Shortly thereafter, Gabriel DeWitt, VP of product at global ad marketplace Index Exchange, took a turn at the audience mic, introduced himself, and then lightheartedly asked other attendees not to throw anything at him.

Insisting that his company also cares about user privacy, despite opinions to the contrary, he asked the panelists how he could better collaborate with them.

It’s worth noting that next week, when Chrome 80 debuts, Google intends to introduce changes in the way it handles cookies that will affect advertisers. What’s more, the company has said it plans to phase out cookies entirely in a few years.

Schuh, from Google, elicited a laugh when he said, “I guess I can take this one, because that’s what everyone is expecting.”

We were expecting privacy. We got surveillance capitalism instead.

Source: Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it • The Register

Ubiquiti says UniFi routers will beam performance data back to the mothership automatically, without consent and with no opt-out.

Ubiquiti Networks is once again under fire, this time for quietly rewriting its telemetry policy after changing how its UniFi routers collect data without telling anyone.

The changes were identified in a new help document published on the US manufacturer’s website. The document differentiates between “personal data”, which includes everything that identifies a specific individual, and “other data”, which is everything else.

The document says that while users can continue to opt out of having their “personal data” collected, their “other data” – anonymous performance and crash information – will be “automatically reported”. In other words, you ain’t got no choice.

This is a shift from Ubiquiti’s last statement on data collection three months ago, which promised an opt-out button for all data collection in upcoming versions of its firmware.

A Ubiquiti representative confirmed in a forum post that the changes will automatically affect all firmware beyond 4.1.0, and that users can stop “other data” being collected by manually editing the software’s config file.

“Yes, it should be updated when we go to public release, it’s on our radar,” the rep wrote. “But I can’t guarantee it will be updated in time.”

The drama unfolded when netizens grabbed their pitchforks and headed for the company’s forums to air their grievances. “Come on UBNT,” said user leonardogyn. “PLEASE do not insist on making it hard (or impossible) to fully and easily disable sending of Analytics data. I understand it’s a great tool for you, but PLEASE consider that’s [sic] ultimately us, the users, that *must* have the option to choose to participate on it.”

The same user also pointed out that, even when the “Analytics” opt-out button is selected in the 5.13.9 beta controller software, Ubiquiti is still collecting some data. The person called the opt-out option “a misleading one, not to say a complete lie”.

Other users were similarly outraged. “This was pretty much the straw that broke the camel’s back, to be honest.” said elcid89. “I only use Unifi here at the house, but between the ongoing development instability, frenetic product range, and lack of responsiveness from staff, I’ve been considering junking it for a while now. This made the decision for me – switching over to Cisco.”

One user said that the firmware was still sending their data to two addresses even after they modified the config file.

Source: You spoke, we didn’t listen: Ubiquiti says UniFi routers will beam performance data back to mothership automatically • The Register

New NZXT Liquid CPU Cooler Plays Animated GIFs, Because Awesome!

PC hardware maker NZXT has just announced the latest additions to its line of liquid CPU coolers, the Kraken X-3 and Z-3. The X-3 has a bright LED ring and rotates so the logo can be repositioned. The Z-3 comes with a 2.36-inch, 24-bit color LCD screen capable of displaying images, computer data, or animated GIFs, because maybe that is a thing people want.

The animated GIF of the CPU cooler displaying animated GIFs atop this post? With the Kraken Z-3 installed on my PC, I could display that GIF of a CPU cooler displaying GIFs as a GIF on my CPU cooler. I could put some anime there. Or maybe some looping pornography. Then I would turn my computer to the side with the glass window facing away from me and never see it again. I need a better way to display the glowing and flashing things inside of my PC. Maybe a mirror or something.

I’ve found NZXT liquid cooling quite reliable in the past. The idea of that reliability combined with this frivolity tickles me to no end. Look, they’ve even made a little trailer showing it off.

The Kraken X-3 and Z-3 are available for purchase in the U.S. starting today. The X-3 is available in 240mm, 280mm, and 360mm sizes for $130, $150, and $180. The Z-3, AKA the one with the GIFs, costs $250 for the 280mm and $280 for the 360mm size. That means the ability to have an animated GIF on your CPU cooler costs $100.

Worth it.

Source: New Liquid CPU Cooler Plays Animated GIFs, Because Why Not

Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool, which won’t stop any tracking whatsoever

In a blog post earlier today, the famously privacy-conscious Mark Zuckerberg announced—in honor of Data Privacy Day, which is apparently a thing—the official rollout of a long-awaited Off-Facebook Activity tool that allows Facebook users to monitor and manage the connections between Facebook profiles and their off-platform activity.

“To help shed more light on these practices that are common yet not always well understood, today we’re introducing a new way to view and control your off-Facebook activity,” Zuckerberg said in the post. “Off-Facebook Activity lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to.”

Zuck’s use of the phrases “control your off-Facebook activity” and “clear this information from your account” is kinda misleading—you’re not really controlling or clearing much of anything. By using this tool, you’re just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop shop to opt out of any ties between your on-platform activity on Facebook or Instagram and the sites and services you peruse daily that have some sort of Facebook software installed.

The only thing you’re clearing is a connection Facebook made between its data and the data it gets from third parties, not the data itself.

As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn’t’ve been surprised that Facebook hoovered data from more than 520 partners across the internet—either sites I’d visited or apps I’d downloaded. For Gizmodo alone, Facebook tracked “252 interactions” drawn from the handful of plug-ins our blog has installed. (To be clear, you’re going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e.—not just on our site.)

These plug-ins—or “business tools,” as Facebook describes them—are the pipeline that the company uses to ascertain your off-platform activity and tie it to your on-platform identity. As Facebook describes it:

– Jane buys a pair of shoes from an online clothing and shoe store.

– The store shares Jane’s activity with us using our business tools.

– We receive Jane’s off-Facebook activity and we save it with her Facebook account. The activity is saved as “visited the Clothes and Shoes website” and “made a purchase”.

– Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store.

Here’s the catch, though: When I hit the handy “clear history” button that Facebook now provides, it won’t do jack shit to stop a given shoe store from sharing my data with Facebook—which explicitly laid this out for me when I hit that button:

Your activity history will be disconnected from your account. We’ll continue to receive your activity from the businesses and organizations you visit in the future.

Yes, it’s confusing. Baffling, really. But basically, Facebook has profiles on users and non-users alike. Those of you who have Facebook profiles can use the new tool to disconnect your Facebook data from the data the company receives from third parties. Facebook will still have that third-party-collected data and it will continue to collect more data, but that bucket of data won’t be connected to your Facebook identity.
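The "disconnect, don't delete" model can be sketched in a few lines. This is a toy illustration with made-up data structures, not Facebook's actual internals: clearing history severs the link between the two buckets, while both buckets keep their contents and keep growing.

```python
# Toy model of "Off-Facebook Activity: clear history" (hypothetical
# structures, not Facebook's real internals): the link between on-platform
# and off-platform data is severed; neither data set is deleted.

profile_data = {"user123": ["joined cat group", "commented on status"]}
third_party_data = {"event1": "visited shoe store", "event2": "made a purchase"}
links = {"user123": ["event1", "event2"]}  # ties off-platform events to the profile

def clear_history(user: str) -> None:
    """Disconnect off-platform events from the profile; keep both data sets."""
    links[user] = []

clear_history("user123")
print(links["user123"])          # [] -- the connection is gone...
print(len(third_party_data))     # 2  -- ...but the third-party data remains
```

New events arriving after the "clear" would simply repopulate `links`, which matches the dialog's warning that Facebook will "continue to receive your activity from the businesses and organizations you visit in the future."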

The data third parties collect about you technically isn’t Facebook’s responsibility to begin with. If I buy a pair of new sneakers from Steve Madden, where that purchase or browsing data goes is ultimately in Steve Madden’s metaphorical hands. And thanks to the wonders of targeted advertising, even the sneakers I’m purchasing in-store aren’t safe from being added as a data point that can be tied to the collective profile Facebook’s gathered on me as a consumer. Naturally, it behooves whoever runs marketing at Steve Madden—or anywhere, really—to plug in as many of those data points as they possibly can.

For the record, I also tried toggling my off-Facebook activity to keep it from being linked to my account, but was told that, while the company would still be getting this information from third parties, it would just be “disconnected from [my] account.”

Put another way: The way I browse any number of sites and apps will ultimately still make its way to Facebook, and still be used for targeted advertising across… those sites and apps. Only now, my on-Facebook life—the cat groups I join, the statuses I comment on, the concerts I’m “interested” in (but never actually attend)—won’t be a part of that profile.

Or put another way: Facebook just announced that it still has its tentacles in every part of your life in a way that’s impossible to untangle yourself from. Now, it just doesn’t need the social network to do it.

Source: Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool

Google releases new dataset search

You can now filter the results based on the types of dataset that you want (e.g., tables, images, text), or whether the dataset is available for free from the provider. If a dataset is about a geographic area, you can see the map. Plus, the product is now available on mobile and we’ve significantly improved the quality of dataset descriptions. One thing hasn’t changed however: anybody who publishes data can make their datasets discoverable in Dataset Search by using an open standard (schema.org) to describe the properties of their dataset on their own web page.
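The open standard mentioned above is schema.org's `Dataset` type, embedded in the publisher's page as JSON-LD inside a `<script type="application/ld+json">` tag. A minimal sketch of what a publisher might emit (all field values are illustrative, not from any real dataset):

```python
import json

# Minimal schema.org Dataset description (illustrative values only).
# Publishers embed this JSON-LD in their dataset's landing page so that
# crawlers like Dataset Search can discover and index it.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "City Air Quality Readings",      # hypothetical dataset name
    "description": "Hourly PM2.5 readings, 2015-2019.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "isAccessibleForFree": True,              # powers the "free" filter
    "spatialCoverage": "Springfield",         # geographic area, shown on a map
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "CSV",              # powers the type filter
        "contentUrl": "https://example.org/air-quality.csv",
    }],
}

print(json.dumps(dataset, indent=2))
```

Fields like `isAccessibleForFree`, `encodingFormat`, and `spatialCoverage` map directly onto the new filters and map view described above.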

Source: Discovering millions of datasets on the web

Find it here

Leaked AVAST Documents Expose the Secretive Market for Your Web Browsing Data: Google, MS, Pepsi, they all buy it – Really, uninstall it now!

An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show the sale of this data is both highly sensitive and is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it.

The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.

Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.

The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies’ LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.

[…]

Until recently, Avast was collecting the browsing data of its customers who had installed the company’s browser plugin, which is designed to warn users of suspicious websites. Security researcher and Adblock Plus creator Wladimir Palant published a blog post in October showing that Avast harvests user data with that plugin. Shortly after, browser makers Mozilla, Opera, and Google removed Avast’s and subsidiary AVG’s extensions from their respective browser extension stores. Avast had previously explained this data collection and sharing in a blog and forum post in 2015. Avast has since stopped sending browsing data collected by these extensions to Jumpshot, Avast said in a statement to Motherboard and PCMag.

[…]

However, the data collection is ongoing, the source and documents indicate. Instead of harvesting information through software attached to the browser, Avast is doing it through the anti-virus software itself. Last week, months after it was spotted using its browser extensions to send data to Jumpshot, Avast began asking its existing free antivirus consumers to opt-in to data collection, according to an internal document.

“If they opt-in, that device becomes part of the Jumpshot Panel and all browser-based internet activity will be reported to Jumpshot,” an internal product handbook reads. “What URLs did these devices visit, in what order and when?” it adds, summarising what questions the product may be able to answer.

Senator Ron Wyden, who in December asked Avast why it was selling users’ browsing data, said in a statement, “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”

[…]

On its website and in press releases, Jumpshot names Pepsi, and consulting giants Bain & Company and McKinsey as clients.

As well as Expedia, Intuit, and Loreal, other companies which are not already mentioned in public Jumpshot announcements include coffee company Keurig, YouTube promotion service vidIQ, and consumer insights firm Hitwise. None of those companies responded to a request for comment.

On its website, Jumpshot lists some previous case studies for using its browsing data. Magazine and digital media giant Condé Nast, for example, used Jumpshot’s products to see whether the media company’s advertisements resulted in more purchases on Amazon and elsewhere. Condé Nast did not respond to a request for comment.

ALL THE CLICKS

Jumpshot sells a variety of different products based on data collected by Avast’s antivirus software installed on users’ computers. Clients in the institutional finance sector often buy a feed of the top 10,000 domains that Avast users are visiting to try and spot trends, the product handbook reads.

Another Jumpshot product is the company’s so-called “All Click Feed.” It allows a client to buy information on all of the clicks Jumpshot has seen on a particular domain, like Amazon.com, Walmart.com, Target.com, BestBuy.com, or Ebay.com.

In a tweet sent last month intended to entice new clients, Jumpshot noted that it collects “Every search. Every click. Every buy. On every site” [emphasis Jumpshot’s].

[…]

One company that purchased the All Clicks Feed is New York-based marketing firm Omnicom Media Group, according to a copy of its contract with Jumpshot. Omnicom paid Jumpshot $2,075,000 for access to data in 2019, the contract shows. It also included another product called “Insight Feed” for 20 different domains. The fee for data in 2020 and then 2021 is listed as $2,225,000 and $2,275,000 respectively, the document adds.

[…]

The internal product handbook says that device IDs do not change for each user, “unless a user completely uninstalls and reinstalls the security software.”

Source: Leaked Documents Expose the Secretive Market for Your Web Browsing Data – VICE

Ring Doorbell App Gives Away your data to 3rd parties, without your knowledge or consent

An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.

The danger in sending even small bits of information is that analytics and tracking companies are able to combine these bits together to form a unique picture of the user’s device. This cohesive whole represents a fingerprint that follows the user as they interact with other apps and use their device, in essence providing trackers the ability to spy on what a user is doing in their digital lives and when they are doing it. All this takes place without meaningful user notification or consent and, in most cases, no way to mitigate the damage done. Even when this information is not misused and employed for precisely its stated purpose (in most cases marketing), this can lead to a whole host of social ills.
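The point about small bits combining into a unique picture can be made concrete with a toy sketch (the attribute set and values are illustrative): each field alone is shared by millions of devices, but hashed together they yield a stable, near-unique identifier that any app embedding the same tracker can reproduce.

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Hash individually innocuous attributes into one stable identifier."""
    # Canonical ordering so every app embedding the tracker derives the same ID.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Each value below is common on its own; together they narrow the space
# dramatically (values are made up for illustration).
attrs = {
    "model": "Pixel 4",
    "os": "Android 10",
    "tz": "America/New_York",
    "resolution": "1080x2280",
    "carrier": "ExampleCell",
}

fp = device_fingerprint(attrs)
print(fp)  # same attributes -> same fingerprint, in any app using the tracker
```

This is why even "anonymous" fields like screen resolution or sensor calibration matter: the tracker never needs your name, only a value that stays the same everywhere you go.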

[…]

Our testing, using Ring for Android version 3.21.1, revealed PII delivery to branch.io, mixpanel.com, appsflyer.com and facebook.com. Facebook, via its Graph API, is alerted when the app is opened and upon device actions such as app deactivation after screen lock due to inactivity. Information delivered to Facebook (even if you don’t have a Facebook account) includes time zone, device model, language preferences, screen resolution, and a unique identifier (anon_id), which persists even when you reset the OS-level advertiser ID.

Branch, which describes itself as a “deep linking” platform, receives a number of unique identifiers (device_fingerprint_id, hardware_id, identity_id) as well as your device’s local IP address, model, screen resolution, and DPI.

AppsFlyer, a big data company focused on the mobile platform, is given a wide array of information upon app launch as well as certain user actions, such as interacting with the “Neighbors” section of the app. This information includes your mobile carrier, when Ring was installed and first launched, a number of unique identifiers, the app you installed from, and whether AppsFlyer tracking came preinstalled on the device. This last bit of information is presumably to determine whether AppsFlyer tracking was included as bloatware on a low-end Android device. Manufacturers often offset the costs of device production by selling consumer data, a practice that disproportionately affects low-income earners and was the subject of a recent petition to Google initiated by Privacy International and co-signed by EFF.

Most alarmingly, AppsFlyer also receives the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) and current calibration settings.

Ring gives MixPanel the most information by far. Users’ full names, email addresses, device information such as OS version and model, whether bluetooth is enabled, and app settings such as the number of locations a user has Ring devices installed in, are all collected and reported to MixPanel. MixPanel is briefly mentioned in Ring’s list of third party services, but the extent of their data collection is not. None of the other trackers listed in this post are mentioned at all on this page.

Ring also sends information to the Google-owned crash logging service Crashlytics. The exact extent of data sharing with this service is yet to be determined.

Source: Ring Doorbell App Packed with Third-Party Trackers | Electronic Frontier Foundation

Electric Vehicle Battery Degradation Graph with 6 years data

These guys have six years of battery data on a range of electric cars. Each model degrades differently, but on average you lose around 12% of your battery capacity over six years. This means that if your car could originally drive, say, 523 km (Tesla Model X), after six years you can expect a range of about 460 km. If the trend continues linearly, after 12 years you would have roughly a 397 km range.
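The arithmetic here assumes roughly linear fade of about 2% of the original capacity per year (12% over six years). A quick check of the figures:

```python
def remaining_range(initial_km: float, years: float, annual_fade: float = 0.02) -> float:
    """Linear capacity fade: lose a fixed share of the *original* range each year.

    annual_fade=0.02 matches the article's 12%-over-six-years figure; actual
    degradation curves vary by model and are not perfectly linear.
    """
    return initial_km * (1 - annual_fade * years)

# Tesla Model X example from the article: 523 km when new.
print(round(remaining_range(523, 6)))   # 460 km after 6 years
print(round(remaining_range(523, 12)))  # 397 km after 12 years
```

Note that a compounding model (12% of the *remaining* capacity every six years) would instead give about 405 km at year 12; the article's 397 km figure implies the linear reading.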

Source: Geotab – EV Battery Degradation

Class-action lawsuit filed against creepy Clearview AI startup which scraped everyone’s social media profiles

A lawsuit — seeking class-action status — was filed this week in Illinois against Clearview AI, a New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world.

The secretive startup was exposed last week in an explosive New York Times report which revealed how Clearview was selling access to “faceprints” and facial recognition software to law enforcement agencies across the US. The startup claimed it could identify a person based on a single photo, revealing their real name, general location, and other identifiers.

The report sparked outrage among US citizens, who had photos collected and added to the Clearview AI database without their consent. The Times reported that the company collected more than three billion photos, from sites such as Facebook, Twitter, YouTube, Venmo, and others.

This week, the company was hit with the first lawsuit in the aftermath of the New York Times exposé.

Lawsuit claims Clearview AI broke BIPA

According to a copy of the complaint obtained by ZDNet, plaintiffs claim Clearview AI broke Illinois privacy laws.

Namely, the New York startup broke the Illinois Biometric Information Privacy Act (BIPA), a law that safeguards state residents from having their biometrics data used without consent.

According to BIPA, companies must obtain explicit consent from Illinois residents before collecting or using any of their biometric information — such as the facial scans Clearview collected from people’s social media photos.

“Plaintiff and the Illinois Class retain a significant interest in ensuring that their biometric identifiers and information, which remain in Defendant Clearview’s possession, are protected from hacks and further unlawful sales and use,” the lawsuit reads.

“Plaintiff therefore seeks to remedy the harms Clearview and the individually-named defendants have already caused, to prevent further damage, and to eliminate the risks to citizens in Illinois and throughout the United States created by Clearview’s business misuse of millions of citizen’s biometric identifiers and information.”

The plaintiffs are asking the court for an injunction against Clearview to stop it from selling the biometric data of Illinois residents, a court order forcing the company to delete any Illinois residents’ data, and punitive damage, to be decided by the court at a later date.

“Defendants’ violation of BIPA was intentional or reckless or, pleaded in the alternative, negligent,” the complaint reads.

Clearview AI did not return a request for comment.

Earlier this week, US lawmakers also sought answers from the company, while Twitter sent a cease-and-desist letter demanding the startup stop collecting user photos from their site and delete any existing images.

Source: Class-action lawsuit filed against controversial Clearview AI startup | ZDNet

London Police Will Start Using Live Facial Recognition Tech Now – Big Brother becomes a computer watching you

The dystopian nightmare begins. Today, London’s Metropolitan Police Service announced it will begin deploying Live Facial Recognition (LFR) tech across the capital in the hopes of locating and arresting wanted people.

[…]

The way the system is supposed to work, according to the Metropolitan Police, is the LFR cameras will first be installed in areas where ‘intelligence’ suggests the agency is most likely to locate ‘serious offenders.’ Each deployment will supposedly have a ‘bespoke’ watch list comprising images of wanted suspects for serious and violent offenses. The London police also note the cameras will focus on small, targeted areas to scan folks passing by. According to BBC News, previous trials had taken place in areas such as Stratford’s Westfield shopping mall and the West End area of London. It seems likely the agency is also anticipating some unease, as the cameras will be ‘clearly signposted’ and officers are slated to hand out informational leaflets.

The agency’s statement also emphasizes that the facial recognition tech is not meant to replace policing—just ‘prompt’ officers by suggesting a person in the area may be a fishy individual…based solely on their face. “It is always the decision of an officer whether or not to engage with someone,” the statement reads. On Twitter, the agency also noted in a short video that images that don’t trigger alerts will be immediately deleted.

As with any police-related, Minority Report-esque tech, accuracy is a major concern. While the Metropolitan Police Service claims that 70 percent of suspects were successfully identified and that only one in 1,000 people triggered a false alert, not everyone agrees the LFR tech is rock-solid. An independent review from July 2019 found that across six of the trial deployments, only eight of 42 matches were correct – an abysmal 19 percent accuracy rate. Other problems found by the review included inaccurate watch list information (e.g., people were stopped over cases that had already been resolved), and the criteria for including people on the watch list weren’t clearly defined.
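The Met’s figure and the independent review’s figure are not necessarily contradictory, because they measure different things: the review’s 19 percent is the share of alerts that were actually correct, while “one in 1,000” is false alerts as a fraction of everyone scanned. A quick sketch of the arithmetic (the 34-in-34,000 scan numbers are illustrative, chosen only to be consistent with the “one in 1,000” claim):

```rust
// Reproduce the two headline figures from the LFR accuracy debate.
fn precision(correct: u32, flagged: u32) -> f64 {
    correct as f64 / flagged as f64
}

fn false_alert_rate(false_alerts: u32, faces_scanned: u32) -> f64 {
    false_alerts as f64 / faces_scanned as f64
}

fn main() {
    // Independent review: only 8 of 42 matches were correct.
    println!("match precision: {:.0}%", precision(8, 42) * 100.0); // ~19%

    // Met's framing: false alerts per person scanned
    // (illustrative counts consistent with "one in 1,000").
    println!("false alert rate: {}", false_alert_rate(34, 34_000)); // 0.001
}
```

Both claims can be technically true at once, which is exactly why the framing matters.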

Privacy groups aren’t particularly happy with the development. Big Brother Watch, a privacy campaign group that’s been particularly vocal against facial recognition tech, took to Twitter, telling the Metropolitan Police Service they’d “see them in court.”

“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK,” said Silkie Carlo, Big Brother Watch’s director, in a statement. “This is a breath-taking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary.”

Meanwhile, another privacy group, Liberty, has also voiced resistance to the measure. “Rejected by democracies. Embraced by oppressive regimes. Rolling out facial recognition surveillance tech is a dangerous and sinister step in giving the State unprecedented power to track and monitor any one of us. No thanks,” the group tweeted.

Source: London Police Will Start Using Live Facial Recognition Tech

GE Fridges Won’t Dispense Ice Or Water Unless Your Water Filter ‘Authenticates’ Via RFID Chip – forcing you onto their rip-off, expensive water filters

Count GE in on the “screw your customers” bandwagon. Twitter user @ShaneMorris tweeted: “My fridge has an RFID chip in the water filter, which means the generic water filter I ordered for $19 doesn’t work. My fridge will literally not dispense ice, or water. I have to pay General Electric $55 for a water filter from them.” Fortunately, there appears to be a way to hack them to work: How to Hack RWPFE Water Filters for Your GE Fridge. Hacks aside, count me out from ever buying another GE product if it includes anti-customer “features” like these.

“The difference between RWPF and RPWFE is that the RPWFE has a freaking RFID chip on it,” writes Jack Busch from groovyPost. “The fridge reads the RFID chip off your filter, and if your filter is either older than 6 months or not a genuine GE RPWFE filter, it’s all ‘I’m sorry, Dave, I’m afraid I can’t dispense any water for you right now.’ Now, to be fair, GE does give you a bypass cartridge that lets you get unfiltered water for free (you didn’t throw that thing away, did you?). But come on…”

Jack proceeds to explain how you can pop off the filter bypass and “try taping the thing directly into your fridge where it would normally meet up when the filter is installed.” If you’re able to get it in just the right spot, “you’re set for life,” says Jack. Alternatively, “you can tape it onto the front of an expired RPWFE GE water filter, install it backward, and then keep using it (again, not recommended for too much longer than six months). Or, you can tape it to the corresponding spot on a generic filter and reinstall it.”

Source: GE Fridges Won’t Dispense Ice Or Water Unless Your Water Filter ‘Authenticates’ Via RFID Chip – Slashdot

Sonos CEO apologizes for confusion, says legacy products will work ‘as long as possible’ – however long that is

Sonos CEO Patrick Spence just published a statement on the company’s website to try to clear up an announcement made earlier this week: on Tuesday, Sonos announced that it will cease delivering software updates and new features to its oldest products in May. The company said those devices should continue functioning properly in the near term, but it wasn’t enough to prevent an uproar from longtime customers, with many blasting Sonos for what they perceive as planned obsolescence. That frustration is what Spence is responding to today. “We heard you,” is how Spence begins the letter to customers. “We did not get this right from the start.”

Spence apologizes for any confusion and reiterates that the so-called legacy products will “continue to work as they do today.” Legacy products include the original Sonos Play:5, Zone Players, and Connect / Connect:Amp devices manufactured between 2011 and 2015.

“Many of you have invested heavily in your Sonos systems, and we intend to honor that investment for as long as possible.” Similarly, Spence pledges that Sonos will deliver bug fixes and security patches to legacy products “for as long as possible” — without any hard timeline. Most interesting, he says “if we run into something core to the experience that can’t be addressed, we’ll work to offer an alternative solution and let you know about any changes you’ll see in your experience.”

The letter from Sonos’ CEO doesn’t retract anything that the company announced earlier this week; Spence is just trying to be as clear as possible about what’s happening come May. Sonos has insisted that these products, some of which are a decade old, have been taken to their technological limits.

Spence again confirms that Sonos is planning a way for customers to fork any legacy devices they might own off of their main Sonos system with more modern speakers. (Sonos architected its system so that all devices share the same software. Once one product is no longer eligible for updates, the whole setup stops receiving them. This workaround is designed to avoid that problem.)

Source: Sonos CEO apologizes for confusion, says legacy products will work ‘as long as possible’ – The Verge

An Open Source eReader That’s Free of Corporate Restrictions Is Exactly What I Want Right Now

The Open Book Project was born from a contest held by Hackaday that encouraged hardware hackers to find innovative and practical uses for the Arduino-based Adafruit Feather development board ecosystem. The winning entry, the Open Book, has been designed and engineered from the ground up to be everything devices like the Amazon Kindle or Rakuten Kobo are not. There are no secrets inside the Open Book, no hidden chips designed to track and share your reading habits and preferences with a faceless corporation. With enough know-how, you could theoretically build and program your own Open Book from scratch, but as a result of winning the Take Flight With Feather contest, Digi-Key will be producing a small manufacturing run of the ereader, with pricing and availability still to be revealed.

The raw hardware isn’t as sleek or pretty as devices like the Kindle, but at the same time there’s a certain appeal to the exposed circuit board which features brief descriptions of various components, ports, and connections etched right onto the board itself for those looking to tinker or upgrade the hardware. Users are encouraged to design their own enclosures for the Open Book if they prefer, either through 3D-printed cases made of plastic, or rustic wooden enclosures created using laser cutting machines.

Text will look a little aliased on the Open Book’s E Ink display. (Photo: Hackaday.io)

With a resolution of just 400×300 pixels on its monochromatic E Ink display, text on the Open Book won’t look as pretty as it does on the Amazon Kindle Oasis which boasts a resolution of 1,680×1,264 pixels, but it should barely sip power from its built-in lithium-polymer rechargeable battery—a key benefit of using electronic paper.

The open source ereader—powered by an ARM Cortex M4 processor—will also include a headphone jack for listening to audio books, a dedicated flash chip for storing language files with specific character sets, and even a microphone that leverages a TensorFlow-trained AI model to intelligently process voice commands so you can quietly mutter “next!” to turn the page instead of reaching for one of the ereader’s physical buttons like a neanderthal. It can also be upgraded with additional functionality such as Bluetooth or wifi using Adafruit Feather expansion boards, but the most important feature is simply a microSD card slot allowing users to load whatever electronic text and ebook files they want. They won’t have to be limited by what a giant corporation approves for its online book store, or be subject to price-fixing schemes which, for some reason, have still resulted in electronic files costing more than printed books.

What remains to be seen is whether or not the Open Book Project can deliver an ereader that’s significantly cheaper than what Amazon or Rakuten has delivered to consumers. Both of those companies benefit from economies of scale, having sold millions of devices to date, and are able to throw their weight around when it comes to manufacturing costs and sourcing hardware. If the Open Book can be churned out for less than $50, it could potentially provide some solid competition to the limited ereader options currently out there.

Source: An Open Source eReader That’s Free of Corporate Restrictions Is Exactly What I Want Right Now

Body movement is achieved by molecular motors. A new ‘molecular nano-patterning’ technique allows us to study these motors, and reveals that some of them coordinate differently

Body movement, from the muscles in your arms to the neurons transporting those signals to your brain, relies on a massive collection of proteins called molecular motors.

Fundamentally, molecular motors are proteins that convert chemical energy into mechanical movement, and have different functions depending on their task. However, because they are so small, the exact mechanisms by which these molecules coordinate with each other are poorly understood.

Publishing in Science Advances, Kyoto University’s School of Engineering has found that two types of kinesin molecular motors have different properties of coordination. Collaborating with the National Institute of Information and Communications Technology, or NICT, the findings were made possible thanks to a new tool the team developed that parks individual motors on platforms thousands of times smaller than a .

“Kinesin is a protein that is involved in actions such as cell division, muscle contractions, and flagella movement. They move along these long protein filaments called microtubules,” explains first author Taikopaul Kaneko. “In the body, kinesins work as a team to [transport cargo] inside a cell, or allow the cell itself to move.”

To observe the coordination closely, the team constructed a device consisting of an array of gold nano-pillars 50 nanometers in diameter and spaced 200 to 1000 nanometers apart. For reference, a skin cell is about 30 micrometers, or 30,000 nanometers, in diameter.

“We then combined this array with self-assembled monolayers, or SAM, that immobilized a single kinesin molecule on each nano-pillar,” continues Kaneko. “This ‘nano-patterning’ method of motor proteins gives us control of the number and spacing of kinesins, allowing us to accurately calculate how they transport microtubules.”

The team evaluated two kinesins: kinesin-1 and kinesin-14, which are involved in intracellular transport and cell division, respectively. Their results showed that for kinesin-1, neither the number nor the spacing of the molecules changed the transport velocity of microtubules.

In contrast, kinesin-14 decreased transport velocity as the number of motors on a filament increased, but increased it as the spacing of the motors increased. The results indicate that while kinesin-1 molecules work independently, kinesin-14 motors interact with each other to tune the speed of transport.

Ryuji Yokokawa, who led the team, was surprised by the results: “Before we started this study, we thought that more motors led to faster transport and more force. But like most things in biology, it’s rarely that simple.”

The team will be using their new nano-patterning method to study the mechanics of other kinesins and different molecular motors.

“Humans have over 40 kinesins along with two other types of molecular motors called myosin and dynein. We can even modify our array to study how these motors act in a density gradient. Our results and this new tool are sure to expand our understanding of the various basic cellular processes fundamental to all life,” concludes Yokokawa.

Source: A new ‘molecular nano-patterning’ technique reveals that some molecular motors coordinate differently

Turns out that RNA affects DNA in multiple ways. Genes don’t just send messages to RNA, which then directs proteins to do stuff.

Rather than directions going one-way from DNA to RNA to proteins, the latest study shows that RNA itself modulates how DNA is transcribed—using a chemical process that is increasingly apparent to be vital to biology. The discovery has significant implications for our understanding of human disease and drug design.

[…]

The picture many of us remember learning in school is an orderly progression: DNA is transcribed into RNA, which then makes proteins that carry out the actual work of living cells. But it turns out there are a lot of wrinkles.

He’s team found that the molecules called messenger RNA, previously known as simple couriers that carry instructions from DNA to proteins, were actually making their own impact on protein production via a chemical reaction called methylation. He’s key breakthrough was showing that this methylation is reversible: not a one-time, one-way transaction, but something that can be erased and undone.

“That discovery launched us into a modern era of RNA modification research, which has really exploded in the last few years,” said He. “This is how so much of gene expression is critically affected. It impacts a wide range of biological processes—learning and memory, circadian rhythms, even something so fundamental as how a cell differentiates itself into, say, a blood cell versus a neuron.”

[…]

they began to see that messenger RNA methylation could not fully explain everything they observed.

This was mirrored in other experiments. “The data coming out of the community was saying there’s something else out there, something extremely important that we’re missing—that critically impacts many early development events, as well as human diseases such as cancer,” he said.

He’s team discovered that a group of RNAs called chromosome-associated regulatory RNAs, or carRNAs, was using the same methylation process, but these RNAs do not code proteins and are not directly involved in translation. Instead, they controlled how DNA itself was stored and transcribed.

“This has major implications in basic biology,” He said. “It directly affects gene transcriptions, and not just a few of them. It could induce global chromatin change and affects transcription of 6,000 genes in the cell line we studied.”

He sees major implications in biology, especially in human health—everything from identifying the genetic basis of disease to better treating patients.

“There are several biotech companies actively developing small molecule inhibitors of RNA methylation, but right now, even if we successfully develop therapies, we don’t have a full mechanical picture for what’s going on,” he said. “This provides an enormous opportunity to help guide disease indication for testing inhibitors and suggest new opportunities for pharmaceuticals.”

Source: Surprise discovery shakes up our understanding of gene expression

Sorry to be blunt about this… Open AWS S3 storage bucket just made 30,000 potheads’ privacy go up in smoke

Personal records, including scans of ID cards and purchase details, for more than 30,000 people were exposed to the public internet from this unsecured cloud silo, we’re told. In addition to full names and pictures of customer ID cards, the 85,000-file collection is said to include email and mailing addresses, phone numbers, dates of birth, and the maximum amount of cannabis an individual is allowed to purchase. All available to download, unencrypted, if you knew where to look.

Because many US states have strict record-keeping requirements written into their marijuana legalization laws, dispensaries have to manage a certain amount of customer and inventory information. In the case of THSuite, those records were put into an S3 bucket that was left accessible to the open internet – including the Shodan.io search engine.

The bucket was taken offline last week after it was discovered on December 24, and its insecure configuration was reported to THSuite on December 26 and Amazon on January 7, according to vpnMentor. The S3 bucket’s data belonged to dispensaries in Maryland, Ohio, and Colorado, we’re told.

Source: Sorry to be blunt about this… Open AWS S3 storage bucket just made 30,000 potheads’ privacy go up in smoke • The Register

These VIPs May Want to Make Sure Mohammed bin Salman Didn’t Hack Them

In early 2018, Saudi Crown Prince Mohammed bin Salman took a sweeping tour of the U.S. as part of a strategy to rebrand Saudi Arabia’s ruling monarchy as a modernizing force and pull off his “Vision 2030” plan—hobnobbing with a list of corporate execs and politicians that reads like a who’s who list of the U.S. elite.

[…]

Bezos was one of the individuals that bin Salman met with during his trip to the U.S., and at the time, Amazon was considering investments in Saudi Arabia. Those plans went south after the Khashoggi murder, but a quick scan of the crown prince’s 2018 itinerary reveals other corporate leaders and politicians eager to get into his good graces.

These people may want to have their phones examined.

According to the New York Times, the crown prince started off with a meeting in D.C. with Donald Trump and his son-in-law Jared Kushner (the latter of whom may have real reason to worry due to his WhatsApp conversations with bin Salman). Politicians who met with him include Vice President Mike Pence, then-International Monetary Fund chief Christine Lagarde, and United Nations Secretary-General António Guterres, the Guardian reported. He also met with former Senator John Kerry and former President Bill Clinton, as well as the two former President Bushes.

While touting the importance of investment in Saudi Arabian projects including Neom, bin Salman’s plans for some kind of wonder city, the crown prince met with 40 U.S. business leaders. He also met with Goldman Sachs CEO Lloyd Blankfein and former New York mayor Michael Bloomberg, a 2020 presidential candidate, in New York.

One-on-one meetings included hanging out with Microsoft CEO Satya Nadella during the Seattle wing of the crown prince’s trip, as well as Microsoft co-founder Bill Gates.

[…]

Rupert Murdoch, as well as a bevy of prominent Hollywood personalities including Disney CEO Bob Iger, Universal film chairman Jeff Shell, Fox executive Peter Rice and film studio chief Stacey Snider, according to the Hollywood Reporter. Also present were Warner Bros. CEO Kevin Tsujihara, Nat Geo CEO Courtney Monroe, filmmakers James Cameron and Ridley Scott, and actors Morgan Freeman, Michael Douglas, and Dwayne “The Rock” Johnson.

During another leg of his trip in San Francisco, bin Salman met with Apple CEO Tim Cook as well as chief operating officer Jeff Williams, head of environment, policy, and social initiatives Lisa Jackson, and former retail chief Angela Ahrendts.

But to be fair, he also met Google co-founders Larry Page and Sergey Brin as well as current CEO Sundar Pichai.

[…]

ominous data analytics firm Palantir and met with its founder, venture capitalist Peter Thiel.

[…]

venture capitalists, including Andreessen Horowitz co-founder Marc Andreessen, Y Combinator chairman Sam Altman, and Sun Microsystems co-founder Vinod Khosla, according to Business Insider. Photos and the New York Times show that LinkedIn co-founder Reid Hoffman was also present.

Finally, bin Salman also met with Virgin Group founder Richard Branson and Magic Leap CEO Rony Abovitz.

During an earlier visit to the states in June 2016, bin Salman met with President Barack Obama before he traveled to San Francisco. At that time the crown prince visited Facebook and met CEO Mark Zuckerberg

[…]

At that time, the crown prince also met with Khan Academy CEO Salman Khan and then-Uber CEO Travis Kalanick,

[…]

then-SeaWorld CEO Joel Manby

Source: These VIPs May Want to Make Sure Mohammed bin Salman Didn’t Hack Them

Clearview has scraped social media sites illegally and against their ToS, has all your pictures in a massive database (who knows how secure that is?) along with a face-recognition AI, and is selling access to cops – and who knows who else.

What if a stranger could snap your picture on the sidewalk then use an app to quickly discover your name, address and other details? A startup called Clearview AI has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a Saturday report in The New York Times.

The app, says the Times, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it’s scraped off Facebook, Venmo, YouTube and other sites. It then serves up matches, along with links to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.
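Clearview has not published how its matching actually works, but face-search systems of this kind typically reduce each photo to a numeric “embedding” vector and return the database entries whose vectors sit closest to the query’s. A toy sketch of that nearest-neighbour step (the tiny three-dimensional vectors are invented for illustration; real embeddings have hundreds of dimensions):

```rust
// Illustrative nearest-neighbour search over face "embeddings".
// This is a generic sketch of how photo-matching search engines tend
// to work, not Clearview's actual (unpublished) method.
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

/// Index of the database vector most similar to the query.
fn best_match(query: &[f64], database: &[Vec<f64>]) -> usize {
    database
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| {
            cosine_similarity(query, a)
                .partial_cmp(&cosine_similarity(query, b))
                .unwrap()
        })
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    let db = vec![
        vec![0.9, 0.1, 0.0], // person A
        vec![0.0, 1.0, 0.2], // person B
    ];
    let query = vec![0.85, 0.15, 0.05]; // new photo, resembling person A
    println!("best match: entry {}", best_match(&query, &db)); // entry 0
}
```

The scale problem is the linear scan: against 3 billion entries, production systems replace this loop with approximate nearest-neighbour indexes, but the principle is the same.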

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t currently available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.”

Source: Clearview app lets strangers find your name, info with snap of a photo, report says – CNET

Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at the photos that are uploaded, the company appeared to be aware that Kashmir Hill (the Times journalist reporting the story) was having police search for her face as part of her reporting:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.

One expert quoted by The Times said that the amount of money involved with these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

Source: The Verge

So Clearview has you, even if it violates TOS. How to stop the next guy from getting you in FB – maybe.

It should come as little surprise that any content you offer to the web for public consumption has the potential to be scraped and misused by anyone clever enough to do it. And while that doesn’t make this weekend’s report from The New York Times any less damning, it’s a great reminder about how important it is to really go through the settings for your various social networks and limit how your content is, or can be, accessed by anyone.

I won’t get too deep into the Times’ report; it’s worth reading on its own, since it involves a company (Clearview AI) scraping more than three billion images from millions of websites, including Facebook, and creating a facial-recognition app that does a pretty solid job of identifying people using images from this massive database.

Even though Clearview’s scraping techniques technically violate the terms of service on a number of websites, that hasn’t stopped the company from acquiring images en masse. And it keeps whatever it finds, which means that turning all your online data private isn’t going to help if Clearview has already scanned and grabbed your photos.

Still, something is better than nothing. On Facebook, likely the largest stash of your images, you’re going to want to visit Settings > Privacy and look for the option that asks: “Do you want search engines outside of Facebook to link to your profile?”

Turn that off, and Clearview won’t be able to grab your images. That’s not the setting I would have expected to use, I confess, which makes me want to go through all of my social networks and rethink how the information I share with them flows out to the greater web.

Lock down your Facebook even more with these settings

Since we’re already here, it’s worth spending a few minutes wading through Facebook’s settings and making sure as much of your content is set to friends-only as possible. That includes changing “Who can see your future posts” to “friends,” using the “Limit Past Posts” option to change everything you’ve previously posted to friends-only, and making sure that only you can see your friends list—to prevent any potential scraping and linking that some third-party might attempt. Similarly, make sure only your friends (or friends of friends) can look you up via your email address or phone number. (You never know!)

You should then visit the Timeline and Tagging settings page and make a few more changes. That includes only allowing friends to see what other people post on your timeline, as well as posts you’re tagged in. And because I’m a bit sensitive about all the crap people tag me in on Facebook, I’d turn on the “Review” options, too. That won’t help your account from being scraped, but it’s a great way to exert more control over your timeline.


Finally, even though it also doesn’t prevent companies from scraping your account, pull up the Public posts section of Facebook’s settings page and limit who is allowed to follow you (if you desire). You should also restrict who can comment on or like your public information, like posts or other details about your life you share openly on the service.


Once I fix Facebook, then what?

Here’s the annoying part. Were I you, I’d take an afternoon or evening and write out all the different places I typically share snippets of my life online. For most people, that’s probably a handful of social services: Facebook, Instagram, Twitter, YouTube, Flickr, et cetera.

Once you’ve created your list, I’d dig deep into the settings of each service and see what options you have, if any, for limiting the availability of your content. This might run contrary to how you use the service—if you’re trying to gain lots of Instagram followers, for example, locking your profile to “private” and requiring potential followers to request access might slow your attempts to become the next big Insta-star. However, it should also prevent anyone with a crafty scraping utility from mass-downloading your photos (and associating them with you, either through some fancy facial-recognition tech, or by linking them to your account).

Source: Change These Facebook Settings to Protect Your Photos From Facial Recognition Software

‘I am done with open source’: Developer of Rust Actix web framework quits, appoints new maintainer

The maintainer of the Actix web framework, written in Rust, has quit the project after complaining of a toxic web community – although over 100 Actix users have since signed a letter of support for him.

Actix Web was developed by Nikolay Kim, who is also a senior software engineer at Microsoft, though the Actix project is not an official Microsoft project. Actix Web is based on Actix, a framework for Rust based on the Actor model, also developed by Kim.

The web framework is important to the Rust community partly because it addresses a common use case (developing web applications) and partly because of its outstanding performance. On some tests, Actix tops the Techempower benchmarks.

The project is open source and while it is popular, there has been some unhappiness among users about its use of “unsafe” code. In Rust, there is the concept of safe and unsafe. Safe code is protected from common bugs (and more importantly, security vulnerabilities) arising from issues like variables which point to uninitialized memory, or variables which are used after the memory allocated to them has been freed, or attempting to write data to a variable which exceeds the memory allocated. Code in Rust is safe by default, but the language also supports unsafe code, which can be useful for interoperability or to improve performance.
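A minimal illustration of the trade-off just described: Rust’s safe indexing API pays for a bounds check, while the `unsafe` variant skips it and shifts the burden of correctness onto the programmer.

```rust
// Safe vs. unsafe indexing: a minimal sketch of the distinction
// debated in the Actix community.
fn main() {
    let data = [10, 20, 30];

    // Safe: the index is bounds-checked; an out-of-range access
    // returns None instead of reading arbitrary memory.
    assert_eq!(data.get(1), Some(&20));
    assert_eq!(data.get(99), None);

    // Unsafe: get_unchecked skips the bounds check. This can be faster,
    // but the compiler no longer guarantees memory safety. An
    // out-of-range index here would be undefined behavior - exactly
    // the class of bug Rust's safe subset is designed to rule out.
    let second = unsafe { *data.get_unchecked(1) };
    assert_eq!(second, 20);
}
```

This is why unsafe blocks in a public framework attract scrutiny: each one is a spot where the usual compiler guarantees are suspended and correctness rests on the author’s reasoning alone.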

Actix is top of the Techempower benchmarks on some tests

There is extensive use of unsafe code in Actix, leading to debate about what should be fixed. Kim was not always receptive to proposed changes. Most recently, developer Sergey Davidoff posted about code which “violates memory safety by handing out multiple mutable references to the same data, which can lead to, eg, a use-after-free vulnerability.”
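To illustrate the rule at issue (a generic sketch, not Actix’s actual code): safe Rust rejects two simultaneous mutable references to the same data at compile time, and `RefCell` enforces the same exclusive-access invariant at runtime.

```rust
use std::cell::RefCell;

// The aliasing invariant behind the reported bug: at most one mutable
// reference to a piece of data may exist at a time. The borrow checker
// enforces this statically; RefCell enforces it dynamically, which lets
// us demonstrate it in running code.
fn main() {
    let shared = RefCell::new(vec![1, 2, 3]);

    // One mutable borrow is fine.
    let first = shared.try_borrow_mut();
    assert!(first.is_ok());

    // A second simultaneous mutable borrow is rejected. Unsafe code that
    // hands out multiple &mut references to the same data bypasses this
    // check entirely, opening the door to use-after-free bugs.
    let second = shared.try_borrow_mut();
    assert!(second.is_err());
}
```

The point of the debate was that `unsafe` blocks can silently break this invariant with no compile-time or runtime diagnostic at all.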

Davidoff also stated that: “I have reported the issue to the maintainers, but they have refused to investigate it,” referring to a bug report which Kim deleted.

Debate on this matter on the Reddit Rust forum became heated and personal, the key issue being not so much the existence of real or potential vulnerabilities, but Kim’s habit of ignoring or deleting some reports. Kim decided to quit. On January 17th, he posted an “Actix project postmortem”, defending his position and complaining about the community response.

“Be[ing] a maintainer of large open source project is not a fun task. You[‘re] alway[s] face[d] with rude[ness] and hate, everyone knows better how to build software, nobody wants to do homework and read docs and think a bit and very few provide any help. … You could notice after each unsafe shitstorm, i started to spend less and less time with the community. … Nowadays supporting actix project is not fun, and be[ing] part of rust community is not fun as well. I am done with open source.”

Kim said that he did not ignore or delete issues arbitrarily, but only because he felt he had a better or more creative solution than the one proposed – while also acknowledging that the “removing issue was a stupid idea.” He also threatened to “make [Actix] repos private and then delete them.”

Over on the official Actix forum, he said he was “highly sceptical about fork viability” perhaps because, at least according to him, “no one showed any sign of project architecture understanding.”

So long, and good luck

Since then, matters have improved. The Github repository was restored and Kim said:

I realized, a lot of people depend on actix. And it would be unfair to just delete repos. I promote @JohnTitor to project leader. He did very good job helping me for the last year. I hope new community of developers emerge. And good luck!

In addition, Kim has started winning support from many community members, as evidenced by a letter with over 100 signatories thanking him and stating: “We are extremely disappointed at the level of abuse directed towards you.”

The episode demonstrates that expert developers are often not expert at managing the human side of projects, which can become significant as a project grows. It also shows that some contributors and users do not behave well in online interactions, forgetting the extent of the work done by volunteers, for which, it’s worth noting, they have paid nothing.

Positive recent developments may mean that Actix development continues, that bugs and security vulnerabilities are fixed, and that its community gets a better handle on how to proceed constructively. ®

Source: ‘I am done with open source’: Developer of Rust Actix web framework quits, appoints new maintainer • The Register

Netgear leaves admin interface’s TLS cert and private key in router firmware

Netgear left in its router firmware key ingredients needed to intercept and tamper with secure connections to its equipment’s web-based admin interfaces.

Specifically, valid, signed TLS certificates with their private keys were embedded in the firmware, which was available for anyone to download free of charge, and also shipped with Netgear devices. With these keys, a miscreant-in-the-middle can present an HTTPS cert that browsers trust, and so eavesdrop on and alter encrypted connections to the routers’ built-in web-based control panel.

In other words, the data can be used to potentially hijack people’s routers. It’s partly an embarrassing leak, and partly indicative of manufacturers trading off security, user friendliness, cost, and effort.

Security mavens Nick Starke and Tom Pohl found the materials on January 14, and publicly disclosed their findings five days later, over the weekend.

The blunder is a result of Netgear’s approach to security and user convenience. When configuring their kit, owners of Netgear equipment are expected to visit https://routerlogin.net or https://routerlogin.com. The network’s router tries to ensure those domain names resolve to the device’s IP address on the local network. So, rather than have people enter 192.168.1.1 or similar, they can just use that memorable domain name.

To establish an HTTPS connection, and avoid complaints from browsers about using insecure HTTP and untrusted certs, the router has to produce a valid HTTPS cert for routerlogin.net or routerlogin.com that is trusted by browsers. To cryptographically prove the cert is legit when a connection is established, the router needs to use the certificate’s private key. This key is stored unsecured in the firmware, allowing anyone to extract and abuse it.

Netgear doesn’t want to provide an HTTP-only admin interface, to avoid warnings from browsers of insecure connections and to thwart network eavesdroppers, we presume. But if it uses HTTPS, the built-in web server needs to prove its cert is legit, and thus needs its private key. So either Netgear switches to using per-device private-public keys, or stores the private key in a secure HSM in the router, or just uses HTTP, or it has to come up with some other solution. You can follow that debate here.

Source: Leave your admin interface’s TLS cert and private key in your router firmware in 2020? Just Netgear things • The Register

Immune cell which kills most cancers discovered by accident by Welsh scientists in major breakthrough 

A new type of immune cell which kills most cancers has been discovered by accident by British scientists, in a finding which could herald a major breakthrough in treatment.

Researchers at Cardiff University were analysing blood from a bank in Wales, looking for immune cells that could fight bacteria, when they found an entirely new type of T-cell.

That new immune cell carries a never-before-seen receptor which acts like a grappling hook, latching on to most human cancers, while ignoring healthy cells.

In laboratory studies, immune cells equipped with the new receptor were shown to kill lung, skin, blood, colon, breast, bone, prostate, ovarian, kidney and cervical cancer.

Professor Andrew Sewell, lead author on the study and an expert in T-cells from Cardiff University’s School of Medicine, said it was “highly unusual” to find a cell with such broad cancer-fighting ability, and that it raised the prospect of a universal therapy.

“This was a serendipitous finding, nobody knew this cell existed,” Prof Sewell told The Telegraph.

“Our finding raises the prospect of a ‘one-size-fits-all’ cancer treatment, a single type of T-cell that could be capable of destroying many different types of cancers across the population. Previously nobody believed this could be possible.”

[…]

The new cell attaches to a molecule on cancer cells called MR1, which does not vary in humans.

It means that not only would the treatment work for most cancers, but it could be shared between people, raising the possibility that banks of the special immune cells could be created for instant ‘off-the-shelf’ treatment in future.

When researchers injected the new immune cells into mice bearing human cancer and with a human immune system, they found ‘encouraging’ cancer-clearing results.

And they showed that T-cells of skin cancer patients, which were modified to express the new receptor, could destroy not only the patient’s own cancer cells, but also other patients’ cancer cells in the laboratory.

[…]

Professor Awen Gallimore, of the University’s division of infection and immunity and cancer immunology lead for the Wales Cancer Research Centre, added: “If this transformative new finding holds up, it will lay the foundation for a ‘universal’ T-cell medicine, mitigating against the tremendous costs associated with the identification, generation and manufacture of personalised T-cells.

“This is truly exciting and potentially a great step forward for the accessibility of cancer immunotherapy.”

Commenting on the study, Daniel Davis, Professor of Immunology at the University of Manchester, said it was an exciting discovery which opened the door to cellular therapies being used for more people.

“We are in the midst of a medical revolution harnessing the power of the immune system to tackle cancer.  But not everyone responds to the current therapies and there can be harmful side-effects.

“The team have convincingly shown that, in a lab dish, this type of immune cell reacts against a range of different cancer cells.

“We still need to understand exactly how it recognises and kills cancer cells, while not responding to normal healthy cells.”

The research was published in the journal Nature Immunology.

Source: Immune cell which kills most cancers discovered by accident by British scientists in major breakthrough 

Local water availability is permanently reduced after planting forests

River flow is reduced in areas where forests have been planted and does not recover over time, a new study has shown. Rivers in some regions can completely disappear within a decade. This highlights the need to consider the impact on regional water availability, as well as the wider climate benefit, of tree-planting plans.

“Reforestation is an important part of tackling climate change, but we need to carefully consider the best places for it. In some places, changes to water availability will completely change the local cost-benefits of tree-planting programmes,” said Laura Bentley, a plant scientist in the University of Cambridge Conservation Research Institute, and first author of the report.

Planting large areas of trees has been suggested as one of the best ways of reducing atmospheric carbon dioxide levels, since trees absorb and store this greenhouse gas as they grow. While it has long been known that planting trees reduces the amount of water flowing into nearby rivers, there has previously been no understanding of how this effect changes as forests age.

The study looked at 43 sites across the world where forests have been established, and used river flow as a measure of water availability in the region. It found that within five years of planting trees, river flow had reduced by an average of 25%. By 25 years, rivers had gone down by an average of 40% and in a few cases had dried up entirely. The biggest percentage reductions in water availability were in regions of Australia and South Africa.

“River flow does not recover after planting trees, even after many years, once disturbances in the catchment and the effects of climate are accounted for,” said Professor David Coomes, Director of the University of Cambridge Conservation Research Institute, who led the study.

Published in the journal Global Change Biology, the research showed that the type of land where trees are planted determines the degree of impact they have on local water availability. Trees planted on natural grassland where the soil is healthy decrease river flow significantly. On land previously degraded by agriculture, establishing forest helps to repair the soil so it can hold more water and decreases nearby river flow by a lesser amount.

Counterintuitively, the effect of trees on river flow is smaller in drier years than wetter ones. When trees are drought-stressed they close the pores on their leaves to conserve water, and as a result draw up less water from the soil. In wetter years the trees use more water from the soil, and also catch the rainwater in their leaves.

“Climate change will affect water availability around the world,” said Bentley. “By studying how forestation affects river flow, we can work to minimise any local consequences for people and the environment.”

Source: Local water availability is permanently reduced after planting forests

Ultrafast camera takes 1 trillion frames per second of transparent objects and phenomena, can photograph light pulses

A little over a year ago, Caltech’s Lihong Wang developed the world’s fastest camera, a device capable of taking 10 trillion pictures per second. It is so fast that it can even capture light traveling in slow motion.

But sometimes just being quick is not enough. Indeed, not even the fastest camera can take pictures of things it cannot see. To that end, Wang, Bren Professor of Medical Engineering and Electrical Engineering, has developed a camera that can take up to 1 trillion pictures per second of transparent objects. A paper about the camera appears in the January 17 issue of the journal Science Advances.

The technology, which Wang calls phase-sensitive compressed ultrafast photography (pCUP), can take video not just of transparent objects but also of more ephemeral things like shockwaves and possibly even of the signals that travel through neurons.

Wang explains that his new imaging system combines the high-speed photography system he previously developed with an old technology, phase-contrast microscopy, that was designed to allow better imaging of objects that are mostly transparent such as cells, which are mostly water.

[…]

Wang says the technology, though still early in its development, may ultimately have uses in many fields, including physics, biology, or chemistry.

“As signals travel through neurons, there is a minute dilation of nerve fibers that we hope to see. If we have a network of neurons, maybe we can see their communication in real time,” Wang says. In addition, he says, because temperature is known to change phase contrast, the system “may be able to image how a flame front spreads in a combustion chamber.”

The paper describing pCUP is titled “Picosecond-resolution phase-sensitive imaging of transparent objects in a single shot.”

Source: Ultrafast camera takes 1 trillion frames per second of transparent objects and phenomena

HP Remotely Disables a Customer’s Printer Until He Joins Company’s Monthly Subscription Service

A Twitter user’s complaint last week, in which he produced photo evidence of HP warning him that his ink cartridges would be disabled until he starts paying for HP’s Instant Ink monthly subscription service, has gone viral on social media.

Ryan Sullivan, the user who made the complaint, said he only discovered the warning after cancelling a random HP subscription, which had been charging him $4.99 a month for “over a year.” “Cartridge cannot be used until printer is enrolled in HP Instant Ink,” Sullivan was informed by an error message.

Source: HP Remotely Disables a Customer’s Printer Until He Joins Company’s Monthly Subscription Service – Slashdot