‘I am done with open source’: Developer of Rust Actix web framework quits, appoints new maintainer

The maintainer of the Actix web framework, written in Rust, has quit the project after complaining of a toxic web community – although over 100 Actix users have since signed a letter of support for him.

Actix Web was developed by Nikolay Kim, who is also a senior software engineer at Microsoft, though the Actix project is not an official Microsoft project. Actix Web is based on Actix, a framework for Rust based on the Actor model, also developed by Kim.

The web framework is important to the Rust community partly because it addresses a common use case (developing web applications) and partly because of its outstanding performance. For some tests, Actix tops the Techempower benchmarks.

The project is open source and while it is popular, there has been some unhappiness among users about its use of “unsafe” code. In Rust, there is the concept of safe and unsafe. Safe code is protected from common bugs (and, more importantly, security vulnerabilities) arising from issues like pointers to uninitialized memory, use of memory after it has been freed, or writes that overrun the memory allocated to a variable. Code in Rust is safe by default, but the language also supports unsafe code, which can be useful for interoperability or to improve performance.
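To make the distinction concrete, here is a minimal, self-contained sketch (plain standard-library Rust, not Actix code) of what the unsafe keyword buys: inside an `unsafe` block the programmer may, for example, dereference a raw pointer, and the compiler stops checking that the pointer is valid.

```rust
// Minimal illustration of safe vs unsafe Rust (unrelated to Actix's actual code).
fn main() {
    let values = vec![10, 20, 30];

    // Safe Rust: a checked lookup can never read out of bounds.
    assert_eq!(values.get(99), None);

    // Unsafe Rust: dereferencing a raw pointer is only permitted inside `unsafe`.
    // The compiler no longer verifies the pointer is valid; that burden shifts
    // to the programmer, which is where bugs and vulnerabilities can creep in.
    let ptr: *const i32 = &values[0];
    let first = unsafe { *ptr };
    assert_eq!(first, 10);
}
```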

Actix is top of the Techempower benchmarks on some tests

There is extensive use of unsafe code in Actix, leading to debate about what should be fixed. Kim was not always receptive to proposed changes. Most recently, developer Sergey Davidoff posted about code which “violates memory safety by handing out multiple mutable references to the same data, which can lead to, eg, a use-after-free vulnerability.”
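The class of bug Davidoff describes can be sketched in a few lines. The snippet below is illustrative only and is not Actix’s actual code: it uses `unsafe` to mint two mutable references to the same value, something safe Rust forbids precisely because it enables hazards such as use-after-free.

```rust
// Illustrative only: how `unsafe` can hand out aliased mutable references.
// Creating two live `&mut` to the same data is already undefined behaviour.
fn split_unsound<T>(value: &mut T) -> (&mut T, &mut T) {
    let ptr = value as *mut T;
    unsafe { (&mut *ptr, &mut *ptr) }
}

fn main() {
    let mut buffer = vec![1u8, 2, 3];
    let (a, b) = split_unsound(&mut buffer);

    let first = &a[0];                           // points into the Vec's heap allocation
    b.extend(std::iter::repeat(0u8).take(1024)); // may reallocate and free that allocation
    // Reading `first` now would be a use-after-free; the borrow checker cannot
    // catch it because it does not know `a` and `b` alias the same Vec.
    let _ = first;
}
```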

Davidoff also stated that: “I have reported the issue to the maintainers, but they have refused to investigate it,” referring to a bug report which Kim deleted.

Debate on this matter on the Reddit Rust forum became heated and personal, the key issue being not so much the existence of real or potential vulnerabilities, but Kim’s habit of ignoring or deleting some reports. Kim decided to quit. On January 17th, he posted an “Actix project postmortem”, defending his position and complaining about the community response.

“Be[ing] a maintainer of large open source project is not a fun task. You[‘re] alway[s] face[d] with rude[ness] and hate, everyone knows better how to build software, nobody wants to do homework and read docs and think a bit and very few provide any help. … You could notice after each unsafe shitstorm, i started to spend less and less time with the community. … Nowadays supporting actix project is not fun, and be[ing] part of rust community is not fun as well. I am done with open source.”

Kim said that he did not ignore or delete issues arbitrarily, but only because he felt he had a better or more creative solution than the one proposed – while also acknowledging that the “removing issue was a stupid idea.” He also threatened to “make [Actix] repos private and then delete them.”

Over on the official Actix forum, he said he was “highly sceptical about fork viability” perhaps because, at least according to him, “no one showed any sign of project architecture understanding.”

So long, and good luck

Since then, matters have improved. The Github repository was restored and Kim said:

I realized, a lot of people depend on actix. And it would be unfair to just delete repos. I promote @JohnTitor to project leader. He did very good job helping me for the last year. I hope new community of developers emerge. And good luck!

In addition, Kim has started winning support from many community members, as evidenced by a letter with over 100 signatories thanking him and stating: “We are extremely disappointed at the level of abuse directed towards you.”

The episode demonstrates that expert developers are often not expert at managing the human side of a project, which can become significant as the project grows. It also shows how some contributors and users fail to behave well in online interactions, forgetting the extent of the work done by volunteers and for which, it’s worth noting, they have paid nothing.

Positive recent developments may mean that Actix development continues, that bugs and security vulnerabilities are fixed, and that its community gets a better handle on how to proceed constructively. ®

Source: ‘I am done with open source’: Developer of Rust Actix web framework quits, appoints new maintainer • The Register

Netgear leaves admin interface’s TLS cert and private key in router firmware

Netgear left in its router firmware key ingredients needed to intercept and tamper with secure connections to its equipment’s web-based admin interfaces.

Specifically, valid, signed TLS certificates with private keys were embedded in the firmware, which anyone could download for free and which also shipped with Netgear devices. This material can be used to present HTTPS certs that browsers trust, and thus to mount miscreant-in-the-middle attacks that eavesdrop on and alter encrypted connections to the routers’ built-in web-based control panels.

In other words, the data can be used to potentially hijack people’s routers. It’s partly an embarrassing leak, and partly indicative of manufacturers trading off security, user friendliness, cost, and effort.

Security mavens Nick Starke and Tom Pohl found the materials on January 14, and publicly disclosed their findings five days later, over the weekend.

The blunder is a result of Netgear’s approach to security and user convenience. When configuring their kit, owners of Netgear equipment are expected to visit https://routerlogin.net or https://routerlogin.com. The router tries to ensure those domain names resolve to the device’s IP address on the local network, so rather than have people enter 192.168.1.1 or similar, they can just use a memorable domain name.
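A quick way to see that convenience mechanism in action is to resolve the name yourself. A minimal sketch using only the Rust standard library (port 443 is supplied only because the lookup API requires a port):

```rust
// Resolve routerlogin.net: on a Netgear LAN this should come back as the
// router's private address (e.g. 192.168.1.1); elsewhere it resolves to
// whatever Netgear publishes publicly.
use std::net::ToSocketAddrs;

fn main() -> std::io::Result<()> {
    for addr in "routerlogin.net:443".to_socket_addrs()? {
        println!("routerlogin.net -> {}", addr.ip());
    }
    Ok(())
}
```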

To establish an HTTPS connection, and avoid complaints from browsers about using insecure HTTP and untrusted certs, the router has to produce a valid HTTPS cert for routerlogin.net or routerlogin.com that is trusted by browsers. To cryptographically prove the cert is legit when a connection is established, the router needs to use the certificate’s private key. This key is stored unsecured in the firmware, allowing anyone to extract and abuse it.
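Because the key sits unprotected in a firmware image anyone can download, finding it is largely a matter of scanning the image for standard PEM markers. The sketch below illustrates the idea; the file name is hypothetical and the researchers’ actual method may have differed.

```rust
// Scan a firmware image for PEM-encoded certificate and private-key blocks.
use std::fs;

fn main() -> std::io::Result<()> {
    let image = fs::read("firmware.bin")?; // hypothetical file name
    let text = String::from_utf8_lossy(&image).into_owned();

    for begin in ["-----BEGIN CERTIFICATE-----", "-----BEGIN RSA PRIVATE KEY-----"] {
        if let Some(start) = text.find(begin) {
            let end = begin.replace("BEGIN", "END");
            if let Some(rel) = text[start..].find(end.as_str()) {
                let block = &text[start..start + rel + end.len()];
                println!("found a {}-byte PEM block at offset {}", block.len(), start);
            }
        }
    }
    Ok(())
}
```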

Netgear doesn’t want to provide an HTTP-only admin interface, to avoid warnings from browsers of insecure connections and to thwart network eavesdroppers, we presume. But if it uses HTTPS, the built-in web server needs to prove its cert is legit, and thus needs its private key. So either Netgear switches to using per-device private-public keys, or stores the private key in a secure HSM in the router, or just uses HTTP, or it has to come up with some other solution. You can follow that debate here.

Source: Leave your admin interface’s TLS cert and private key in your router firmware in 2020? Just Netgear things • The Register

Immune cell which kills most cancers discovered by accident by Welsh scientists in major breakthrough 

A new type of immune cell which kills most cancers has been discovered by accident by British scientists, in a finding which could herald a major breakthrough in treatment.

Researchers at Cardiff University were analysing blood from a bank in Wales, looking for immune cells that could fight bacteria, when they found an entirely new type of T-cell.

That new immune cell carries a never-before-seen receptor which acts like a grappling hook, latching on to most human cancers, while ignoring healthy cells.

In laboratory studies, immune cells equipped with the new receptor were shown to kill lung, skin, blood, colon, breast, bone, prostate, ovarian, kidney and cervical cancer.

Professor Andrew Sewell, lead author on the study and an expert in T-cells from Cardiff University’s School of Medicine, said it was “highly unusual” to find a cell with such broad cancer-fighting capabilities, and that it raised the prospect of a universal therapy.

“This was a serendipitous finding, nobody knew this cell existed,” Prof Sewell told The Telegraph.

“Our finding raises the prospect of a ‘one-size-fits-all’ cancer treatment, a single type of T-cell that could be capable of destroying many different types of cancers across the population. Previously nobody believed this could be possible.”

[…]

The new cell attaches to a molecule on cancer cells called MR1, which does not vary in humans.

It means that not only would the treatment work for most cancers, but it could be shared between people, raising the possibility that banks of the special immune cells could be created for instant ‘off-the-shelf’ treatment in future.

When researchers injected the new immune cells into mice bearing human cancer and with a human immune system, they found ‘encouraging’ cancer-clearing results.

And they showed that T-cells of skin cancer patients, which were modified to express the new receptor, could destroy not only the patient’s own cancer cells, but also other patients’ cancer cells in the laboratory.

[…]

Professor Awen Gallimore, of the University’s division of infection and immunity and cancer immunology lead for the Wales Cancer Research Centre, added: “If this transformative new finding holds up, it will lay the foundation for a ‘universal’ T-cell medicine, mitigating against the tremendous costs associated with the identification, generation and manufacture of personalised T-cells.

“This is truly exciting and potentially a great step forward for the accessibility of cancer immunotherapy.”

Commenting on the study, Daniel Davis, Professor of Immunology at the University of Manchester, said it was an exciting discovery which opened the door to cellular therapies being used for more people.

“We are in the midst of a medical revolution harnessing the power of the immune system to tackle cancer.  But not everyone responds to the current therapies and there can be harmful side-effects.

“The team have convincingly shown that, in a lab dish, this type of immune cell reacts against a range of different cancer cells.

“We still need to understand exactly how it recognises and kills cancer cells, while not responding to normal healthy cells.”

The research was published in the journal Nature Immunology.

Source: Immune cell which kills most cancers discovered by accident by British scientists in major breakthrough 

Local water availability is permanently reduced after planting forests

River flow is reduced in areas where forests have been planted and does not recover over time, a new study has shown. Rivers in some regions can completely disappear within a decade. This highlights the need to consider the impact on regional water availability, as well as the wider climate benefit, of tree-planting plans.

“Reforestation is an important part of tackling climate change, but we need to carefully consider the best places for it. In some places, changes to water availability will completely change the local cost-benefits of tree-planting programmes,” said Laura Bentley, a plant scientist in the University of Cambridge Conservation Research Institute, and first author of the report.

Planting large areas of trees has been suggested as one of the best ways of reducing atmospheric carbon dioxide levels, since trees absorb and store this greenhouse gas as they grow. While it has long been known that planting trees reduces the amount of water flowing into nearby rivers, there has previously been no understanding of how this effect changes as forests age.

The study looked at 43 sites across the world where forests have been established, and used river flow as a measure of water availability in the region. It found that within five years of planting trees, river flow had reduced by an average of 25%. By 25 years, rivers had gone down by an average of 40% and in a few cases had dried up entirely. The biggest percentage reductions in water availability were in regions in Australia and South Africa.

“River flow does not recover after planting trees, even after many years, once disturbances in the catchment and the effects of climate are accounted for,” said Professor David Coomes, Director of the University of Cambridge Conservation Research Institute, who led the study.

Published in the journal Global Change Biology, the research showed that the type of land where trees are planted determines the degree of impact they have on local water availability. Trees planted on natural grassland where the soil is healthy decrease river flow significantly. On land previously degraded by agriculture, establishing forest helps to repair the soil so it can hold more water and decreases nearby river flow by a lesser amount.

Counterintuitively, the effect of trees on river flow is smaller in drier years than wetter ones. When trees are drought-stressed they close the pores on their leaves to conserve water, and as a result draw up less water from the soil. In wetter years the trees use more water from the soil, and also catch the rainwater in their leaves.

“Climate change will affect water availability around the world,” said Bentley. “By studying how forestation affects river flow, we can work to minimise any local consequences for people and the environment.”

Source: Local water availability is permanently reduced after planting forests

Ultrafast camera takes 1 trillion frames per second of transparent objects and phenomena, can photograph light pulses

A little over a year ago, Caltech’s Lihong Wang developed the world’s fastest camera, a device capable of taking 10 trillion pictures per second. It is so fast that it can even capture light traveling in slow motion.

But sometimes just being quick is not enough. Indeed, not even the fastest camera can take pictures of things it cannot see. To that end, Wang, Bren Professor of Medical Engineering and Electrical Engineering, has developed a camera that can take up to 1 trillion pictures per second of transparent objects. A paper about the camera appears in the January 17 issue of the journal Science Advances.

The technology, which Wang calls phase-sensitive compressed ultrafast photography (pCUP), can take video not just of transparent objects but also of more ephemeral things like shockwaves and possibly even of the signals that travel through neurons.

Wang explains that his new imaging system combines the high-speed photography system he previously developed with an old technology, phase-contrast microscopy, that was designed to allow better imaging of objects that are mostly transparent such as cells, which are mostly water.

[…]

Wang says the technology, though still early in its development, may ultimately have uses in many fields, including physics, biology, or chemistry.

“As signals travel through neurons, there is a minute dilation of nerve fibers that we hope to see. If we have a network of neurons, maybe we can see their communication in real time,” Wang says. In addition, he says, because temperature is known to change phase contrast, the system “may be able to image how a flame front spreads in a combustion chamber.”

The paper describing pCUP is titled “Picosecond-resolution phase-sensitive imaging of transparent objects in a single shot.”

Source: Ultrafast camera takes 1 trillion frames per second of transparent objects and phenomena

HP Remotely Disables a Customer’s Printer Until He Joins Company’s Monthly Subscription Service

A Twitter user’s complaint last week, in which he produced photo evidence of HP warning him that his ink cartridges would be disabled until he started paying for HP’s Instant Ink monthly subscription service, has gone viral on social media.

Ryan Sullivan, the user who made the complaint, said he only discovered the warning after cancelling a random HP subscription that had been charging him $4.99 a month for “over a year.” “Cartridge cannot be used until printer is enrolled in HP Instant Ink,” Sullivan was informed by an error message.

Source: HP Remotely Disables a Customer’s Printer Until He Joins Company’s Monthly Subscription Service – Slashdot

Opera reportedly has multiple predatory loan apps in the Play Store with interest rates of up to 876%

It’s no secret that Opera isn’t doing so well in the era of Chrome dominance. According to a report published by Hindenburg Research, the company’s losses in browser revenue have apparently led it to create multiple loan apps with short payment windows and interest rates of ~365-876%, which are in violation of new Play Store rules Google enacted last year.

You may recall that Opera became a public company in mid-2018, after it was purchased by a China-based investor group. Since then, Opera’s market share has continued to fall, due to the increasing dominance of Chrome. As a result, Opera decided to pivot to predatory short-term lending in Africa and Asia across four apps: OKash and OPesa in Kenya, CashBean in India, and OPay in Nigeria.

The apps have apparently remained available in the Play Store (except OPesa, which seems to be gone) by advertising different loan terms in the app description than users actually receive. For example, the listing for OKash stated its loans range from 91-365 days (the page now says 61-365 days), but an email response from the company stated it only offered loans of 15-29 days — significantly shorter than the 60-day minimum enforced by Google. All of Opera’s other apps were also found to be in violation to varying extents.
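To see how loan windows that short translate into triple-digit annualised rates, here is some back-of-the-envelope arithmetic; the fee levels are hypothetical, chosen only to land on the ~365% and ~876% figures in the report.

```rust
// Simple (non-compounding) annualisation of a flat fee on a short-term loan.
fn annualized_simple_rate(fee: f64, term_days: f64) -> f64 {
    fee * (365.0 / term_days) * 100.0
}

fn main() {
    // Hypothetical fee levels, for illustration only.
    println!("{:.0}% APR", annualized_simple_rate(0.30, 30.0)); // 30% fee on a 30-day loan -> ~365%
    println!("{:.0}% APR", annualized_simple_rate(0.36, 15.0)); // 36% fee on a 15-day loan -> ~876%
}
```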

If you think that’s bad, then buckle in! According to Play Store reviews, the OKash and OPesa apps sent text messages or calls to people in the user’s contacts when payments were late, threatening to take legal action or place the borrower on a credit blacklist. A former employee told Hindenburg Research that this practice ended last year “because it was said it was illegal.” That’s probably a good reason to stop doing something, right?

Play Store reviews on OKash

Unfortunately for Opera, scamming low-income people isn’t helping the company’s financial situation. With all apps in violation of Play Store policies (and one already removed from the store), Opera’s primary means of income could very well disappear, and Hindenburg Research found evidence of investor money possibly being redirected to other companies and people:

1. $9.5 million of cash went toward an entity that appears to have been owned 100% by Opera’s Chairman/CEO, despite company disclosures suggesting otherwise. Ostensibly, the reason for the payment was to ‘purchase’ a business that was already funded and operated by Opera. To us, this transaction simply looks like a cash withdrawal.

2. $30 million of cash went into a karaoke app business owned by Opera’s Chairman/CEO, days before the arrest of a key business partner.

3. $31+ million of cash was doled out for “marketing expenses and prepayments” to an antivirus software company controlled by an Opera director and influenced by Opera’s Chairman/CEO. The antivirus company has no other known marketing clients, but is paid to help Opera with Google and Facebook ads and other marketing services. (Note: Most firms use a marketing agency for help with marketing needs.)

Since the report was released on January 16th, Opera’s stock price has dropped from ~$9 to $7.15 after hours (as of the time of writing).

You can read the full report at the link below. In the meantime, it might be a good idea to uninstall any Opera-owned apps — they might start sending texts to your friends about your browsing habits.

Source: Opera reportedly has multiple predatory loan apps in the Play Store with interest rates of up to 876%

BlackVue dashcam shows anyone everywhere you are in real time and where you have been in the past

An app that is supposed to be a fun activity for dashcam users to broadcast their camera feeds and drives is actually allowing people to scrape and store the real-time location of drivers across the world.

BlackVue is a dashcam company with its own social network. With a small, internet-connected dashcam installed inside their vehicle, BlackVue users can receive alerts when their camera detects an unusual event such as someone colliding with their parked car. Customers can also allow others to tune into their camera’s feed, letting others “vicariously experience the excitement and pleasure of driving all over the world,” a message displayed inside the app reads.

Users are invited to upload footage of their BlackVue camera spotting people crashing into their cars or other mishaps with the #CaughtOnBlackVue hashtag. It’s kind of like Amazon’s Ring cameras, but for cars. BlackVue exhibited at CES earlier this month, and was previously featured on Innovations with Ed Begley Jr. on the History Channel.

But what BlackVue’s app doesn’t make clear is that it is possible to pull and store users’ GPS locations in real-time over days or even weeks. Motherboard was able to track the movements of some of BlackVue’s customers in the United States.

The news highlights privacy issues that some BlackVue customers or other dashcam users may not be aware of, and more generally the potential dangers of adding an internet- and GPS-enabled device to your vehicle. It also shows how developers may have one use case for an app, while people can discover others: although BlackVue wanted to create an entertaining app where users could tap into each other’s feeds, they may not have realized that it would be trivially easy to track its customers’ movements in granular detail, at scale, and over time.

BlackVue is another example of how surveillance products nominally intended to protect a user can be designed in such a way that the user ends up being spied on, too.

“I don’t think people understand the risk,” Lee Heath, an information security professional and BlackVue user told Motherboard. “I knew about some of the cloud features which I wanted. You can have it automatically connect and upload when events happen. But I had no idea about the sharing” before receiving the device as a gift, he added.

Ordinarily, BlackVue lets anyone create an account and then view a map of cameras that are broadcasting their location and live feed. This broadcasting is not enabled by default, and users have to select the option to do so when setting up or configuring their own camera. Motherboard tuned into live feeds from users in Hong Kong, China, Russia, the U.K., Germany, and elsewhere. BlackVue spokesperson Jeremie Sinic told Motherboard in an email that the users on the map only represent a tiny fraction of BlackVue’s overall customers.

But the actual GPS data that drives the map is available and publicly accessible.

A screenshot of the location data of one BlackVue user that Motherboard tracked throughout New York. Motherboard has heavily obfuscated the data to protect the individual’s privacy. Image: Motherboard

By reverse engineering the iOS version of the BlackVue app, Motherboard was able to write scripts that pull the GPS location of BlackVue users over a week-long period and store the coordinates and other information like the user’s unique identifier. One script could collect the location data of every BlackVue user who had mapping enabled on the eastern half of the United States every two minutes. Motherboard collected data on dozens of customers.

With that data, we were able to build a picture of several BlackVue users’ daily routines: one drove around Manhattan during the day, perhaps as a rideshare driver, before then leaving for Queens in the evening. Another BlackVue user regularly drove around Brooklyn, before parking on a specific block in Queens overnight. The user did this for several different nights, suggesting this may be where the owner lives or stores their vehicle. A third showed someone driving a truck all over South Carolina.

Some customers may use BlackVue as part of a fleet of vehicles; an employer wanting to keep tabs on their delivery trucks as they drive around, for instance. But BlackVue also markets its products to ordinary consumers who want to protect their cars.

A screenshot of Motherboard accessing someone’s public live feed as the user is driving in public away from their apparent home. Motherboard has redacted the user information to protect individual privacy. Image: Motherboard

BlackVue’s Sinic said that collecting GPS coordinates of multiple users over an extended period of time is not supposed to be possible.

“Our developers have updated the security measures following your report from yesterday that I forwarded,” Sinic said. After this, several of Motherboard’s web requests that previously provided user data stopped working.

In 2018 the company did make some privacy-related changes to its app, meaning users were not broadcasting their camera feeds by default.

“I think BlackVue has decent ideas as far as leaving off by default but allows people to put themselves at risk without understanding,” Heath, the BlackVue user, said.

Motherboard has deleted all of the data collected to preserve individuals’ privacy.

Source: This App Lets Us See Everywhere People Drive – VICE

PopSockets CEO calls out Amazon’s ‘bullying with a smile’ tactics, shows how monopolies are bad for competition

Amazon has a “bullying” problem.

So insisted PopSockets CEO and inventor David Barnett today while describing his company’s relationship with the e-commerce and logistics giant. Barnett was addressing members of the House Subcommittee on Antitrust, Commercial, and Administrative Law and, over the course of the hearing, laid out how the Jeff Bezos-helmed corporate behemoth had pressured his smartphone accessory company in a manner best described as incredibly shady.

Barnett was joined by executives from Sonos, Basecamp, and Tile, who all took turns airing a list of grievances against major tech players such as Amazon, Apple, Facebook and Google. They all recounted, in manners specific to their respective companies, how the major tech players have used their market dominance to squeeze smaller competitors in allegedly anticompetitive ways.

The CEO of PopSockets, however, appeared to have a personal beef with Jeff Bezos (which he pronounced “Bey-zoo”).

“Multiple times we discovered that Amazon itself had sourced counterfeit product and was selling it alongside our own product,” he noted.

Barnett, under oath, told the gathered members of the House that Amazon initially played nice only to drop the hammer when it believed no one was watching. After agreeing to a written contract stipulating a price at which PopSockets would be sold on Amazon, the e-commerce giant would then allegedly unilaterally lower the price and demand that PopSockets make up the difference.

Colorado Congressman Ed Perlmutter asked Barnett how Amazon could “ignore the contract that [PopSockets] entered into and just say, ‘Sorry, that was our contract, but you got to lower your price.'”

Barnett didn’t mince words.

“With coercive tactics, basically,” he replied. “And these are tactics that are mainly executed by phone. It’s one of the strangest relationships I’ve ever had with a retailer.”

Barnett emphasized that, on paper, the contract “appears to be negotiated in good faith.”

However, he claimed, this is followed by “… frequent phone calls. And on the phone calls we get what I might call bullying with a smile. Very friendly people that we deal with who say, ‘By the way, we dropped the price of X product last week. We need you to pay for it.'”

Barnett said he would push back and that’s when “the threats come.”

He asserted that Amazon representatives would tell him over the phone: “If we don’t get it, then we’re going to source product from the gray market.”

In other words, as with so many things Amazon, it’s either play ball or get bent according to Barnett.

An Amazon spokesperson reached for comment, unsurprisingly, framed the issue differently.

“We sought to continue working with PopSockets as a vendor to ensure that we could provide competitive prices, availability, broad selection and fast delivery for those products to our customers,” read the statement in part. “Like any brand, however, PopSockets is free to choose which retailers it supplies and chose to stop selling directly through Amazon.”

Essentially, in Amazon’s view, PopSockets chose to get bent. We should all be so lucky to be offered such a choice.

Source: PopSockets CEO calls out Amazon’s ‘bullying with a smile’ tactics

PGP keys, software security, and much more threatened by new SHA1 exploit

Three years ago, Ars declared the SHA1 cryptographic hash algorithm officially dead after researchers performed the world’s first known instance of a fatal exploit known as a “collision” on it. On Tuesday, the dead SHA1 horse got clobbered again as a different team of researchers unveiled a new attack that’s significantly more powerful.

The new collision gives attackers more options and flexibility than were available with the previous technique. It makes it practical to create PGP encryption keys that, when digitally signed using the SHA1 algorithm, impersonate a chosen target. More generally, it produces the same hash for two or more attacker-chosen inputs by appending data to each of them. The attack unveiled on Tuesday also costs as little as $45,000 to carry out. The attack disclosed in 2017, by contrast, didn’t allow forgeries on specific predetermined document prefixes and was evaluated to cost from $110,000 to $560,000 on Amazon’s Web Services platform, depending on how quickly adversaries wanted to carry it out.

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to the PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.
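To make Git’s reliance on SHA1 concrete: every object ID is the SHA1 of a small header plus the file’s contents, so a collision would let two different blobs share one ID. A minimal sketch of how a blob ID is derived, assuming the RustCrypto `sha1` crate (not part of the standard library):

```rust
use sha1::{Digest, Sha1};

// Git names a blob by hashing "blob <length>\0" followed by the raw contents.
fn git_blob_id(content: &[u8]) -> String {
    let mut hasher = Sha1::new();
    hasher.update(format!("blob {}\0", content.len()).as_bytes());
    hasher.update(content);
    hasher
        .finalize()
        .iter()
        .map(|byte| format!("{:02x}", byte))
        .collect()
}

fn main() {
    // Should agree with `git hash-object` for the same bytes.
    println!("{}", git_blob_id(b"hello\n"));
}
```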

In a paper presented at this week’s Real World Crypto Symposium in New York City, the researchers warned that even if SHA1 usage is low or used only for backward compatibility, it will leave users open to the threat of attacks that downgrade encrypted connections to the broken hash function. The researchers said their results underscore the importance of fully phasing out SHA1 across the board as soon as possible.

“This work shows once and for all that SHA1 should not be used in any security protocol where some kind of collision resistance is to be expected from the hash function,” the researchers wrote. “Continued usage of SHA1 for certificates or for authentication of handshake messages in TLS or SSH is dangerous, and there is a concrete risk of abuse by a well-motivated adversary. SHA1 has been broken since 2004, but it is still used in many security systems; we strongly advise users to remove SHA1 support to avoid downgrade attacks.”

Source: PGP keys, software security, and much more threatened by new SHA1 exploit | Ars Technica

More than 600 million users installed Android ‘fleeceware’ apps from the Play Store – where they don’t cancel your trial after uninstalling

Security researchers from Sophos say they’ve discovered a new set of “fleeceware” apps that appear to have been downloaded and installed by more than 600 million Android users.

The term fleeceware is a recent addition to the cyber-security jargon. It was coined by UK cyber-security firm Sophos last September following an investigation that discovered a new type of financial fraud on the official Google Play Store.

It refers to apps that abuse the ability for Android apps to run trial periods before a payment is charged to the user’s account.

By default, all users who sign up for an Android app trial period have to cancel the trial period manually to avoid being charged. However, most users just uninstall an app when they don’t like it.

The vast majority of app developers interpret this action — a user uninstalling their app — as a trial period cancelation and don’t follow through with a charge.

But last year, Sophos discovered that some Android app developers didn’t cancel an app’s trial period once the app was uninstalled unless they received a specific request from the user.

Sophos said it initially discovered 24 Android apps that were charging obscene fees (between $100 and $240 per year) for the most basic and simplistic apps, such as QR/barcode readers and calculators.

Sophos researchers called these apps “fleeceware.”

In a new report published yesterday, Sophos said it discovered another set of Android “fleeceware” apps that have continued to abuse the app trial mechanism to impose charges on users after they uninstalled an app.

Source: More than 600 million users installed Android ‘fleeceware’ apps from the Play Store | ZDNet

Mozilla (Firefox) lays off 70 as it waits for new products to generate revenue

In an internal memo, Mozilla chairwoman and interim CEO Mitchell Baker specifically mentions the slow rollout of the organization’s new revenue-generating products as the reason for why it needed to take this action. The overall number may still be higher, though, as Mozilla is still looking into how this decision will affect workers in the U.K. and France. In 2018, Mozilla Corporation (as opposed to the much smaller Mozilla Foundation) said it had about 1,000 employees worldwide.

“You may recall that we expected to be earning revenue in 2019 and 2020 from new subscription products as well as higher revenue from sources outside of search. This did not happen,” Baker writes in her memo. “Our 2019 plan underestimated how long it would take to build and ship new, revenue-generating products. Given that, and all we learned in 2019 about the pace of innovation, we decided to take a more conservative approach to projecting our revenue for 2020. We also agreed to a principle of living within our means, of not spending more than we earn for the foreseeable future.”

Source: Mozilla lays off 70 as it waits for new products to generate revenue | TechCrunch

Time to donate!

Apple’s latest AI acquisition leaves some Wyze cameras without people detection

Earlier today, Apple confirmed it purchased Seattle-based AI company Xnor.ai (via MacRumors). Acquisitions at Apple’s scale happen frequently, though rarely do they impact everyday people on the day of their announcement. This one is different.

Cameras from fellow Seattle-based company Wyze, including the Wyze Cam V2 and Wyze Cam Pan, have utilized Xnor.ai’s on-device people detection since last summer. But now that Apple owns the company, it’s no longer available. Some people on Wyze’s forum are noting that the beta firmware removing the people detection has already started to roll out.

Oddly enough, word of this lapse in service isn’t anything new. Wyze issued a statement in November 2019 saying that Xnor.ai had terminated their contract (though its reason for doing so wasn’t as clear then as it is today), and that a firmware update slated for mid-January 2020 would remove the feature from those cameras.

There’s a bright side to this loss, though, even if Apple snapping up Xnor.ai makes Wyze’s affordable cameras less appealing in the interim. Wyze says that it’s working on its own in-house version of people detection for launch at some point this year. And whether it operates on-device via “edge AI” computing like Xnor.ai’s does, or by authenticating through the cloud, it will be free for users when it launches.

That’s good and all, but the year just started, and it’s a little worrying Wyze hasn’t followed up with a specific time frame for its replacement of the feature. Two days ago, Wyze’s social media community manager stated that the company was “making great progress” on its forums, but they didn’t offer up when it would be available.

What Apple plans to do with Xnor.ai is anyone’s guess. Ahead of its partnership with Wyze, the AI startup had developed a small, wireless AI camera that ran exclusively on solar power. Regardless of whether Apple is more interested in its edge computing algorithm, as was seen working on Wyze cameras for a short time, or its clever hardware ideas around AI-powered cameras, it’s getting all of it with the purchase.

Source: Apple’s latest AI acquisition leaves some Wyze cameras without people detection – The Verge

A floating device created to clean up plastic from the ocean is finally doing its job, organizers say

A huge trash-collecting system designed to clean up plastic floating in the Pacific Ocean is finally picking up plastic, its inventor announced Wednesday.

The Netherlands-based nonprofit the Ocean Cleanup says its latest prototype was able to capture and hold debris ranging in size from huge, abandoned fishing gear, known as “ghost nets,” to tiny microplastics as small as 1 millimeter.
“Today, I am very proud to share with you that we are now catching plastics,” Ocean Cleanup founder and CEO Boyan Slat said at a news conference in Rotterdam.

The Ocean Cleanup system is a U-shaped barrier with a net-like skirt that hangs below the surface of the water. It moves with the current and collects faster moving plastics as they float by. Fish and other animals will be able to swim beneath it.

The new prototype added a parachute anchor to slow the system and increased the size of a cork line on top of the skirt to keep the plastic from washing over it.

The Ocean Cleanup's System 001/B collects and holds plastic until a ship can collect it.

It’s been deployed in “The Great Pacific Garbage Patch” — a concentration of trash located between Hawaii and California that’s about double the size of Texas, or three times the size of France.

Ocean Cleanup plans to build a fleet of these devices, and predicts it will be able to reduce the size of the patch by half every five years.
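Taken at face value, that projection is simple exponential decay; a quick, purely illustrative sanity check of what “half every five years” implies:

```rust
// Illustrative arithmetic only: halving every five years means that after
// t years a fraction 0.5^(t/5) of the patch remains.
fn remaining_fraction(years: f64) -> f64 {
    0.5f64.powf(years / 5.0)
}

fn main() {
    for years in [5.0, 10.0, 20.0] {
        println!("after {} years: {:.1}% remains", years, remaining_fraction(years) * 100.0);
    }
}
```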

Source: A floating device created to clean up plastic from the ocean is finally doing its job, organizers say – CNN

Skype and Cortana audio listened in on by workers in China with ‘no security measures’

A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, both deliberate and accidentally invoked activations of the voice assistant, as well as some Skype phone calls, were simply accessed by Microsoft workers through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor.

Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts all with the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.

“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian.

While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.

“They just give me a login over email and I will then have access to Cortana recordings. I could then hypothetically share this login with anyone,” the contractor said. “I heard all kinds of unusual conversations, including what could have been domestic violence. It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”

As well as the risks of a rogue employee saving user data themselves or accessing voice recordings on a compromised laptop, Microsoft’s decision to outsource some of the work vetting English recordings to companies based in Beijing raises the additional prospect of the Chinese state gaining access to recordings. “Living in China, working in China, you’re already compromised with nearly everything,” the contractor said. “I never really thought about it.”

Source: Skype audio graded by workers in China with ‘no security measures’ | Technology | The Guardian

Spectrum Kills Home Security Business, Refuses Refunds for Owners of Now-Worthless Equipment, shows you why cloud-based hardware isn’t the best idea

Spectrum customers who are also users of the company’s home security service are about a month away from being left with a pile of useless equipment that in many cases cost them hundreds of dollars.

On February 5, Spectrum will no longer support customers who’ve purchased its Spectrum Home Security equipment. None of the devices—the cameras, motion sensors, smart thermostats, and in-home touchscreens—can be paired with other existing services. In a few weeks, it’ll all be worthless junk.

While some of the devices may continue to function on their own, customers will soon no longer be able to access them using their mobile devices, which is sort of the whole point of owning a smart device.

On Friday, California’s KSBY News interviewed one Spectrum customer who said that he’d spent around $900 installing cameras and sensors in and around his Cheviot Hills home. That the equipment is soon-to-be worthless isn’t even the worst part. Spectrum is also running off with his money.

The customer reportedly contacted the company about converting the cost of his investment into credit toward his phone or cable bill. The company declined, he said.

Source: Spectrum Kills Home Security Business, Refuses Refunds for Owners of Now-Worthless Equipment

Hackers Are Breaking Directly Into Telecom Companies using RDP to Take Over Customer Phone Numbers themselves

Hackers are now getting telecom employees to run software that lets the hackers directly reach into the internal systems of U.S. telecom companies to take over customer cell phone numbers, Motherboard has learned. Multiple sources in and familiar with the SIM swapping community as well as screenshots shared with Motherboard suggest at least AT&T, T-Mobile, and Sprint have been impacted.

This is an escalation in the world of SIM swapping, in which hackers take over a target’s phone number so they can then access email, social media, or cryptocurrency accounts. Previously, these hackers have bribed telecom employees to perform SIM swaps or tricked workers to do so by impersonating legitimate customers over the phone or in person. Now, hackers are breaking into telecom companies, albeit crudely, to do the SIM swapping themselves.

[…]

The technique uses Remote Desktop Protocol (RDP) software. RDP lets a user control a computer over the internet rather than being physically in front of it. It’s commonly used for legitimate purposes such as customer support. But scammers also make heavy use of RDP. In an age-old scam, a fraudster will phone an ordinary consumer and tell them their computer is infected with malware. To fix the issue, the victim needs to enable RDP and let the fake customer support representative into their machine. From here, the scammer could do all sorts of things, such as logging into online bank accounts and stealing funds.

This use of RDP is essentially what SIM swappers are now doing. But instead of targeting consumers, they’re tricking telecom employees to install or activate RDP software, and then remotely reaching into the company’s systems to SIM swap individuals.

The process starts with convincing an employee in a telecom company’s customer support center to run or install RDP software. The active SIM swapper said they provide an employee with something akin to an employee ID, “and they believe it.” Hackers may also convince employees to provide credentials to an RDP service if they already use it.

[…]

Certain employees inside telecom companies have access to tools with the capability to ‘port’ someone’s phone number from one SIM to another. In the case of SIM swapping, this involves moving a victim’s number to a SIM card controlled by the hacker; with this in place, the hacker can then receive a victim’s two-factor authentication codes or password reset prompts via text message. These include T-Mobile’s tool dubbed QuickView; AT&T’s is called Opus.

The SIM swapper said one RDP tool used is Splashtop, which says on its website the product is designed to help “remotely support clients’ computers and servers.”

Source: Hackers Are Breaking Directly Into Telecom Companies to Take Over Customer Phone Numbers – VICE

Checkpeople, why is a 22GB database containing 56 million US folks’ aggregated personal details sitting on the open internet using a Chinese IP address?

A database containing the personal details of 56.25m US residents – from names and home addresses to phone numbers and ages – has been found on the public internet, served from a computer with a Chinese IP address, bizarrely enough.

The information silo appears to belong to Florida-based CheckPeople.com, which is a typical people-finder website: for a fee, you can enter someone’s name, and it will look up their current and past addresses, phone numbers, email addresses, names of relatives, and even criminal records in some cases, all presumably gathered from public records.

However, all of this information is not only sitting in one place for spammers, miscreants, and other netizens to download in bulk, but it’s being served from an IP address associated with Alibaba’s web hosting wing in Hangzhou, east China, for reasons unknown. It’s a perfect illustration that not only is this sort of personal information in circulation, but it’s also in the hands of foreign adversaries.

It just goes to show how haphazardly people’s privacy is treated these days.

A white-hat hacker operating under the handle Lynx discovered the trove online, and tipped off The Register. He told us he found the 22GB database exposed on the internet, including metadata that links the collection to CheckPeople.com. We have withheld further details of the security blunder for privacy protection reasons.

The repository’s contents are likely scraped from public records, though together provide rather detailed profiles on tens of millions of folks in America. Basically, CheckPeople.com has done the hard work of aggregating public personal records, and this exposed NoSQL database makes that info even easier to crawl and process.

Source: Why is a 22GB database containing 56 million US folks’ personal details sitting on the open internet using a Chinese IP address? Seriously, why? • The Register

FBI Surveillance Vendor Threatens to Sue Tech Reporters for Heinous Crime of Reporting on tombstones, tree stumps and vacuum cleaners they sell with spy cams in them

Motherboard on Thursday revealed that a “secretive” U.S. government vendor whose surveillance products are not publicly advertised has been marketing hidden cameras disguised as seemingly ordinary objects—vacuum cleaners, tree stumps, and tombstones—to the Federal Bureau of Investigation, among other law enforcement agencies, and the military, in addition to, ahem, “select clients.”

Yes, that’s tombstone cams, because absolutely nothing in this world is sacred.


The vendor, Special Services Group (SSG), was apparently none too pleased when Motherboard revealed that it planned to publish photographs and descriptions of the company’s surveillance toys. When reached for comment, SSG reportedly threatened to sue the tech publication, launched by VICE in 2009.

According to Motherboard, a brochure listing SSG’s products (beginning on page 93 of the linked document) was obtained through public records requests filed with the Irvine Police Department in California.

Freddy Martinez, a policy analyst at government accountability group Open The Government, and Beryl Lipton, a reporter/researcher at the government transparency nonprofit MuckRock, both filed requests and obtained the SSG brochure, Motherboard said.

In warning the site not to disclose the brochure, SSG’s attorney reportedly claimed the document is protected under the International Traffic in Arms Regulations (ITAR), though the notice did not point to any specific section of the law, which was enacted to regulate arms exports at the height of the Cold War.

ITAR does prohibit the public disclosure of certain technical data related to military munitions. It’s unlikely, however, that a camera designed to look like a baby car seat—an actual SSG product called a “Rapid Vehicle Deployment Kit”—is covered under the law, which encompasses a wide range of actual military equipment that can’t be replicated in a home garage, such as space launch vehicles, nuclear reactors, and anti-helicopter mines.

ITAR explicitly does not cover “basic marketing information” or information “generally accessible or available to the public.”

Source: FBI Surveillance Vendor Threatens to Sue Tech Reporters for Heinous Crime of Doing Journalism

Lawsuit against cinema for refusing cash – and thus slurping private data

Michiel Jonker from Arnhem has sued a cinema that has moved location and since then refuses to accept cash at the cash register. All payments have to be made by debit card (PIN). Jonker feels that this forces visitors to allow the cinema to process their personal data.

He tried something similar in 2018, but that complaint was turned down after the Dutch data protection authority decided that no one is required to accept cash as legal tender.

Jonker now argues that accepting cash should be required if the payment data can be used to profile his movie preferences afterwards.

Good luck to him. I agree that cash is legal tender, and the move to a cash-free society is a privacy nightmare and potentially disastrous – see Hong Kong, for example.

Source: Rechtszaak tegen weigering van contant geld door bioscoop – Emerce

A Closer Look Into Neon and Its Artificial Humans

In short, a Neon is an artificial intelligence in the vein of Halo’s Cortana or Red Dwarf’s Holly, a computer-generated life form that can think and learn on its own, control its own virtual body, has a unique personality, and retains its own set of memories, or at least that’s the goal. A Neon doesn’t have a physical body (aside from the processor and computer components that its software runs on), so in a way, you can sort of think of a Neon as a cyberbrain from Ghost in the Shell too. Mistry describes Neon as a way to discover the “soul of tech.”

Here’s a look at three Neons, two of which were part of Mistry’s announcement presentation at CES.
Graphic: Neon

Whatever.

But unlike a lot of the AIs we interact with today, like Siri and Alexa, Neons aren’t digital assistants. They weren’t created specifically to help humans and they aren’t supposed to be all-knowing. They are fallible and have emotions, possibly even free will, and presumably, they have the potential to die. Though that last one isn’t quite clear.

OK, but those things look A LOT like humans. What’s the deal?

That’s because Neons were originally modeled on humans. The company used computers to record different people’s faces, expressions, and bodies, and then all that info was rolled into a platform called Core R3, which forms the basis of how Neons appear to look, move, and react so naturally.

Mistry showed how Neon started out by recording human movements, before transitioning to having Neon’s Core R3 engine generate animations on its own.
Photo: Sam Rutherford (Gizmodo)

If you break it down even further, the three Rs in Core R3 stand for reality, realtime, and responsiveness, each R representing a major tenet of what defines a Neon. Reality is meant to show that a Neon is its own thing, and not simply a copy or motion-capture footage from an actor or something else. Realtime is supposed to signify that a Neon isn’t just a preprogrammed line of code, scripted to perform a certain task without variation like you would get from a robot. Finally, responsiveness represents that Neons, like humans, can react to stimuli, with Mistry claiming latency as low as a few milliseconds.

Whoo, that’s quite a doozy. Is that it?

Oh, I see, a computer-generated human simulation with emotions, free will, and the ability to die isn’t enough for you? Well, there’s also Spectra, which is Neon’s (the company) learning platform that’s designed to teach Neons (the artificial humans) how to learn new skills, develop emotions, retain memories, and more. It’s the other half of the puzzle. Core R3 is responsible for the look, mannerisms, and animations of a Neon’s general appearance, including their voice. Spectra is responsible for a Neon’s personality and intelligence.

Oh yeah, did we mention they can talk too?

So is Neon Skynet?

Yes. No. Maybe. It’s too early to tell.

That all sounds nice, but what actually happened at Neon’s CES presentation?

After explaining the concept behind Neon’s artificial humans and how the company started off creating their appearance by recording and modeling humans, Mistry showed how, after becoming adequately sophisticated, the Core R3 engine allows a Neon to animate a realistic-looking avatar on its own.

From left to right, meet Karen, Cathy, and Maya.
Photo: Sam Rutherford (Gizmodo)

Then, Mistry and another Neon employee attempted to present a live demo of a Neon’s abilities, which is sort of when things went awry. To Neon’s credit, Mistry did preface everything by saying the tech is still very early, and given the complexity of the task and issues with doing a live demo at CES, it’s not really a surprise the Neon team ran into technical difficulties.

At first, the demo went smoothly, as Mistry introduced three Neons whose avatars were displayed in a row of nearby displays: Karen, an airline worker, Cathy, a yoga instructor, and Maya, a student. From there, each Neon was commanded to perform various actions like laughing, smiling, and talking, through controls on a nearby tablet. To be clear, in this case, the Neons weren’t moving on their own but were manually controlled to demonstrate their lifelike mannerisms.

If you’re thinking of a digital version of the creepy Sophia-bot, you’re not far off.

For the most part, each Neon did appear quite realistic, avoiding nearly all the awkwardness you get from even high-quality CGI like the kind Disney used to animate young Princess Leia in recent Star Wars movies. In fact, when the Neons were asked to move and laugh, the crowd at Neon’s booth let out a small murmur of shock and awe (and maybe fear).

From there, Mistry introduced a fourth Neon along with a visualization of the Neon’s neural network, which is essentially an image of its brain. And after getting the Neon to talk in English, Chinese, and Korean (which sounded a bit robotic and less natural than what you’d hear from Alexa or the Google Assistant), Mistry attempted to demo even more actions. But that’s when the demo seemed to freeze, with the Neon not responding properly to commands.

At this point, Mistry apologized to the crowd and promised that the team would work on fixing things so it could run through more in-depth demos later this week. I’m hoping to revisit the Neon booth to see if that’s the case, so stay tuned for potential updates.

So what’s the actual product? There’s a product, right?

Yes, or at least there will be eventually. Right now, even in such an early state, Mistry said he just wanted to share his work with the world. However, sometime near the end of 2020, Neon plans to launch a beta version of its software at Neon World 2020, a convention dedicated to all things Neon. This software will feature Core R3 and will allow users to tinker with making their own Neons, while Neon the company continues developing its Spectra software to give Neons life and emotion.

How much will Neon cost? What is Neon’s business model?

Supposedly there isn’t one. Mistry says that instead of worrying about how to make money, he just wants Neon to “make a positive impact.” That said, Mistry also mentioned that Neon (the platform) would be made available to business partners, who may be able to tweak the Neon software to sell things or serve in call centers or something. The bottom line is this: If Neon can pull off what it’s aiming to pull off, there would be a healthy business in replacing countless service workers.

Can I fuck a Neon?

Neons are going to be our friends.
Photo: Sam Rutherford (Gizmodo)

Get your mind out of the gutter. But at some point, probably yes. Everything we do eventually comes around to sex, right? Furthermore, this does bring up some interesting concerns about consent.

How can I learn more?

Go to Neon.life.

Really?

Really.

So what happens next?

Neon is going to Neon, I don’t know. I’m just a messenger trying to explain the latest chapter of CES quackery. Don’t get me wrong, the idea behind Neon is super interesting and is something sci-fi writers have been writing about for decades. But right now, it’s not even clear how legit all this is.

Here are some of the core building blocks of Neon’s software.
Photo: Sam Rutherford (Gizmodo)

It’s unclear how much a Neon can do on its own, and how long it will take for Neon to live up to its goal of creating a truly independent artificial human. What is really real? It’s weird, ambitious, and could be the start of a new era in human development. For now? It’s still quackery.

Source: A Closer Look Into Neon and Its Artificial Humans

Amazon fired four workers who secretly snooped on Ring doorbell camera footage

Amazon’s Ring home security camera biz says it has fired multiple employees caught covertly watching video feeds from customer devices.

The admission came in a letter [PDF] sent in response to questions raised by US Senators critical of Ring’s privacy practices.

Ring recounted how, on four separate occasions, workers were let go for overstepping their access privileges and poring over customer video files and other data inappropriately.

“Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” the gizmo flinger wrote.

“Although each of the individuals involved in these incidents was authorized to view video data, the attempted access to that data exceeded what was necessary for their job functions.

“In each instance, once Ring was made aware of the alleged conduct, Ring promptly investigated the incident, and after determining that the individual violated company policy, terminated the individual.”

This comes as Amazon attempts to justify its internal policies, particularly employee access to user video information for support and research-and-development purposes.

Source: Ring of fired: Amazon axes four workers who secretly snooped on netizens’ surveillance camera footage • The Register

Delta and Misapplied Sciences introduce parallel reality – a display that shows different content to different people at the same time without augmentation

In a ritual I’ve undertaken at least a thousand times, I lift my head to consult an airport display and determine which gate my plane will depart from. Normally, that involves skimming through a sprawling list of flights to places I’m not going. This time, however, all I see is information meant just for me:

Hello Harry
Flight DL42 to SEA boards in 33 min
Gate C11, 16 min walk
Proceed to Checkpoint 2

Stranger still, a leather-jacketed guy standing next to me is looking at the same display at the same time—and all he sees is his own travel information:

Hello Albert
Flight DL11 to ATL boards in 47 min
Gate C26, 25 min walk
Proceed to Checkpoint 4

Okay, confession time: I’m not at an airport. Instead, I’m visiting the office of Misapplied Sciences, a Redmond, Washington, startup located in a dinky strip mall whose other tenants include a teppanyaki joint and a children’s hair salon. Albert is not another traveler but rather the company’s cofounder and CEO, Albert Ng. We’ve been play-acting our way through a demo of the company’s display, which can show different things to different people at one time—no special glasses, smartphone-camera trickery, or other intermediary technology required. The company calls it parallel reality.

The simulated airport terminal is only one of the scenarios that Ng and his cofounder Dave Thompson show off for me in their headquarters. They also set up a mock store with a Pikachu doll, a Katy Perry CD, a James Bond DVD, and other goods, all in front of one screen. When I glance up at it, I see video related to whichever item I’m standing near. In a makeshift movie theater, I watch The Sound of Music with closed captions in English on a display above the movie screen, while Ng sits one seat over and sees Chinese captions on the same display. And I flick a wand to control colored lights on Seattle’s Space Needle (or for the sake of the demo, a large poster of it).

At one point, just to definitively prove that their screen can show multiple images at once, Ng and Thompson push a grid of mirrors up in front of it. Even though they’re all reflecting the same screen, each shows an animated sequence based on the flag or map of a different country.
[…]
The potential applications for the technology—from outdoor advertising to traffic signs to theme-park entertainment—are many. But if all goes according to plan, the first consumers who will see it in action will be travelers at the Detroit Metropolitan Airport. Starting in the middle of this year, Delta Air Lines plans to offer parallel-reality signage, located just past TSA, that can simultaneously show almost 100 customers unique information on their flights, once they’ve scanned their boarding passes. Available in English, Spanish, Japanese, Korean, and other languages, it will be a slicked-up, real-world deployment of the demo I got in Redmond.
[…]

At a January 2014 hackathon, a researcher named Paul Dietz came up with an idea to synchronize crowds in stadiums via a smartphone app that gave individual spectators cues to stand up, sit down, or hold up a card. The idea was to “use people as pixels,” he says, by turning the entire audience into a giant, human-powered animated display. It worked. “But the participants complained that they were so busy looking at their phones, they couldn’t enjoy the effect,” Dietz remembers.

That led him to wonder if there was a more elegant way to signal individuals in a crowd, such as beaming different colors to different people. As part of this investigation, he set up a pocket projector in an atrium and projected stripes of red and green. “The projector was very dim,” he says. “But when I looked into it from across the atrium, it was this beautiful, bright, saturated green light. Then I moved over a few inches into a red stripe, and then it looked like an intense red light.”

Based on this discovery, Dietz concluded that it might be possible to create displays that precisely aimed differing images at people depending on their position. Later in 2014, that epiphany gave birth to Misapplied Sciences, which he cofounded with Ng—who’d been his Microsoft intern while studying high-performance computing at Stanford—and Thompson, whom Dietz had met when both were creating theme-park experiences at Walt Disney Imagineering.

[…]

the basic principle—directing different colors in different directions—remains the same. With garden-variety screens, the whole idea is to create a consistent picture, and the wider the viewing angle, the better. By contrast, with Misapplied’s displays, “at one time, a single pixel can emit green light towards you,” says Ng. “Whereas simultaneously that same pixel can emit red light to the person next to you.”

The parallel-reality effect is all in the pixels. [Image: courtesy of Misapplied Sciences]

In one version of the tech, it can control the display in 18,000 directions; in another, meant for large-scale outdoor signage, it can control it in a million. The company has engineered display modules that can be arranged, Lego-like, in different configurations that allow for signage of varying sizes and shapes. A Windows PC performs the heavy computational lifting, and there’s software that lets a user assign different images to different viewing positions by pointing and clicking. As displays reach the market, Ng says that the price will “rival that of advanced LED video walls.” Not cheap, maybe, but also not impossibly stratospheric.
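To make the idea concrete, here is a minimal, purely illustrative sketch of the “one pixel, many directions” principle: quantize the angle from each pixel to each viewer into one of the display’s steerable channels, then load a different image into each channel. All of the names, numbers (beyond the 18,000 directions mentioned above), and functions are assumptions for illustration; Misapplied Sciences has not published its actual algorithms.

```python
import math

NUM_DIRECTIONS = 18_000  # per the article, one version can steer light in 18,000 directions


def direction_index(pixel_xy, viewer_xy):
    """Quantize the angle from a pixel to a viewer into one steerable channel."""
    dx = viewer_xy[0] - pixel_xy[0]
    dy = viewer_xy[1] - pixel_xy[1]
    angle = math.atan2(dy, dx)                      # -pi..pi
    frac = (angle + math.pi) / (2 * math.pi)        # 0..1
    return int(frac * NUM_DIRECTIONS) % NUM_DIRECTIONS


def render_pixel(pixel_xy, viewers):
    """Return a {direction_channel: color} map for a single pixel.

    `viewers` is a list of (viewer_xy, image) pairs, where image(pixel_xy)
    gives the color that viewer should see at this pixel.
    """
    channels = {}
    for viewer_xy, image in viewers:
        channels[direction_index(pixel_xy, viewer_xy)] = image(pixel_xy)
    return channels


# Two viewers in different spots see different colors from the same pixel.
viewers = [((1.0, 5.0), lambda p: "green"), ((4.0, 5.0), lambda p: "red")]
print(render_pixel((2.0, 0.0), viewers))
```

In practice the real system also has to handle calibration, per-module layout, and viewers who move, but the core trick is exactly this kind of angle-to-channel mapping.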

For all its science-fiction feel, parallel reality does have its gotchas, at least in its current incarnation. In the demos I saw, the pixels were blocky, with a noticeable amount of space around them—plus black bezels around the modules that make up a sign—giving the displays a look reminiscent of a sporting-arena electronic sign from a few generations back. They’re also capable of generating only 256 colors, so photos and videos aren’t exactly hyperrealistic. Perhaps the biggest wrinkle is that you need to stand at least 15 feet back for the parallel-reality effect to work. (Venture too close, and you see one mishmashed image.)

[…]

The other part of the equation is figuring out which traveler is standing where, so people see their own flight details. Delta is accomplishing that with a bit of AI software and some ceiling-mounted cameras. When you scan your boarding pass, you get associated with your flight info—not through facial recognition, but simply as a discrete blob in the cameras’ view. As you roam near the parallel-reality display, the software keeps tabs on your location, so that the signage can point your information at your precise spot.
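As a rough mental model of that flow (not Delta’s actual software, and every name below is invented), the scan ties a flight record to whichever anonymous tracked blob is standing at the scanner, and the signage then aims that record at the blob’s current position as it moves:

```python
from dataclasses import dataclass


@dataclass
class Blob:
    """An anonymous person tracked by the ceiling cameras (no facial recognition)."""
    blob_id: int
    position: tuple        # (x, y) in the cameras' floor coordinates
    flight_info: str = ""  # filled in after a boarding-pass scan


def on_boarding_pass_scan(blobs, scanner_position, flight_info):
    """Attach flight details to the tracked blob closest to the scanner."""
    nearest = min(
        blobs,
        key=lambda b: (b.position[0] - scanner_position[0]) ** 2
        + (b.position[1] - scanner_position[1]) ** 2,
    )
    nearest.flight_info = flight_info


def signage_targets(blobs):
    """For each identified traveler, aim their info at their current spot."""
    return [(b.position, b.flight_info) for b in blobs if b.flight_info]


blobs = [Blob(1, (0.4, 2.0)), Blob(2, (3.1, 2.2))]
on_boarding_pass_scan(blobs, (0.5, 2.0), "DL42 to SEA, Gate C11")
print(signage_targets(blobs))
```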

Delta is taking pains to alleviate any privacy concerns relating to this system. “It’s all going to be housed on Delta systems and Delta software, and it’s always going to be opt-in,” says Robbie Schaefer, general manager of Delta’s airport customer experience. The software won’t store anything once a customer moves on, and the display won’t show any highly sensitive information. (It’s possible to steal a peek at other people’s displays, but only by invading their personal space—which is what I did to Ng, at his invitation, to see for myself.)

The other demos I witnessed at Misapplied’s office involved less tracking of individuals and handling of their personal data. In the retail-store scenario, for instance, all that mattered was which product I was standing in front of. And in the captioning one, the display only needed to know what language to display for each seat, which involved audience members using a smartphone app to scan a QR code on their seat and then select a language.
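The captioning demo boils down to a simple seat-to-language lookup. Here is a toy sketch of that flow under my own assumptions (the function and variable names are made up): a QR scan identifies the seat, the viewer picks a language, and each frame the display routes that language’s caption toward that seat’s viewing angle.

```python
seat_language = {}  # seat ID -> chosen caption language


def on_qr_scan(seat_id, chosen_language):
    """Called when an audience member scans their seat's QR code and picks a language."""
    seat_language[seat_id] = chosen_language


def captions_for_frame(caption_tracks):
    """Return which caption text to aim at each seat for the current frame."""
    return {seat: caption_tracks.get(lang, "") for seat, lang in seat_language.items()}


on_qr_scan("B12", "en")
on_qr_scan("B13", "zh")
print(captions_for_frame({"en": "The hills are alive...", "zh": "[Chinese caption text]"}))
```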

Source: Delta and Misapplied Sciences introduce parallel reality

A Bed That Cools and Heats Each Sleeper Separately, sets the softness per side and also adjusts automatically to silence snorers

Sleep Number first made a name for itself with its line of adjustable air-filled mattresses that allowed a pair of sleepers to each select how firm or soft they wanted their side of the bed to be. The preferred setting was known as a user’s Sleep Number, and over the years the company has introduced many ways to make it easier to fine-tune its beds for a good night’s sleep, including its smart SleepIQ technology which tracks movements and breathing patterns to help narrow down which comfort settings are ideal, as well as automatic adjustments in the middle of the night to silence a snorer.

At CES 2017, the company’s Sleep Number 360 bed introduced a new feature that learned each user’s bedtime routines and then automatically pre-heated the foot of the bed to a specific temperature to make falling asleep easier and more comfortable. At CES 2020, the company is now expanding on that idea with its new Climate360 smart bed that can heat and cool the entire mattress based on each user’s dozing preferences.

Using a combination of sensors, advanced textiles, phase-change materials (materials that absorb or release energy to aid in heating and cooling), evaporative cooling, and a ventilation system, the Climate360 bed can supposedly create and maintain a separate microclimate on each side of the bed, and make adjustments throughout the night based on each sleeper’s movements, which can indicate discomfort. There’s no full air-conditioning system built into the bed, however, so it can only cool each side by about 12 degrees, though it can warm each side up to 100 degrees Fahrenheit if you prefer to sleep in an inferno.

The Climate360 bed goes through automatic routines throughout the night that Sleep Number has determined to be ideal for achieving a more restful sleep, including gently warming the bed ahead of bedtime to make it easier to drift off, and then cooling it once each user is asleep to help keep them comfortable.
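Sleep Number hasn’t published its control logic, but a nightly per-side routine like the one described above could look something like this sketch; the set points, offsets, and step sizes are invented purely for illustration.

```python
def target_temperature_f(preference_f, phase):
    """Return a per-side set point (°F) for the current sleep phase."""
    if phase == "pre_bedtime":
        return preference_f + 4   # gently warm the bed before the sleeper gets in
    if phase == "asleep":
        return preference_f - 3   # cool it down once the sleeper has drifted off
    return preference_f


def adjust_side(current_f, preference_f, phase, restless=False):
    """Nudge one side of the mattress toward its target, reacting to restlessness."""
    target = target_temperature_f(preference_f, phase)
    if restless:
        target -= 1               # movement suggests discomfort, so cool slightly more
    if abs(target - current_f) < 0.5:
        return current_f
    return current_f + (0.5 if target > current_f else -0.5)


# One control tick for a sleeper who prefers 75°F and is tossing and turning.
print(adjust_side(current_f=78.0, preference_f=75.0, phase="asleep", restless=True))
```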

Source: A Bed That Cools and Heats Each Sleeper Separately Will Save Countless Relationships