Earlier today, Apple confirmed it purchased Seattle-based AI company Xnor.ai (via MacRumors). Acquisitions at Apple’s scale happen frequently, though rarely do they impact everyday people on the day of their announcement. This one is different.
Cameras from fellow Seattle-based company Wyze, including the Wyze Cam V2 and Wyze Cam Pan, have utilized Xnor.ai’s on-device people detection since last summer. But now that Apple owns the company, it’s no longer available. Some people on Wyze’s forum are noting that the beta firmware removing the people detection has already started to roll out.
Oddly enough, word of this lapse in service isn’t anything new. Wyze issued a statement in November 2019 saying that Xnor.ai had terminated their contract (though its reason for doing so wasn’t as clear then as it is today), and that a firmware update slated for mid-January 2020 would remove the feature from those cameras.
There’s a bright side to this loss, though, even if Apple snapping up Xnor.ai makes Wyze’s affordable cameras less appealing in the interim. Wyze says that it’s working on its own in-house version of people detection for launch at some point this year. And whether it operates on-device via “edge AI” computing like Xnor.ai’s does, or by authenticating through the cloud, it will be free for users when it launches.
That’s good and all, but the year just started, and it’s a little worrying that Wyze hasn’t followed up with a specific time frame for its replacement of the feature. Two days ago, Wyze’s social media community manager stated on the company’s forums that it was “making great progress,” but didn’t offer up when the feature would be available.
What Apple plans to do with Xnor.ai is anyone’s guess. Ahead of its partnership with Wyze, the AI startup had developed a small, wireless AI camera that ran exclusively on solar power. Regardless of whether Apple is more interested in the edge computing algorithm, which was seen working on Wyze cameras for a short time, or in Xnor.ai’s clever hardware ideas around AI-powered cameras, it’s getting all of it with the purchase.
A huge trash-collecting system designed to clean up plastic floating in the Pacific Ocean is finally picking up plastic, its inventor announced Wednesday.
The Netherlands-based nonprofit the Ocean Cleanup says its latest prototype was able to capture and hold debris ranging in size from huge, abandoned fishing gear, known as “ghost nets,” to tiny microplastics as small as 1 millimeter.
“Today, I am very proud to share with you that we are now catching plastics,” Ocean Cleanup founder and CEO Boyan Slat said at a news conference in Rotterdam.
The Ocean Cleanup system is a U-shaped barrier with a net-like skirt that hangs below the surface of the water. It moves with the current and collects faster-moving plastics as they float by. Fish and other animals will be able to swim beneath it.
The new prototype added a parachute anchor to slow the system and increased the size of a cork line on top of the skirt to keep the plastic from washing over it.
The Ocean Cleanup’s System 001/B collects and holds plastic until a ship can collect it.
It’s been deployed in “The Great Pacific Garbage Patch” — a concentration of trash located between Hawaii and California that’s about double the size of Texas, or three times the size of France.
The Ocean Cleanup plans to build a fleet of these devices, and predicts it will be able to reduce the size of the patch by half every five years.
A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.
The recordings, both deliberately and accidentally invoked activations of the voice assistant, as well as some Skype phone calls, were simply accessed by Microsoft workers through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor.
Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts all with the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.
“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian.
While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.
“They just give me a login over email and I will then have access to Cortana recordings. I could then hypothetically share this login with anyone,” the contractor said. “I heard all kinds of unusual conversations, including what could have been domestic violence. It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”
As well as the risks of a rogue employee saving user data themselves or accessing voice recordings on a compromised laptop, Microsoft’s decision to outsource some of the work vetting English recordings to companies based in Beijing raises the additional prospect of the Chinese state gaining access to recordings. “Living in China, working in China, you’re already compromised with nearly everything,” the contractor said. “I never really thought about it.”
Spectrum customers who are also users of the company’s home security service are about a month away from being left with a pile of useless equipment that in many cases cost them hundreds of dollars.
On February 5, Spectrum will no longer support customers who’ve purchased its Spectrum Home Security equipment. None of the devices—the cameras, motion sensors, smart thermostats, and in-home touchscreens—can be paired with other existing services. In a few weeks, it’ll all be worthless junk.
While some of the devices may continue to function on their own, customers will soon no longer be able to access them using their mobile devices, which is sort of the whole point of owning a smart device.
On Friday, California’s KSBY News interviewed one Spectrum customer who said that he’d spent around $900 installing cameras and sensors in and around his Cheviot Hills home. That the equipment is soon-to-be worthless isn’t even the worst part. Spectrum is also running off with his money.
The customer reportedly contacted the company about converting the cost of his investment into credit toward his phone or cable bill. The company declined, he said.
Hackers are now getting telecom employees to run software that lets the hackers directly reach into the internal systems of U.S. telecom companies to take over customer cell phone numbers, Motherboard has learned. Multiple sources in and familiar with the SIM swapping community as well as screenshots shared with Motherboard suggest at least AT&T, T-Mobile, and Sprint have been impacted.
This is an escalation in the world of SIM swapping, in which hackers take over a target’s phone number so they can then access email, social media, or cryptocurrency accounts. Previously, these hackers have bribed telecom employees to perform SIM swaps or tricked workers into doing so by impersonating legitimate customers over the phone or in person. Now, hackers are breaking into telecom companies, albeit crudely, to do the SIM swapping themselves.
[…]
The technique uses Remote Desktop Protocol (RDP) software. RDP lets a user control a computer over the internet rather than being physically in front of it. It’s commonly used for legitimate purposes such as customer support. But scammers also make heavy use of RDP. In an age-old scam, a fraudster will phone an ordinary consumer and tell them their computer is infected with malware. To fix the issue, the victim needs to enable RDP and let the fake customer support representative into their machine. From here, the scammer could do all sorts of things, such as logging into online bank accounts and stealing funds.
This use of RDP is essentially what SIM swappers are now doing. But instead of targeting consumers, they’re tricking telecom employees into installing or activating RDP software, and then remotely reaching into the company’s systems to SIM swap individuals.
The process starts with convincing an employee in a telecom company’s customer support center to run or install RDP software. The active SIM swapper said they provide an employee with something akin to an employee ID, “and they believe it.” Hackers may also convince employees to provide credentials to an RDP service if they already use one.
[…]
Certain employees inside telecom companies have access to tools that can ‘port’ someone’s phone number from one SIM to another. In the case of SIM swapping, this involves moving a victim’s number to a SIM card controlled by the hacker; with this in place, the hacker can then receive a victim’s two-factor authentication codes or password reset prompts via text message. These tools include T-Mobile’s, dubbed QuickView; AT&T’s is called Opus.
The SIM swapper said one RDP tool used is Splashtop, which says on its website the product is designed to help “remotely support clients’ computers and servers.”
A database containing the personal details of 56.25m US residents – from names and home addresses to phone numbers and ages – has been found on the public internet, served from a computer with a Chinese IP address, bizarrely enough.
The information silo appears to belong to Florida-based CheckPeople.com, which is a typical people-finder website: for a fee, you can enter someone’s name, and it will look up their current and past addresses, phone numbers, email addresses, names of relatives, and even criminal records in some cases, all presumably gathered from public records.
However, all of this information is not only sitting in one place for spammers, miscreants, and other netizens to download in bulk, but it’s being served from an IP address associated with Alibaba’s web hosting wing in Hangzhou, east China, for reasons unknown. It’s a perfect illustration that not only is this sort of personal information in circulation, but it’s also in the hands of foreign adversaries.
It just goes to show how haphazardly people’s privacy is treated these days.
A white-hat hacker operating under the handle Lynx discovered the trove online, and tipped off The Register. He told us he found the 22GB database exposed on the internet, including metadata that links the collection to CheckPeople.com. We have withheld further details of the security blunder for privacy protection reasons.
The repository’s contents are likely scraped from public records, though together provide rather detailed profiles on tens of millions of folks in America. Basically, CheckPeople.com has done the hard work of aggregating public personal records, and this exposed NoSQL database makes that info even easier to crawl and process.
Motherboard on Thursday revealed that a “secretive” U.S. government vendor whose surveillance products are not publicly advertised has been marketing hidden cameras disguised as seemingly ordinary objects—vacuum cleaners, tree stumps, and tombstones—to the Federal Bureau of Investigation, among other law enforcement agencies, and the military, in addition to, ahem, “select clients.”
Yes, that’s tombstone cams, because absolutely nothing in this world is sacred.
The vendor, Special Services Group (SSG), was apparently none too pleased to learn that Motherboard planned to publish photographs and descriptions of the company’s surveillance toys. When reached for comment, SSG reportedly threatened to sue the tech publication, launched by VICE in 2009.
According to Motherboard, a brochure listing SSG’s products (starting on page 93) was obtained through public records requests filed with the Irvine Police Department in California.
Freddy Martinez, a policy analyst at government accountability group Open The Government, and Beryl Lipton, a reporter/researcher at the government transparency nonprofit MuckRock, both filed requests and obtained the SSG brochure, Motherboard said.
In warning the site not to disclose the brochure, SSG’s attorney reportedly claimed the document is protected under the International Traffic in Arms Regulations (ITAR), though the notice did not point to any specific section of the law, which was enacted to regulate arms exports at the height of the Cold War.
ITAR does prohibit the public disclosure of certain technical data related to military munitions. It’s unlikely, however, that a camera designed to look like a baby car seat—an actual SSG product called a “Rapid Vehicle Deployment Kit”—is covered under the law, which encompasses a wide range of actual military equipment that can’t be replicated in a home garage, such as space launch vehicles, nuclear reactors, and anti-helicopter mines.
Michiel Jonker from Arnhem has sued a cinema that moved location and has since refused to accept cash at the register. All payments have to be made by debit card (“pin”). Jonker feels that this forces visitors to allow the cinema to process their personal data.
He tried something similar in 2018, but that complaint was turned down after the Dutch personal data authority decided that no one was required to accept cash as legal tender.
Jonker is now arguing that acceptance of cash should be required if the payment data can be used to profile his movie preferences afterwards.
Good luck to him, I agree that cash is legal tender and the move to a cash free society is a privacy nightmare and potentially disastrous – see Hong Kong, for example.
In short, a Neon is an artificial intelligence in the vein of Halo’s Cortana or Red Dwarf’s Holly, a computer-generated life form that can think and learn on its own, control its own virtual body, has a unique personality, and retains its own set of memories, or at least that’s the goal. A Neon doesn’t have a physical body (aside from the processor and computer components that its software runs on), so in a way, you can sort of think of a Neon as a cyberbrain from Ghost in the Shell too. Mistry describes Neon as a way to discover the “soul of tech.”
Here’s a look at three Neons, two of which were part of Mistry’s announcement presentation at CES.
Whatever.
But unlike a lot of the AIs we interact with today, like Siri and Alexa, Neons aren’t digital assistants. They weren’t created specifically to help humans, and they aren’t supposed to be all-knowing. They are fallible and have emotions, possibly even free will, and presumably, they have the potential to die. Though that last one isn’t quite clear.
OK, but those things look A LOT like humans. What’s the deal?
That’s because Neons were originally modeled on humans. The company used computers to record different people’s faces, expressions, and bodies, and then all that info was rolled into a platform called Core R3, which forms the basis of how Neons appear to look, move, and react so naturally.
Mistry showed how Neon started out by recording human movements before transitioning to having Neon’s Core R3 engine generate animations on its own.
If you break it down even further, the three Rs in Core R3 stand for reality, realtime, and responsiveness, each R representing a major tenet of what defines a Neon. Reality is meant to show that a Neon is its own thing, and not simply a copy or motion-capture footage of an actor or something else. Realtime is supposed to signify that a Neon isn’t just a preprogrammed line of code, scripted to perform a certain task without variation like you would get from a robot. Finally, responsiveness represents that Neons, like humans, can react to stimuli, with Mistry claiming latency as low as a few milliseconds.
Whoo, that’s quite a doozy. Is that it?
Oh, I see, a computer-generated human simulation with emotions, free will, and the ability to die isn’t enough for you? Well, there’s also Spectra, which is Neon’s (the company) learning platform that’s designed to teach Neons (the artificial humans) how to learn new skills, develop emotions, retain memories, and more. It’s the other half of the puzzle. Core R3 is responsible for the look, mannerisms, and animations of a Neon’s general appearance, including their voice. Spectra is responsible for a Neon’s personality and intelligence.
Oh yeah, did we mention they can talk too?
So is Neon Skynet?
Yes. No. Maybe. It’s too early to tell.
That all sounds nice, but what actually happened at Neon’s CES presentation?
After explaining the concept behind Neon’s artificial humans and how the company started off creating their appearance by recording and modeling humans, Mistry showed how, after becoming adequately sophisticated, the Core R3 engine allows a Neon to animate a realistic-looking avatar on its own.
From left to right, meet Karen, Cathy, and Maya.
Then, Mistry and another Neon employee attempted to present a live demo of a Neon’s abilities, which is sort of when things went awry. To Neon’s credit, Mistry did preface everything by saying the tech is still very early, and given the complexity of the task and issues with doing a live demo at CES, it’s not really a surprise the Neon team ran into technical difficulties.
At first, the demo went smoothly, as Mistry introduced three Neons whose avatars were displayed in a row of nearby displays: Karen, an airline worker; Cathy, a yoga instructor; and Maya, a student. From there, each Neon was commanded to perform various actions, like laughing, smiling, and talking, through controls on a nearby tablet. To be clear, in this case, the Neons weren’t moving on their own but were manually controlled to demonstrate their lifelike mannerisms.
For the most part, each Neon did appear quite realistic, avoiding nearly all the awkwardness you get from even high-quality CGI, like the kind Disney used to animate young Princess Leia in recent Star Wars movies. In fact, when the Neons were asked to move and laugh, the crowd at Neon’s booth let out a small murmur of shock and awe (and maybe fear).
From there, Mistry introduced a fourth Neon along with a visualization of the Neon’s neural network, which is essentially an image of its brain. And after getting the Neon to talk in English, Chinese, and Korean (which sounded a bit robotic and less natural than what you’d hear from Alexa or the Google Assistant), Mistry attempted to demo even more actions. But that’s when the demo seemed to freeze, with the Neon not responding properly to commands.
At this point, Mistry apologized to the crowd and promised that the team would work on fixing things so it could run through more in-depth demos later this week. I’m hoping to revisit the Neon booth to see if that’s the case, so stay tuned for potential updates.
So what’s the actual product? There’s a product, right?
Yes, or at least there will be eventually. Right now, even in such an early state, Mistry said he just wanted to share his work with the world. However, sometime near the end of 2020, Neon plans to launch a beta version of the Neon software at Neon World 2020, a convention dedicated to all things Neon. This software will feature Core R3 and will allow users to tinker with making their own Neons, while Neon the company continues to work on developing its Spectra software to give Neons life and emotion.
How much will Neon cost? What is Neon’s business model?
Supposedly there isn’t one. Mistry says that instead of worrying about how to make money, he just wants Neon to “make a positive impact.” That said, Mistry also mentioned that Neon (the platform) would be made available to business partners, who may be able to tweak the Neon software to sell things or serve in call centers or something. The bottom line is this: If Neon can pull off what it’s aiming to pull off, there would be a healthy business in replacing countless service workers.
Neon are going to Neon, I don’t know. I’m a messenger trying to explain the latest chapter of CES quackery. Don’t get me wrong, the idea behind Neon is super interesting and is something sci-fi writers have been writing about for decades. But for right now, it’s not even clear how legit all this is.
Here are some of the core building blocks of Neon’s software.
It’s unclear how much a Neon can do on its own, and how long it will take for Neon to live up to its goal of creating a truly independent artificial human. What is really real? It’s weird, ambitious, and could be the start of a new era in human development. For now? It’s still quackery.
Amazon’s Ring home security camera biz says it has fired multiple employees caught covertly watching video feeds from customer devices.
The admission came in a letter [PDF] sent in response to questions raised by US Senators critical of Ring’s privacy practices.
Ring recounted how, on four separate occasions, workers were let go for overstepping their access privileges and poring over customer video files and other data inappropriately.
“Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” the gizmo flinger wrote.
“Although each of the individuals involved in these incidents was authorized to view video data, the attempted access to that data exceeded what was necessary for their job functions.
“In each instance, once Ring was made aware of the alleged conduct, Ring promptly investigated the incident, and after determining that the individual violated company policy, terminated the individual.”
This comes as Amazon attempts to justify its internal policies, particularly employee access to user video information for support and research-and-development purposes.
In a ritual I’ve undertaken at least a thousand times, I lift my head to consult an airport display and determine which gate my plane will depart from. Normally, that involves skimming through a sprawling list of flights to places I’m not going. This time, however, all I see is information meant just for me:
Hello Harry
Flight DL42 to SEA boards in 33 min
Gate C11, 16 min walk
Proceed to Checkpoint 2
Stranger still, a leather-jacketed guy standing next to me is looking at the same display at the same time—and all he sees is his own travel information:
Hello Albert
Flight DL11 to ATL boards in 47 min
Gate C26, 25 min walk
Proceed to Checkpoint 4
Okay, confession time: I’m not at an airport. Instead, I’m visiting the office of Misapplied Sciences, a Redmond, Washington, startup located in a dinky strip mall whose other tenants include a teppanyaki joint and a children’s hair salon. Albert is not another traveler but rather the company’s cofounder and CEO, Albert Ng. We’ve been play-acting our way through a demo of the company’s display, which can show different things to different people at one time—no special glasses, smartphone-camera trickery, or other intermediary technology required. The company calls it parallel reality.
The simulated airport terminal is only one of the scenarios that Ng and his cofounder Dave Thompson show off for me in their headquarters. They also set up a mock store with a Pikachu doll, a Katy Perry CD, a James Bond DVD, and other goods, all in front of one screen. When I glance up at it, I see video related to whichever item I’m standing near. In a makeshift movie theater, I watch The Sound of Music with closed captions in English on a display above the movie screen, while Ng sits one seat over and sees Chinese captions on the same display. And I flick a wand to control colored lights on Seattle’s Space Needle (or for the sake of the demo, a large poster of it).
At one point, just to definitively prove that their screen can show multiple images at once, Ng and Thompson push a grid of mirrors up in front of it. Even though they’re all reflecting the same screen, each shows an animated sequence based on the flag or map of a different country.
[…]
The potential applications for the technology—from outdoor advertising to traffic signs to theme-park entertainment—are many. But if all goes according to plan, the first consumers who will see it in action will be travelers at the Detroit Metropolitan Airport. Starting in the middle of this year, Delta Air Lines plans to offer parallel-reality signage, located just past TSA, that can simultaneously show almost 100 customers unique information on their flights, once they’ve scanned their boarding passes. Available in English, Spanish, Japanese, Korean, and other languages, it will be a slicked-up, real-world deployment of the demo I got in Redmond.
[…]
At a January 2014 hackathon, a researcher named Paul Dietz came up with an idea to synchronize crowds in stadiums via a smartphone app that gave individual spectators cues to stand up, sit down, or hold up a card. The idea was to “use people as pixels,” he says, by turning the entire audience into a giant, human-powered animated display. It worked. “But the participants complained that they were so busy looking at their phones, they couldn’t enjoy the effect,” Dietz remembers.
That led him to wonder if there was a more elegant way to signal individuals in a crowd, such as beaming different colors to different people. As part of this investigation, he set up a pocket projector in an atrium and projected stripes of red and green. “The projector was very dim,” he says. “But when I looked into it from across the atrium, it was this beautiful, bright, saturated green light. Then I moved over a few inches into a red stripe, and then it looked like an intense red light.”
Based on this discovery, Dietz concluded that it might be possible to create displays that precisely aimed differing images at people depending on their position. Later in 2014, that epiphany gave birth to Misapplied Sciences, which he cofounded with Ng—who’d been his Microsoft intern while studying high-performance computing at Stanford—and Thompson, whom Dietz had met when both were creating theme-park experiences at Walt Disney Imagineering.
[…]
the basic principle—directing different colors in different directions—remains the same. With garden-variety screens, the whole idea is to create a consistent picture, and the wider the viewing angle, the better. By contrast, with Misapplied’s displays, “at one time, a single pixel can emit green light towards you,” says Ng. “Whereas simultaneously that same pixel can emit red light to the person next to you.”
The parallel-reality effect is all in the pixels. [Image: courtesy of Misapplied Sciences]
In one version of the tech, it can control the display in 18,000 directions; in another, meant for large-scale outdoor signage, it can control it in a million. The company has engineered display modules that can be arranged, Lego-like, in different configurations that allow for signage of varying sizes and shapes. A Windows PC performs the heavy computational lifting, and there’s software that lets a user assign different images to different viewing positions by pointing and clicking. As displays reach the market, Ng says that the price will “rival that of advanced LED video walls.” Not cheap, maybe, but also not impossibly stratospheric.
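The point-and-click assignment step can be sketched as a thought experiment. Everything in this snippet (the pixel class, the eight-direction resolution, the angle quantization) is invented for illustration and bears no relation to Misapplied Sciences’ actual hardware or software:

```python
# Toy model of a direction-addressable pixel: each pixel stores a small table
# mapping a quantized viewing direction to a color, so two viewers standing at
# different angles see different images on the same physical screen.

NUM_DIRECTIONS = 8  # the real hardware reportedly steers 18,000 to a million


class ParallelRealityPixel:
    def __init__(self):
        # default: emit black in every direction
        self.colors = {d: (0, 0, 0) for d in range(NUM_DIRECTIONS)}

    def assign(self, direction, color):
        # "point and click": bind a color to one steerable direction
        self.colors[direction] = color

    def emit(self, viewer_angle_deg):
        # quantize the viewer's angle into one of the steerable directions
        direction = int(viewer_angle_deg / 360 * NUM_DIRECTIONS) % NUM_DIRECTIONS
        return self.colors[direction]


pixel = ParallelRealityPixel()
pixel.assign(0, (0, 255, 0))  # viewer A, roughly head-on, sees green
pixel.assign(4, (255, 0, 0))  # viewer B, off to one side, sees red

print(pixel.emit(10))   # viewer A's angle -> (0, 255, 0)
print(pixel.emit(190))  # viewer B's angle -> (255, 0, 0)
```

The real displays steer light optically rather than in software, and at vastly higher angular resolution, but the assignment software described above (mapping a viewing position to an image) rests on the same principle.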
For all its science-fiction feel, parallel reality does have its gotchas, at least in its current incarnation. In the demos I saw, the pixels were blocky, with a noticeable amount of space around them—plus black bezels around the modules that make up a sign—giving the displays a look reminiscent of a sporting-arena electronic sign from a few generations back. They’re also capable of generating only 256 colors, so photos and videos aren’t exactly hyperrealistic. Perhaps the biggest wrinkle is that you need to stand at least 15 feet back for the parallel-reality effect to work. (Venture too close, and you see one mishmashed image.)
[…]
The other part of the equation is figuring out which traveler is standing where, so people see their own flight details. Delta is accomplishing that with a bit of AI software and some ceiling-mounted cameras. When you scan your boarding pass, you get associated with your flight info—not through facial recognition, but simply as a discrete blob in the cameras’ view. As you roam near the parallel-reality display, the software keeps tabs on your location, so that the signage can point your information at your precise spot.
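The camera-to-sign handoff can be sketched roughly as follows. This is a hypothetical illustration only (the coordinates, blob IDs, and function are made up, not Delta’s or Misapplied’s code): once a tracked position is known, showing the right person the right text reduces to computing the angle from the sign to that spot.

```python
import math

DISPLAY_POS = (0.0, 0.0)  # the sign's location on the floor plan, in meters


def direction_to(viewer_pos):
    """Horizontal angle, in degrees, from the display to a tracked position."""
    dx = viewer_pos[0] - DISPLAY_POS[0]
    dy = viewer_pos[1] - DISPLAY_POS[1]
    return math.degrees(math.atan2(dy, dx))


# Each anonymous "blob" from the ceiling cameras was paired with a flight
# at the moment its owner scanned a boarding pass.
travelers = {
    "blob_17": {"pos": (3.0, 5.0), "text": "Flight DL42 to SEA, Gate C11"},
    "blob_42": {"pos": (-4.0, 5.0), "text": "Flight DL11 to ATL, Gate C26"},
}

for blob, info in travelers.items():
    angle = direction_to(info["pos"])
    print(f"{blob}: beam '{info['text']}' at {angle:.1f} degrees")
```

As a blob roams, re-running the angle computation each frame keeps that traveler’s details pointed at their current spot, with no facial recognition involved.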
Delta is taking pains to alleviate any privacy concerns relating to this system. “It’s all going to be housed on Delta systems and Delta software, and it’s always going to be opt-in,” says Robbie Schaefer, general manager of Delta’s airport customer experience. The software won’t store anything once a customer moves on, and the sign won’t show any highly sensitive information. (It’s possible to steal a peek at other people’s displays, but only by invading their personal space—which is what I did to Ng, at his invitation, to see for myself.)
The other demos I witnessed at Misapplied’s office involved less tracking of individuals and handling of their personal data. In the retail-store scenario, for instance, all that mattered was which product I was standing in front of. And in the captioning one, the display only needed to know what language to display for each seat, which involved audience members using a smartphone app to scan a QR code on their seat and then select a language.
Sleep Number first made a name for itself with its line of adjustable air-filled mattresses that allowed a pair of sleepers to each select how firm or soft they wanted their side of the bed to be. The preferred setting was known as a user’s Sleep Number, and over the years the company has introduced many ways to make it easier to fine-tune its beds for a good night’s sleep, including its smart SleepIQ technology which tracks movements and breathing patterns to help narrow down which comfort settings are ideal, as well as automatic adjustments in the middle of the night to silence a snorer.
At CES 2017, the company’s Sleep Number 360 bed introduced a new feature that learned each user’s bedtime routines and then automatically pre-heated the foot of the bed to a specific temperature to make falling asleep easier and more comfortable. At CES 2020, the company is now expanding on that idea with its new Climate360 smart bed that can heat and cool the entire mattress based on each user’s dozing preferences.
Using a combination of sensors, advanced textiles, phase-change materials (materials that absorb or release energy to aid in heating and cooling), evaporative cooling, and a ventilation system, the Climate360 bed can supposedly create and maintain a separate microclimate on each side of the bed, making adjustments throughout the night based on each sleeper’s movements, which indicate a level of discomfort. The bed doesn’t contain a full air conditioning system, however, so it can only cool each side by about 12 degrees, though it can warm each side up to 100 degrees Fahrenheit if you prefer to sleep in an inferno.
The Climate360 bed goes through automatic routines throughout the night that Sleep Number has determined to be ideal for achieving a more restful sleep, including gently warming the bed ahead of bedtime to make it easier to drift off, and then cooling it once each user is asleep to help keep them comfortable.
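As a rough illustration of the kind of closed-loop, per-side adjustment described above, here is a toy two-zone control sketch. The class, the restlessness signal, and the adjustment rates are all assumptions made for illustration, not Sleep Number’s firmware; only the 12-degree cooling and 100°F warming limits come from the article.

```python
COOL_LIMIT_DELTA = 12  # per the article: cooling tops out ~12°F below ambient
WARM_LIMIT = 100       # warming tops out at 100°F


class BedSide:
    """One independently controlled microclimate zone (toy model)."""

    def __init__(self, ambient, target):
        self.temp = ambient
        self.ambient = ambient
        self.target = target

    def clamp(self, temp):
        # respect the hardware limits described in the article
        return max(self.ambient - COOL_LIMIT_DELTA, min(WARM_LIMIT, temp))

    def step(self, restlessness):
        # restlessness (0..1) from motion sensors: discomfort nudges the
        # target cooler, mimicking the bed reacting to a fidgeting sleeper
        adjusted_target = self.target - 2 * restlessness
        # move 25% of the way toward the target each control cycle
        self.temp = self.clamp(self.temp + 0.25 * (adjusted_target - self.temp))
        return self.temp


left = BedSide(ambient=72, target=68)   # this sleeper runs hot
right = BedSide(ambient=72, target=95)  # this one prefers the inferno

for _ in range(20):
    left.step(restlessness=0.0)
    right.step(restlessness=0.0)

print(round(left.temp), round(right.temp))  # each side settles near its own target
```

The real bed layers evaporative cooling, ventilation, and phase-change materials on top of any such control logic, but the core idea of two zones converging on separate set points is the same.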
The Trump administration’s plan to collect DNA evidence from migrants detained in U.S. Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) facilities will commence soon in the form of a 90-day pilot program in Detroit and Southwest Texas, CNN reported on Monday.
News of the plan first emerged in October, when the Department of Homeland Security told reporters that it wanted to collect DNA from migrants to detect “fraudulent family units,” including refugees applying for asylum at U.S. ports of entry. ICE started using DNA tests to screen asylum seekers at the border last year over similar concerns, claiming that the tests were designed to fight human traffickers. The tests will apply to those detained both temporarily and for longer periods of time, covering nearly all people held by immigration officials.
DHS announced the pilot program in a privacy assessment posted to its website on Monday. Per CNN, the pilot is a legal necessity before the agency revokes rules enacted in 2010 that exempt “migrants who weren’t facing criminal charges or those pending deportation proceedings” from the DNA Fingerprint Act of 2005, a change that will apply the program nationally. The pilot will involve U.S. Border Patrol agents collecting DNA from individuals aged 14-79 who are arrested and processed, as well as customs officers collecting DNA from individuals subject to continued detention or further proceedings.
According to the privacy assessment, U.S. citizens and permanent residents “who are being arrested or facing criminal charges” may have DNA collected by CBP or ICE personnel. All collected DNA will be sent to the FBI and stored in its Combined DNA Index System (CODIS), a set of national genetic information databases covering forensic data, missing persons, and convicted offenders, where it can be held for as long as the government sees fit.
Those who refuse to submit to DNA testing could face class A misdemeanor charges, the DHS wrote.
DHS acknowledged that because it has to mail the DNA samples to the FBI for processing and comparison against CODIS entries, it is unlikely that agents will be able to use the DNA for “public safety or investigative purposes prior to either an individual’s removal to his or her home country, release into the interior of the United States, or transfer to another federal agency.” ACLU attorney Stephen Kang told the New York Times that DHS appeared to be creating a “DNA bank of immigrants that have come through custody for no clear reason,” raising “a lot of very serious, practical concerns, I think, and real questions about coercion.”
The Times noted that last year, Border Patrol law enforcement directorate chief Brian Hastings wrote that even after policies and procedures were implemented, Border Patrol agents remained “not currently trained on DNA collection measures, health and safety precautions, or the appropriate handling of DNA samples for processing.”
U.S. immigration authorities held a record number of children over the fiscal year that ended in September 2019, detaining some 76,020 minors without their parents present. According to ICE, over 41,000 people were in DHS custody at the end of 2019 (in mid-2019, the number shot to over 55,000).
“That kind of mass collection alters the purpose of DNA collection from one of criminal investigation basically to population surveillance, which is basically contrary to our basic notions of a free, trusting, autonomous society,” ACLU Speech, Privacy, and Technology Project staff attorney Vera Eidelman told the Times last year.
Expert human pathologists typically require around 30 minutes to diagnose brain tumors from tissue samples extracted during surgery. A new artificially intelligent system can do it in less than 150 seconds—and it does so more accurately than its human counterparts.
New research published today in Nature Medicine describes a novel diagnostic technique that pairs artificial intelligence with an advanced optical imaging method. The system can perform rapid and accurate diagnoses of brain tumors in practically real time, while the patient is still on the operating table. In tests, the AI made diagnoses that were slightly more accurate than those made by human pathologists, and in a fraction of the time. Excitingly, the new system could be used in settings where expert neuropathologists aren’t available, and it holds promise as a technique that could diagnose other forms of cancer as well.
[…]
New York University neuroscientist Daniel Orringer and his colleagues developed a diagnostic technique that combined a powerful new optical imaging technique, called stimulated Raman histology (SRH), with an artificially intelligent deep neural network. SRH uses scattered laser light to illuminate features not normally seen in standard imaging techniques.
[…]
To create the deep neural network, the scientists trained the system on 2.5 million images taken from 415 patients. By the end of the training, the AI could categorize tissue into any of 13 common forms of brain tumors, such as malignant glioma, lymphoma, metastatic tumors, diffuse astrocytoma, and meningioma.
A clinical trial involving 278 brain tumor and epilepsy patients across three different medical institutions was then set up to test the efficacy of the system. SRH images were evaluated by either human experts or the AI. Looking at the results, the AI correctly identified the tumor 94.6 percent of the time, while the human neuropathologists were accurate 93.9 percent of the time. Interestingly, the errors made by humans were different from the errors made by the AI. This is actually good news, because it suggests the nature of the AI’s mistakes can be accounted for and corrected in the future, resulting in an even more accurate system, according to the authors.
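Why non-overlapping errors matter can be shown with a toy example (the case counts and labels below are hypothetical, not the study’s data): when two readers err on disjoint cases, an oracle that takes whichever reading is correct gets every case right, which is the headroom a combined human-plus-AI workflow could in principle exploit.

```python
# Toy illustration with hypothetical data: disjoint error sets mean a combined
# reading can, in the best case, outperform both readers individually.

cases = list(range(20))
truth = [1] * 20                                    # ground-truth label per case
ai = [0 if i == 3 else 1 for i in cases]            # the AI errs only on case 3
human = [0 if i in (7, 11) else 1 for i in cases]   # humans err on cases 7 and 11

def accuracy(pred):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# Oracle combination: because the error sets {3} and {7, 11} are disjoint,
# taking whichever reading is correct resolves every case.
combined = [a if a == t else h for a, h, t in zip(ai, human, truth)]

print(accuracy(ai), accuracy(human), accuracy(combined))  # 0.95 0.9 1.0
```

A real system cannot consult an oracle, of course; the point is only that disjoint errors leave room for improvement, whereas perfectly overlapping errors would not.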
“SRH will revolutionize the field of neuropathology by improving decision-making during surgery and providing expert-level assessment in the hospitals where trained neuropathologists are not available,” said Matija Snuderl, a co-author of the study and an associate professor at NYU Grossman School of Medicine, in the press release.
The most direct and strongest evidence for an accelerating universe with dark energy is provided by distance measurements using type Ia supernovae (SN Ia) for galaxies at high redshift. This result is based on the assumption that the corrected luminosity of SN Ia through the empirical standardization does not evolve with redshift.
New observations and analysis made by a team of astronomers at Yonsei University (Seoul, South Korea), together with their collaborators at Lyon University and KASI, show, however, that this key assumption is most likely in error. The team has performed very high-quality (signal-to-noise ratio ~175) spectroscopic observations to cover most of the reported nearby early-type host galaxies of SN Ia, from which they obtained the most direct and reliable measurements of population ages for these host galaxies. They find a significant correlation between SN luminosity and stellar population age at a 99.5 percent confidence level. As such, this is the most direct and stringent test ever made for the luminosity evolution of SN Ia. Since SN progenitors in host galaxies are getting younger with redshift (look-back time), this result inevitably indicates a serious systematic bias with redshift in SN cosmology. Taken at face value, the luminosity evolution of SN is significant enough to question the very existence of dark energy. When the luminosity evolution of SN is properly taken into account, the team found that the evidence for the existence of dark energy simply goes away (see Figure 1).
Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who led the project said, “Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption.”
Other cosmological probes, such as the cosmic microwave background (CMB) and baryonic acoustic oscillations (BAO), are also known to provide some indirect and “circumstantial” evidence for dark energy, but it was recently suggested that the CMB from the Planck mission no longer supports the concordance cosmological model, which may require new physics (Di Valentino, Melchiorri, & Silk 2019). Some investigators have also shown that BAO and other low-redshift cosmological probes can be consistent with a non-accelerating universe without dark energy (see, for example, Tutusaus et al. 2017). In this respect, the present result showing the luminosity evolution mimicking dark energy in SN cosmology is crucial and very timely.
This result is reminiscent of the famous Tinsley-Sandage debate in the 1970s on luminosity evolution in observational cosmology, which led to the termination of the Sandage project originally designed to determine the fate of the universe.
This work, based on the team’s nine-year effort at the Las Campanas Observatory 2.5-meter telescope and the MMT 6.5-meter telescope, was presented at the 235th meeting of the American Astronomical Society held in Honolulu on January 5th (2:50 PM in the cosmology session, presentation No. 153.05). The paper has also been accepted for publication in the Astrophysical Journal and will appear in the January 2020 issue.
Now, some researchers have focused on the immune response, inducing it at the site of the tumor. And they do so by a remarkably simple method: injecting the tumor with the flu vaccine. As a bonus, the mice it was tested on were successfully immunized, too.
Revving up the immune system
This is one of those ideas that seems nuts but had so many earlier results pointing toward it working that it was really just a matter of time before someone tried it. To understand it, you have to overcome the idea that the immune system is always diffuse, composed of cells that wander the bloodstream. Instead, immune cells organize at the sites of infections (or tumors), where they communicate with each other to both organize an attack and limit that attack so that healthy tissue isn’t also targeted.
From this perspective, the immune system’s inability to eliminate tumor cells isn’t only the product of their similarities to healthy cells. It’s also the product of the signaling networks that help restrain the immune system to prevent it from attacking normal cells. A number of recently developed drugs help release this self-imposed limit, winning their developers Nobel Prizes in the process. These drugs convert a “cold” immune response, dominated by signaling that shuts things down, into a “hot” one that is able to attack a tumor.
[…]
To check whether something similar might be happening in humans, the researchers identified over 30,000 people being treated for lung cancer and found those who also received an influenza diagnosis. You might expect that the combination of the flu and cancer would be very difficult for those patients, but instead, they had lower mortality than the patients who didn’t get the flu.
[…]
the researchers obtained this year’s flu vaccine and injected it into the sites of tumors. Not only was tumor growth slowed, but the mice ended up immune to the flu virus.
Oddly, this wasn’t true for every flu vaccine. Some vaccines contain chemicals called adjuvants that enhance the immune system’s memory, promoting the formation of a long-term response to pathogens. When a vaccine containing one of these chemicals was used, the immune system wasn’t stimulated to limit the tumors’ growth.
This suggests that it’s less a matter of stimulating the immune system and more an issue of triggering it to attack immediately. But this is one of the things that will need to be sorted out with further study.
An explosive leak of tens of thousands of documents from the defunct data firm Cambridge Analytica is set to expose the inner workings of the company that collapsed after the Observer revealed it had misappropriated 87 million Facebook profiles.
More than 100,000 documents relating to work in 68 countries that will lay bare the global infrastructure of an operation used to manipulate voters on “an industrial scale” are set to be released over the next months.
It comes as Christopher Steele, the ex-head of MI6’s Russia desk and the intelligence expert behind the so-called “Steele dossier” into Trump’s relationship with Russia, said that while the company had closed down, the failure to properly punish bad actors meant that the prospects for manipulation of the US election this year were even worse.
The release of documents began on New Year’s Day on an anonymous Twitter account, @HindsightFiles, with links to material on elections in Malaysia, Kenya and Brazil. The documents were revealed to have come from Brittany Kaiser, an ex-Cambridge Analytica employee turned whistleblower, and to be the same ones subpoenaed by Robert Mueller’s investigation into Russian interference in the 2016 presidential election.
The Trump administration will make it more difficult to export artificial intelligence software as of next week, part of a bid to keep sensitive technologies out of the hands of rival powers like China.
Under a new rule that goes into effect on Monday, companies that export certain types of geospatial imagery software from the United States must apply for a license to send it overseas except when it is being shipped to Canada.
The measure is the first to be finalized by the Commerce Department under a mandate from a 2018 law, which tasked the agency with writing rules to boost oversight of exports of sensitive technology to adversaries like China, for economic and security reasons.
Reuters first reported that the agency was finalizing a set of narrow rules to limit such exports, a boon to U.S. industry, which had feared a much tougher crackdown on sales abroad.
Just in case you forgot about encryption products, Clipper chips, etc.: US products were weakened with backdoors, which meant (a) no one wanted US products and (b) there was wildfire growth of non-US encryption products. So the US goal of limiting cryptography failed, and at a cost to US producers.
Instead of a rigid panel wrapped in fabric, Bosch’s Virtual Visor features an LCD panel that can be flipped down when the sun is hanging out on the horizon. The panel works alongside a driver-facing camera whose live video feed is processed by a custom-trained AI to recognize facial features like the nose, mouth, and, most importantly, the eyes. The camera system should recognize shadows cast on the driver’s eyes, and it uses this ability to darken only the areas of the LCD visor where intense sunlight would otherwise pass through and impair the driver’s vision. The region of the visor that’s darkened is constantly changing based on both the vehicle’s and the driver’s movements, but the rest should remain transparent to provide a less obstructed view of the road and other vehicles ahead.
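A minimal sketch of the selective-darkening idea, under heavily simplified geometry: given the driver’s eye position and the sun’s direction, darken only the cells near where the eye-to-sun ray crosses the visor. The grid size, the flat-visor midpoint projection, and the cell radius are all illustrative assumptions, not Bosch’s actual algorithm.

```python
# Hypothetical sketch: darken only the visor cells between the eyes and the sun.
# Grid size and geometry are illustrative assumptions, not Bosch's design.

GRID_W, GRID_H = 12, 6  # imaginary LCD visor divided into a coarse cell grid

def cells_to_darken(eye_xy, sun_xy, radius=1):
    """Return the set of grid cells to darken around the eye-to-sun crossing point."""
    # For a flat visor halfway between the eyes and the sun, the crossing point
    # is roughly the midpoint in this simplified 2-D projection.
    cx = round((eye_xy[0] + sun_xy[0]) / 2)
    cy = round((eye_xy[1] + sun_xy[1]) / 2)
    return {(x, y)
            for x in range(max(0, cx - radius), min(GRID_W, cx + radius + 1))
            for y in range(max(0, cy - radius), min(GRID_H, cy + radius + 1))}

# As the driver's head moves, the darkened patch moves with it;
# every other cell stays transparent.
print(sorted(cells_to_darken((2, 2), (8, 2))))
```

Recomputing this patch every frame from the camera feed is what keeps most of the visor transparent while still blocking glare on the eyes.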
The Virtual Visor actually started life as a side project for three of Bosch’s powertrain engineers, who developed it in their free time and harvested the parts they needed from a discarded computer monitor. When, if ever, the feature will start showing up as an option in new cars remains to be seen. If you’ve ever dropped your phone or poked at a screen too hard, you’re already aware of how fragile LCD panels can be, so there will need to be lots of in-vehicle testing before this ever goes mainstream. But it’s a clever innovation using technology that at this point is relatively cheap and readily available, so hopefully this is an upgrade that’s not too far away.
Soundbar and smart-speaker-flinger Sonos is starting the new year with the wrong kind of publicity.
Customers and netizens are protesting against its policy of deliberately rendering working systems unusable, which is bad for the environment as it sends devices prematurely to an electronic waste graveyard.
The policy is also hazardous for those who unknowingly purchase a disabled device on the second-hand market, or even for users who perhaps mistake “recycle” for “reset”.
The culprit is Sonos’s so-called “Trade Up Program” which gives customers a 30 per cent discount off a new device, provided they follow steps to place their existing hardware into “Recycle mode”. Sonos has explained that “when you recycle an eligible Sonos product, you are choosing to permanently deactivate it. Once you confirm you’d like to recycle your product, the decision cannot be reversed.” There is a 21-day countdown (giving you time to receive your shiny new hardware) and then it is useless, “even if the product has been reset to its factory settings.”
Sonos suggests taking the now useless gadget to a local e-waste recycling centre, or sending it back to Sonos, though it remarks that scrapping it locally is “more eco-friendly than shipping it to Sonos”. In fact, agreeing either to return it or to use a “certified electronics recycler” is part of the terms and conditions, though the obvious question is how well this is enforced or whether customers even notice this detail when participating in the scheme.
The truth of course is that no recycling option is eco-friendly in comparison to someone continuing to enjoy the device doing what it does best, which is to play music. Even if a user is conscientious about finding an electronic waste recycling centre, there is a human and environmental cost involved, and not all parts can be recycled.
Sonos has posted on the subject of sustainability and has a “director of sustainability”, Mark Heintz, making its “Trade Up” policy even harder to understand.
Why not allow these products to be resold or reused? Community manager Ryan S said: “While we’re proud of how long our products last, we don’t really want these old, second-hand products to be the first experience a new customer has with Sonos.”
While this makes perfect business sense for Sonos, it is a weak rationale from an environmental perspective. Reactions like this one on Twitter are common. “I’ve bought and recommended my last Sonos product. Please change your practice, at the very least be honest about it and don’t flash the sustainability card for something that’s clearly not.”
This is the second day of the new decade, and the world’s largest floating wind farm is already doing its damn thing and generating electricity.
Located off the coast of Portugal, the WindFloat Atlantic wind farm connected to the grid on New Year’s Eve. And this is only the first of the project’s three platforms. Once all go online, the floating wind farm will be able to produce enough energy for about 60,000 homes a year. Like many European countries (including Denmark and the UK), Portugal has been investing heavily in wind as a viable clean energy option.
If you know nothing else about particle accelerators, you probably know that they’re big — sometimes miles long. But a new approach from Stanford researchers has led to an accelerator shorter from end to end than a human hair is wide.
The general idea behind particle accelerators is that they’re a long line of radiation emitters that smack the target particle with radiation at the exact right time to propel it forward a little faster than before. The problem is that depending on the radiation you use and the speed and resultant energy you want to produce, these things can get real big, real fast.
That also limits their applications; you can’t exactly put a particle accelerator in your lab or clinic if they’re half a kilometer long and take megawatts to run. Something smaller could be useful, even if it was nowhere near those power levels — and that’s what these Stanford scientists set out to make.
“We want to miniaturize accelerator technology in a way that makes it a more accessible research tool,” explained project lead Jelena Vuckovic in a Stanford news release.
But this wasn’t designed like a traditional particle accelerator, such as the Large Hadron Collider or the one at collaborator SLAC National Accelerator Laboratory. Instead of engineering it from the bottom up, the team fed their requirements to an “inverse design algorithm” that produced the kind of energy pattern they needed from the infrared radiation emitters they wanted to use.
That’s partly because infrared radiation has a much shorter wavelength than something like microwaves, meaning the mechanisms themselves can be made much smaller — perhaps too small to adequately design the ordinary way.
The algorithm’s solution to the team’s requirements led to an unusual structure that looks more like a Rorschach test than a particle accelerator. But these blobs and channels are precisely contoured to guide infrared laser light pulses in such a way that they push electrons along the center up to a significant proportion of the speed of light.
The resulting “accelerator on a chip” is only a few dozen microns across, making it comfortably smaller than a human hair and more than possible to stack a few on the head of a pin. A couple thousand of them, really.
And it will take a couple thousand of them to get the electrons up to useful energy levels, but don’t worry: that’s all part of the plan. The chips are fully integrated and can easily be placed in series to create longer assemblies that reach higher energies.
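Some rough arithmetic shows why “a couple thousand” stages is the right order of magnitude; the per-stage energy gain and the target energy below are illustrative assumptions for the sake of the estimate, not figures from the Stanford paper.

```python
# Back-of-the-envelope estimate of the stage count. Both numbers below are
# assumptions chosen for illustration, not values from the published work.

gain_per_stage_keV = 0.5   # assumed energy gain from one chip-sized stage
target_energy_MeV = 1.0    # assumed usefully energetic electron beam

stages = target_energy_MeV * 1000 / gain_per_stage_keV
print(int(stages))  # 2000 stages under these assumed numbers
```

The estimate scales linearly: halve the per-stage gain or double the target energy and the chain doubles in length, which is why the chips being easy to series-connect matters.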
These won’t be rivaling macro-size accelerators like SLAC’s or the Large Hadron Collider, but they could be much more useful for research and clinical applications where planet-destroying power levels aren’t required. For instance, a chip-sized electron accelerator might be able to direct radiation into a tumor surgically rather than through the skin.
More than 1,000 celebrities, government employees and politicians who have received honours had their home and work addresses posted on a government website, the Guardian can reveal.
The accidental disclosure of the tranche of personal details is likely to be considered a significant security breach, particularly as senior police and Ministry of Defence staff were among those whose addresses were made public.
Many of the more than a dozen MoD employees and senior counter-terrorism officers who received honours in the new year list had their home addresses revealed in a downloadable list, along with countless others who may believe the disclosure has put them in a vulnerable position.
Others included Jonathan Jones, the permanent secretary of the government’s legal department, and John Manzoni, the Cabinet Office permanent secretary. Less well-known figures included academics, Holocaust survivors, prison staff and community and faith leaders.
It is thought the document seen by the Guardian, which contains the details of 1,097 people, went online at 10.30pm on Friday and was taken down in the early hours of Saturday.
The vast majority of people on the list had their house numbers, street names and postcodes included.
Security camera startup Wyze has confirmed it suffered a data leak this month that may have left the personal information of millions of its customers exposed on the internet. No passwords or financial information were exposed, but email addresses, Wi-Fi network IDs and body metrics were left unprotected from Dec. 4 through Dec. 26, the company said Friday.
More than 2.4 million Wyze customers were affected by the leak, according to cybersecurity firm Twelve Security, which first reported on the leak.
The data was accidentally left exposed when it was transferred to a new database to make it easier to query, but a company employee failed to maintain security protocols during the process, Wyze co-founder Dongsheng Song wrote in a forum post.
“We are still looking into this event to figure out why and how this happened,” he wrote.
In an update Sunday, Song said Wyze discovered a second unprotected database during its investigation of the data leak. It’s unclear what information was stored in this database, but Song said passwords and personal financial data weren’t included.