CheckPeople: why is a 22GB database containing 56 million US folks’ aggregated personal details sitting on the open internet on a Chinese IP address?

A database containing the personal details of 56.25m US residents – from names and home addresses to phone numbers and ages – has been found on the public internet, served from a computer with a Chinese IP address, bizarrely enough.

The information silo appears to belong to Florida-based CheckPeople.com, which is a typical people-finder website: for a fee, you can enter someone’s name, and it will look up their current and past addresses, phone numbers, email addresses, names of relatives, and even criminal records in some cases, all presumably gathered from public records.

However, all of this information is not only sitting in one place for spammers, miscreants, and other netizens to download in bulk, but it’s being served from an IP address associated with Alibaba’s web hosting wing in Hangzhou, east China, for reasons unknown. It’s a perfect illustration that not only is this sort of personal information in circulation, but that it can also end up in the hands of foreign adversaries.

It just goes to show how haphazardly people’s privacy is treated these days.

A white-hat hacker operating under the handle Lynx discovered the trove online, and tipped off The Register. He told us he found the 22GB database exposed on the internet, including metadata that links the collection to CheckPeople.com. We have withheld further details of the security blunder for privacy protection reasons.

The repository’s contents are likely scraped from public records, though together they provide rather detailed profiles on tens of millions of folks in America. Basically, CheckPeople.com has done the hard work of aggregating public personal records, and this exposed NoSQL database makes that info even easier to crawl and process.

Source: Why is a 22GB database containing 56 million US folks’ personal details sitting on the open internet using a Chinese IP address? Seriously, why? • The Register

FBI Surveillance Vendor Threatens to Sue Tech Reporters for Heinous Crime of Reporting on the Tombstones, Tree Stumps, and Vacuum Cleaners It Sells With Spy Cams in Them

Motherboard on Thursday revealed that a “secretive” U.S. government vendor whose surveillance products are not publicly advertised has been marketing hidden cameras disguised as seemingly ordinary objects—vacuum cleaners, tree stumps, and tombstones—to the Federal Bureau of Investigation, among other law enforcement agencies, and the military, in addition to, ahem, “select clients.”

Yes, that’s tombstone cams, because absolutely nothing in this world is sacred.

The vendor, Special Services Group (SSG), was apparently none too pleased to learn that Motherboard planned to publish photographs and descriptions of the company’s surveillance toys. When reached for comment, SSG reportedly threatened to sue the tech publication, which VICE launched in 2009.

According to Motherboard, a brochure listing SSG’s products (starting on page 93) was obtained through public records requests filed with the Irvine Police Department in California.

Freddy Martinez, a policy analyst at government accountability group Open The Government, and Beryl Lipton, a reporter/researcher at the government transparency nonprofit MuckRock, both filed requests and obtained the SSG brochure, Motherboard said.

In warning the site not to disclose the brochure, SSG’s attorney reportedly claimed the document is protected under the International Traffic in Arms Regulations (ITAR), though the notice did not point to any specific section of the law, which was enacted to regulate arms exports at the height of the Cold War.

ITAR does prohibit the public disclosure of certain technical data related to military munitions. It’s unlikely, however, that a camera designed to look like a baby car seat—an actual SSG product called a “Rapid Vehicle Deployment Kit”—is covered under the law, which encompasses a wide range of actual military equipment that can’t be replicated in a home garage, such as space launch vehicles, nuclear reactors, and anti-helicopter mines.

ITAR explicitly does not cover “basic marketing information” or information “generally accessible or available to the public.”

Source: FBI Surveillance Vendor Threatens to Sue Tech Reporters for Heinous Crime of Doing Journalism

Lawsuit against cinema for refusing cash – and thus slurping private data

Michiel Jonker from Arnhem has sued a cinema that, since moving location, refuses to accept cash at the box office. All payments must be made by debit card (PIN). Jonker argues that this forces visitors to let the cinema process their personal data.

He tried something similar in 2018, but that complaint was rejected when the Dutch data protection authority decided that no one is required to accept cash as legal tender.

Jonker now argues that acceptance of cash should be required if the payment data can be used to profile his movie preferences afterwards.

Good luck to him: I agree that cash is legal tender, and the move to a cash-free society is a privacy nightmare and potentially disastrous – see Hong Kong, for example.

Source: Rechtszaak tegen weigering van contant geld door bioscoop – Emerce

A Closer Look Into Neon and Its Artificial Humans

In short, a Neon is an artificial intelligence in the vein of Halo’s Cortana or Red Dwarf’s Holly, a computer-generated life form that can think and learn on its own, control its own virtual body, has a unique personality, and retains its own set of memories, or at least that’s the goal. A Neon doesn’t have a physical body (aside from the processor and computer components that its software runs on), so in a way, you can sort of think of a Neon as a cyberbrain from Ghost in the Shell too. Mistry describes Neon as a way to discover the “soul of tech.”

Here’s a look at three Neons, two of which were part of Mistry’s announcement presentation at CES.
Graphic: Neon

Whatever.

But unlike a lot of the AIs we interact with today, like Siri and Alexa, Neons aren’t digital assistants. They weren’t created specifically to help humans and they aren’t supposed to be all-knowing. They are fallible and have emotions, possibly even free will, and presumably they have the potential to die. Though that last one isn’t quite clear.

OK, but those things look A LOT like humans. What’s the deal?

That’s because Neons were originally modeled on humans. The company used computers to record different people’s faces, expressions, and bodies, and then all that info was rolled into a platform called Core R3, which forms the basis of how Neons appear to look, move, and react so naturally.

Mistry showed how Neon started out by recording human movements before transitioning to having Neon’s Core R3 engine generate animations on its own.
Photo: Sam Rutherford (Gizmodo)

If you break it down even further, the three Rs in Core R3 stand for reality, realtime, and responsiveness, each R representing a major tenet of what defines a Neon. Reality is meant to show that a Neon is its own thing, not simply a copy of motion-capture footage from an actor or something else. Realtime is supposed to signify that a Neon isn’t just preprogrammed code, scripted to perform a certain task without variation like you would get from a robot. Finally, responsiveness represents that Neons, like humans, can react to stimuli, with Mistry claiming latency as low as a few milliseconds.

Whoo, that’s quite a doozy. Is that it?


Oh, I see, a computer-generated human simulation with emotions, free will, and the ability to die isn’t enough for you? Well, there’s also Spectra, which is Neon’s (the company) learning platform that’s designed to teach Neons (the artificial humans) how to learn new skills, develop emotions, retain memories, and more. It’s the other half of the puzzle. Core R3 is responsible for the look, mannerisms, and animations of a Neon’s general appearance, including their voice. Spectra is responsible for a Neon’s personality and intelligence.

Oh yeah, did we mention they can talk too?

So is Neon Skynet?

Yes. No. Maybe. It’s too early to tell.

That all sounds nice, but what actually happened at Neon’s CES presentation?

After explaining the concept behind Neon’s artificial humans and how the company started off creating their appearance by recording and modeling humans, Mistry showed how, after becoming adequately sophisticated, the Core R3 engine allows a Neon to animate a realistic-looking avatar on its own.

From left to right, meet Karen, Cathy, and Maya.
Photo: Sam Rutherford (Gizmodo)

Then, Mistry and another Neon employee attempted to present a live demo of a Neon’s abilities, which is sort of when things went awry. To Neon’s credit, Mistry did preface everything by saying the tech is still very early, and given the complexity of the task and issues with doing a live demo at CES, it’s not really a surprise the Neon team ran into technical difficulties.

At first, the demo went smoothly, as Mistry introduced three Neons whose avatars were displayed in a row of nearby displays: Karen, an airline worker; Cathy, a yoga instructor; and Maya, a student. From there, each Neon was commanded via controls on a nearby tablet to do various things like laugh, smile, and talk. To be clear, in this case the Neons weren’t moving on their own but were manually controlled to demonstrate their lifelike mannerisms.

If you’re thinking of a digital version of the creepy Sophia-bot, you’re not far off.

For the most part, each Neon did appear quite realistic, avoiding nearly all the awkwardness you get from even high-quality CGI like the kind Disney used to animate young Princess Leia in the recent Star Wars movies. In fact, when the Neons were asked to move and laugh, the crowd at Neon’s booth let out a small murmur of shock and awe (and maybe fear).

From there, Mistry introduced a fourth Neon along with a visualization of the Neon’s neural network, which is essentially an image of its brain. And after getting the Neon to talk in English, Chinese, and Korean (which sounded a bit robotic and less natural than what you’d hear from Alexa or the Google Assistant), Mistry attempted to demo even more actions. But that’s when the demo seemed to freeze, with the Neon not responding properly to commands.

At this point, Mistry apologized to the crowd and promised that the team would work on fixing things so it could run through more in-depth demos later this week. I’m hoping to revisit the Neon booth to see if that’s the case, so stay tuned for potential updates.

So what’s the actual product? There’s a product, right?

Yes, or at least there will be eventually. Right now, even in such an early state, Mistry said he just wanted to share his work with the world. However, sometime near the end of 2020, Neon plans to launch a beta version of the Neon software at Neon World 2020, a convention dedicated to all things Neon. This software will feature Core R3 and will allow users to tinker with making their own Neons, while Neon the company continues to work on developing its Spectra software to give Neons life and emotion.

How much will Neon cost? What is Neon’s business model?

Supposedly there isn’t one. Mistry says that instead of worrying about how to make money, he just wants Neon to “make a positive impact.” That said, Mistry also mentioned that Neon (the platform) would be made available to business partners, who may be able to tweak the Neon software to sell things or serve in call centers or something. The bottom line is this: If Neon can pull off what it’s aiming to pull off, there would be a healthy business in replacing countless service workers.

Can I fuck a Neon?

Neons are going to be our friends.
Photo: Sam Rutherford (Gizmodo)

Get your mind out of the gutter. But at some point, probably yes. Everything we do eventually comes around to sex, right? Furthermore, this does bring up some interesting concerns about consent.

How can I learn more?

Go to Neon.life.

Really?

Really.

So what happens next?

Neon is going to Neon, I don’t know. I’m just a messenger trying to explain the latest chapter of CES quackery. Don’t get me wrong, the idea behind Neon is super interesting and is something sci-fi writers have been writing about for decades. But for right now, it’s not even clear how legit all this is.

Here are some of the core building blocks of Neon’s software.
Photo: Sam Rutherford (Gizmodo)

It’s unclear how much a Neon can do on its own, and how long it will take for Neon to live up to its goal of creating a truly independent artificial human. What is really real? It’s weird, ambitious, and could be the start of a new era in human development. For now? It’s still quackery.

Source: A Closer Look Into Neon and Its Artificial Humans

Amazon fired four workers who secretly snooped on Ring doorbell camera footage

Amazon’s Ring home security camera biz says it has fired multiple employees caught covertly watching video feeds from customer devices.

The admission came in a letter [PDF] sent in response to questions raised by US Senators critical of Ring’s privacy practices.

Ring recounted how, on four separate occasions, workers were let go for overstepping their access privileges and poring over customer video files and other data inappropriately.

“Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” the gizmo flinger wrote.

“Although each of the individuals involved in these incidents was authorized to view video data, the attempted access to that data exceeded what was necessary for their job functions.

“In each instance, once Ring was made aware of the alleged conduct, Ring promptly investigated the incident, and after determining that the individual violated company policy, terminated the individual.”

This comes as Amazon attempts to justify its internal policies, particularly employee access to user video information for support and research-and-development purposes.

Source: Ring of fired: Amazon axes four workers who secretly snooped on netizens’ surveillance camera footage • The Register

Delta and Misapplied Sciences introduce parallel reality – a display that shows different content to different people at the same time without augmentation

In a ritual I’ve undertaken at least a thousand times, I lift my head to consult an airport display and determine which gate my plane will depart from. Normally, that involves skimming through a sprawling list of flights to places I’m not going. This time, however, all I see is information meant just for me:

Hello Harry
Flight DL42 to SEA boards in 33 min
Gate C11, 16 min walk
Proceed to Checkpoint 2

Stranger still, a leather-jacketed guy standing next to me is looking at the same display at the same time—and all he sees is his own travel information:

Hello Albert
Flight DL11 to ATL boards in 47 min
Gate C26, 25 min walk
Proceed to Checkpoint 4

Okay, confession time: I’m not at an airport. Instead, I’m visiting the office of Misapplied Sciences, a Redmond, Washington, startup located in a dinky strip mall whose other tenants include a teppanyaki joint and a children’s hair salon. Albert is not another traveler but rather the company’s cofounder and CEO, Albert Ng. We’ve been play-acting our way through a demo of the company’s display, which can show different things to different people at one time—no special glasses, smartphone-camera trickery, or other intermediary technology required. The company calls it parallel reality.

The simulated airport terminal is only one of the scenarios that Ng and his cofounder Dave Thompson show off for me in their headquarters. They also set up a mock store with a Pikachu doll, a Katy Perry CD, a James Bond DVD, and other goods, all in front of one screen. When I glance up at it, I see video related to whichever item I’m standing near. In a makeshift movie theater, I watch The Sound of Music with closed captions in English on a display above the movie screen, while Ng sits one seat over and sees Chinese captions on the same display. And I flick a wand to control colored lights on Seattle’s Space Needle (or for the sake of the demo, a large poster of it).

At one point, just to definitively prove that their screen can show multiple images at once, Ng and Thompson push a grid of mirrors up in front of it. Even though they’re all reflecting the same screen, each shows an animated sequence based on the flag or map of a different country.
[…]
The potential applications for the technology—from outdoor advertising to traffic signs to theme-park entertainment—are many. But if all goes according to plan, the first consumers who will see it in action will be travelers at the Detroit Metropolitan Airport. Starting in the middle of this year, Delta Air Lines plans to offer parallel-reality signage, located just past TSA, that can simultaneously show almost 100 customers unique information on their flights, once they’ve scanned their boarding passes. Available in English, Spanish, Japanese, Korean, and other languages, it will be a slicked-up, real-world deployment of the demo I got in Redmond.
[…]

At a January 2014 hackathon, a researcher named Paul Dietz came up with an idea to synchronize crowds in stadiums via a smartphone app that gave individual spectators cues to stand up, sit down, or hold up a card. The idea was to “use people as pixels,” he says, by turning the entire audience into a giant, human-powered animated display. It worked. “But the participants complained that they were so busy looking at their phones, they couldn’t enjoy the effect,” Dietz remembers.

That led him to wonder if there was a more elegant way to signal individuals in a crowd, such as beaming different colors to different people. As part of this investigation, he set up a pocket projector in an atrium and projected stripes of red and green. “The projector was very dim,” he says. “But when I looked into it from across the atrium, it was this beautiful, bright, saturated green light. Then I moved over a few inches into a red stripe, and then it looked like an intense red light.”

Based on this discovery, Dietz concluded that it might be possible to create displays that precisely aimed differing images at people depending on their position. Later in 2014, that epiphany gave birth to Misapplied Sciences, which he cofounded with Ng—who’d been his Microsoft intern while studying high-performance computing at Stanford—and Thompson, whom Dietz had met when both were creating theme-park experiences at Walt Disney Imagineering.

[…]

the basic principle—directing different colors in different directions—remains the same. With garden-variety screens, the whole idea is to create a consistent picture, and the wider the viewing angle, the better. By contrast, with Misapplied’s displays, “at one time, a single pixel can emit green light towards you,” says Ng. “Whereas simultaneously that same pixel can emit red light to the person next to you.”

The parallel-reality effect is all in the pixels. [Image: courtesy of Misapplied Sciences]

In one version of the tech, it can control the display in 18,000 directions; in another, meant for large-scale outdoor signage, it can control it in a million. The company has engineered display modules that can be arranged, Lego-like, in different configurations that allow for signage of varying sizes and shapes. A Windows PC performs the heavy computational lifting, and there’s software that lets a user assign different images to different viewing positions by pointing and clicking. As displays reach the market, Ng says that the price will “rival that of advanced LED video walls.” Not cheap, maybe, but also not impossibly stratospheric.
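As a toy illustration of the idea (all class and function names here are invented, and the geometry is vastly simplified — the real hardware steers light optically, in up to a million directions, not eight): each pixel can be thought of as holding a small table mapping viewing directions to colors, which the point-and-click authoring software fills in per viewing position.

```python
import math  # kept for clarity if you extend this to 2D viewer positions


class DirectionalPixel:
    """Toy model of one parallel-reality pixel: a color per emission direction."""

    def __init__(self, n_directions=8):
        # One color slot per controllable emission direction.
        self.n = n_directions
        self.slots = ["black"] * n_directions

    def _slot(self, angle_deg):
        # Quantize a -90..+90 degree viewing angle into one of n slots.
        frac = (angle_deg + 90) / 180
        return min(self.n - 1, int(frac * self.n))

    def assign(self, angle_deg, color):
        """Point-and-click analogue: aim a color at a viewing angle."""
        self.slots[self._slot(angle_deg)] = color

    def seen_from(self, angle_deg):
        """What a viewer standing at this angle perceives from the pixel."""
        return self.slots[self._slot(angle_deg)]


pixel = DirectionalPixel()
pixel.assign(-30, "green")   # viewer standing to the left sees green
pixel.assign(+30, "red")     # viewer standing to the right sees red
print(pixel.seen_from(-30))  # green
print(pixel.seen_from(+30))  # red
```

A full sign would simply be a grid of such pixels, with the authoring software assigning a whole image (rather than one color) per viewing position.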

For all its science-fiction feel, parallel reality does have its gotchas, at least in its current incarnation. In the demos I saw, the pixels were blocky, with a noticeable amount of space around them—plus black bezels around the modules that make up a sign—giving the displays a look reminiscent of a sporting-arena electronic sign from a few generations back. They’re also capable of generating only 256 colors, so photos and videos aren’t exactly hyperrealistic. Perhaps the biggest wrinkle is that you need to stand at least 15 feet back for the parallel-reality effect to work. (Venture too close, and you see one mishmashed image.)

[…]

The other part of the equation is figuring out which traveler is standing where, so people see their own flight details. Delta is accomplishing that with a bit of AI software and some ceiling-mounted cameras. When you scan your boarding pass, you get associated with your flight info—not through facial recognition, but simply as a discrete blob in the cameras’ view. As you roam near the parallel-reality display, the software keeps tabs on your location, so that the signage can point your information at your precise spot.
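A minimal sketch of the association logic described above, with every name invented — the point is only that a boarding-pass scan ties flight info to an anonymous tracked blob, and that nothing is retained once the traveler walks away:

```python
class GateSign:
    """Toy model of the Delta sign: anonymous blob IDs mapped to flight info."""

    def __init__(self):
        self.info_by_blob = {}  # blob id -> flight info (no identity stored)

    def on_scan(self, blob_id, flight_info):
        # Boarding-pass scan: associate the tracked blob with a flight.
        self.info_by_blob[blob_id] = flight_info

    def render_for(self, blob_id):
        # What the display aims at this blob's current position.
        return self.info_by_blob.get(blob_id, "Scan your boarding pass")

    def on_leave(self, blob_id):
        # Traveler moves on: the association is dropped, nothing persists.
        self.info_by_blob.pop(blob_id, None)


sign = GateSign()
sign.on_scan("blob-17", "DL42 to SEA, gate C11")
print(sign.render_for("blob-17"))  # DL42 to SEA, gate C11
sign.on_leave("blob-17")
print(sign.render_for("blob-17"))  # Scan your boarding pass
```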

Delta is taking pains to alleviate any privacy concerns relating to this system. “It’s all going to be housed on Delta systems and Delta software, and it’s always going to be opt-in,” says Robbie Schaefer, general manager of Delta’s airport customer experience. The software won’t store anything once a customer moves on, and the display won’t display any highly sensitive information. (It’s possible to steal a peek at other people’s displays, but only by invading their personal space—which is what I did to Ng, at his invitation, to see for myself.)

The other demos I witnessed at Misapplied’s office involved less tracking of individuals and handling of their personal data. In the retail-store scenario, for instance, all that mattered was which product I was standing in front of. And in the captioning one, the display only needed to know what language to display for each seat, which involved audience members using a smartphone app to scan a QR code on their seat and then select a language.

Source: Delta and Misapplied Sciences introduce parallel reality

A Bed That Cools and Heats Each Sleeper Separately, sets the softness per side and also adjusts automatically to silence snorers

Sleep Number first made a name for itself with its line of adjustable air-filled mattresses that allowed a pair of sleepers to each select how firm or soft they wanted their side of the bed to be. The preferred setting was known as a user’s Sleep Number, and over the years the company has introduced many ways to make it easier to fine-tune its beds for a good night’s sleep, including its smart SleepIQ technology which tracks movements and breathing patterns to help narrow down which comfort settings are ideal, as well as automatic adjustments in the middle of the night to silence a snorer.

At CES 2017, the company’s Sleep Number 360 bed introduced a new feature that learned each user’s bedtime routines and then automatically pre-heated the foot of the bed to a specific temperature to make falling asleep easier and more comfortable. At CES 2020, the company is now expanding on that idea with its new Climate360 smart bed that can heat and cool the entire mattress based on each user’s dozing preferences.

Using a combination of sensors, advanced textiles, phase change materials (materials that absorb or release energy to aid in heating and cooling), evaporative cooling, and a ventilation system, the Climate360 bed can supposedly create and maintain a separate microclimate on each side of the bed, making adjustments throughout the night based on each sleeper’s movements, which indicate a level of discomfort. What isn’t built into the bed is a full air conditioning system, however, so the bed can only cool each side by about 12 degrees, but it can warm each side up to 100 degrees Fahrenheit if you prefer to sleep in an inferno.

The Climate360 bed goes through automatic routines throughout the night that Sleep Number has determined to be ideal for achieving a more restful sleep, including gently warming the bed ahead of bedtime to make it easier to drift off, and then cooling it once each user is asleep to help keep them comfortable.

Source: A Bed That Cools and Heats Each Sleeper Separately Will Save Countless Relationships

DHS Plan to Collect DNA From Migrant Detainees Will Begin Soon – because centralised databases with personally sensitive data in them are a great idea. Just ask the Jews how useful they were during WWII

The Trump administration’s plan to collect DNA evidence from migrants detained in U.S. Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) facilities will commence soon in the form of a 90-day pilot program in Detroit and Southwest Texas, CNN reported on Monday.

News of the plan first emerged in October, when the Department of Homeland Security told reporters that it wanted to collect DNA from migrants to detect “fraudulent family units,” including refugees applying for asylum at U.S. ports of entry. ICE started using DNA tests to screen asylum seekers at the border last year over similar concerns, claiming that the tests were designed to fight human traffickers. The tests will apply to those detained both temporarily and for longer periods of time, covering nearly all people held by immigration officials.

DHS announced the pilot program in a privacy assessment posted to its website on Monday. Per CNN, the pilot is a legal necessity before the agency revokes rules enacted in 2010 that exempt “migrants who weren’t facing criminal charges or those pending deportation proceedings” from the DNA Fingerprint Act of 2005, a change that will apply the program nationally. The pilot will involve U.S. Border Patrol agents collecting DNA from individuals aged 14-79 who are arrested and processed, as well as customs officers collecting DNA from individuals subject to continued detention or further proceedings.

According to the privacy assessment, U.S. citizens and permanent residents “who are being arrested or facing criminal charges” may have DNA collected by CBP or ICE personnel. All collected DNA will be sent to the FBI and stored in its Combined DNA Index System (CODIS), a set of national genetic information databases that includes forensic data, missing persons, and convicts, where it would be held for as long as the government sees fit.

Those who refuse to submit to DNA testing could face class A misdemeanor charges, the DHS wrote.

DHS acknowledged that because it has to mail the DNA samples to the FBI for processing and comparison against CODIS entries, it is unlikely that agents will be able to use the DNA for “public safety or investigative purposes prior to either an individual’s removal to his or her home country, release into the interior of the United States, or transfer to another federal agency.” ACLU attorney Stephen Kang told the New York Times that DHS appeared to be creating a “DNA bank of immigrants that have come through custody for no clear reason,” raising “a lot of very serious, practical concerns, I think, and real questions about coercion.”

The Times noted that last year, Border Patrol law enforcement directorate chief Brian Hastings wrote that even after policies and procedures were implemented, Border Patrol agents remained “not currently trained on DNA collection measures, health and safety precautions, or the appropriate handling of DNA samples for processing.”

U.S. immigration authorities held a record number of children over the fiscal year that ended in September 2019, with some 76,020 minors detained without their parents present. According to ICE, over 41,000 people were in DHS custody at the end of 2019 (in mid-2019, the number shot to over 55,000).

“That kind of mass collection alters the purpose of DNA collection from one of criminal investigation basically to population surveillance, which is basically contrary to our basic notions of a free, trusting, autonomous society,” ACLU Speech, Privacy, and Technology Project staff attorney Vera Eidelman told the Times last year.

Source: DHS Plan to Collect DNA From Migrant Detainees Will Begin Soon

During Brain Surgery, This AI Can Diagnose a Tumor in 2 Minutes

Expert human pathologists typically require around 30 minutes to diagnose brain tumors from tissue samples extracted during surgery. A new artificially intelligent system can do it in less than 150 seconds—and it does so more accurately than its human counterparts.

New research published today in Nature Medicine describes a novel diagnostic technique that leverages the power of artificial intelligence alongside an advanced optical imaging technique. The system can perform rapid and accurate diagnoses of brain tumors in practically real time, while the patient is still on the operating table. In tests, the AI made diagnoses that were slightly more accurate than those made by human pathologists, and in a fraction of the time. Excitingly, the new system could be used in settings where expert neuropathologists aren’t available, and it holds promise as a technique that could diagnose other forms of cancer as well.

[…]

New York University neuroscientist Daniel Orringer and his colleagues developed a diagnostic technique that combined a powerful new optical imaging technique, called stimulated Raman histology (SRH), with an artificially intelligent deep neural network. SRH uses scattered laser light to illuminate features not normally seen in standard imaging techniques.

[…]

To create the deep neural network, the scientists trained the system on 2.5 million images taken from 415 patients. By the end of the training, the AI could categorize tissue into any of 13 common forms of brain tumors, such as malignant glioma, lymphoma, metastatic tumors, diffuse astrocytoma, and meningioma.

A clinical trial involving 278 brain tumor and epilepsy patients and three different medical institutions was then set up to test the efficacy of the system. SRH images were evaluated by either human experts or the AI. Looking at the results, the AI correctly identified the tumor 94.6 percent of the time, while the human neuropathologists were accurate 93.9 percent of the time. Interestingly, the errors made by humans were different than the errors made by the AI. This is actually good news, because it suggests the nature of the AI’s mistakes can be accounted for and corrected in the future, resulting in an even more accurate system, according to the authors.
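To see why the non-overlapping errors are good news, here is a back-of-the-envelope calculation (my own arithmetic on the article’s accuracy figures, not a result from the paper): if the two error sets were statistically independent, a combination that trusts whichever diagnoser is right would miss very few cases.

```python
# Accuracy figures from the trial reported above.
ai_acc = 0.946     # AI correct 94.6% of the time
human_acc = 0.939  # human neuropathologists correct 93.9% of the time

# If the two error sets were independent, the chance that a given case
# is missed by BOTH diagnosers is the product of the error rates:
p_both_wrong = (1 - ai_acc) * (1 - human_acc)  # 0.054 * 0.061 ≈ 0.0033

# Upper bound on a combined system's accuracy under that assumption:
combined_ceiling = 1 - p_both_wrong
print(f"{combined_ceiling:.1%}")  # 99.7%
```

This independence assumption is idealized, but it illustrates the authors’ point: uncorrelated mistakes leave room for a combined human-plus-AI workflow to beat either alone.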

“SRH will revolutionize the field of neuropathology by improving decision-making during surgery and providing expert-level assessment in the hospitals where trained neuropathologists are not available,” said Matija Snuderl, a co-author of the study and an associate professor at NYU Grossman School of Medicine, in the press release.

Source: During Brain Surgery, This AI Can Diagnose a Tumor in 2 Minutes

New evidence shows that the key assumption made in the discovery of dark energy is in error

The most direct and strongest evidence for an accelerating universe with dark energy comes from distance measurements using type Ia supernovae (SN Ia) in galaxies at high redshift. This result rests on the assumption that the corrected luminosity of SN Ia, after empirical standardization, does not evolve with redshift.
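For background (standard SN-cosmology bookkeeping, not a formula from the article itself): standardized SN Ia yield distances through the distance modulus,

```latex
\mu = m - M = 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right),
```

where $m$ is the apparent magnitude, $M$ the standardized absolute magnitude, and $d_L$ the luminosity distance. Standardization assumes $M$ is the same at all redshifts; if $M$ instead drifts by $\Delta M(z)$, the inferred luminosity distance is biased by a factor of $10^{\Delta M(z)/5}$, which can mimic (or erase) apparent acceleration.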

New observations and analysis made by a team of astronomers at Yonsei University (Seoul, South Korea), together with their collaborators at Lyon University and KASI, show, however, that this key assumption is most likely in error. The team performed very high-quality (signal-to-noise ratio ~175) spectroscopic observations covering most of the reported nearby early-type host galaxies of SN Ia, from which they obtained the most direct and reliable measurements of population ages for these host galaxies. They find a significant correlation between SN luminosity and stellar population age at a 99.5 percent confidence level. As such, this is the most direct and stringent test ever made of the luminosity evolution of SN Ia. Since SN progenitors in host galaxies are getting younger with redshift (look-back time), this result inevitably indicates a serious systematic bias with redshift in SN cosmology. Taken at face value, the luminosity evolution of SN is significant enough to question the very existence of dark energy. When the luminosity evolution of SN is properly taken into account, the team found that the evidence for the existence of dark energy simply goes away (see Figure 1).
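What "a significant correlation at a 99.5 percent confidence level" means in practice can be illustrated with a toy version of the test. Everything here is invented for illustration (the slope, scatter, and sample size are not the team's numbers): we generate fake host-galaxy ages and luminosity offsets, compute the Pearson correlation, and turn it into a t-statistic, where t well above ~3 for a few dozen galaxies corresponds to confidence beyond 99 percent.

```python
import math
import random

random.seed(1)

# Toy data: host-galaxy population ages (Gyr) and standardized SN Ia
# luminosity offsets (mag). Slope and scatter are illustrative only.
n = 30
ages = [random.uniform(1, 12) for _ in range(n)]
offsets = [0.04 * a + random.gauss(0, 0.12) for a in ages]

# Pearson correlation coefficient, computed from scratch.
ma, mo = sum(ages) / n, sum(offsets) / n
cov = sum((a - ma) * (o - mo) for a, o in zip(ages, offsets))
r = cov / math.sqrt(sum((a - ma) ** 2 for a in ages)
                    * sum((o - mo) ** 2 for o in offsets))

# t-statistic for the null hypothesis of zero correlation (df = n - 2).
t = r * math.sqrt((n - 2) / (1 - r * r))
print(f"r = {r:.2f}, t = {t:.1f}")  # t >> 3 means far beyond 99% confidence
```

If such a slope is real, standardized SN Ia luminosities drift with host age, and hence with redshift, which is exactly the systematic bias the team argues mimics dark energy.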

Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who led the project, said, “Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption.”

Other cosmological probes, such as the cosmic microwave background (CMB) and baryonic acoustic oscillations (BAO), are also known to provide some indirect and “circumstantial” evidence for dark energy, but it was recently suggested that CMB data from the Planck mission no longer support the concordance cosmological model and may require new physics (Di Valentino, Melchiorri, & Silk 2019). Some investigators have also shown that BAO and other low-redshift cosmological probes can be consistent with a non-accelerating universe without dark energy (see, for example, Tutusaus et al. 2017). In this respect, the present result showing the luminosity evolution mimicking dark energy in SN cosmology is crucial and very timely.

This result is reminiscent of the famous Tinsley-Sandage debate in the 1970s on luminosity evolution in observational cosmology, which led to the termination of the Sandage project originally designed to determine the fate of the universe.

This work, based on the team’s 9-year effort at the Las Campanas Observatory 2.5-m telescope and the MMT 6.5-m telescope, was presented at the 235th meeting of the American Astronomical Society held in Honolulu on January 5th (2:50 PM in the cosmology session, presentation No. 153.05). Their paper has also been accepted for publication in the Astrophysical Journal and will appear in the January 2020 issue.

Source: New evidence shows that the key assumption made in the discovery of dark energy is in error

Injecting the flu vaccine into a tumor gets the immune system to attack it

Now, some researchers have focused on the immune response, inducing it at the site of the tumor. And they do so by a remarkably simple method: injecting the tumor with the flu vaccine. As a bonus, the mice it was tested on were successfully immunized, too.

Revving up the immune system

This is one of those ideas that seems nuts but had so many earlier results pointing toward it working that it was really just a matter of time before someone tried it. To understand it, you have to overcome the idea that the immune system is always diffuse, composed of cells that wander the bloodstream. Instead, immune cells organize at the sites of infections (or tumors), where they communicate with each other to both organize an attack and limit that attack so that healthy tissue isn’t also targeted.

From this perspective, the immune system’s inability to eliminate tumor cells isn’t only the product of their similarities to healthy cells. It’s also the product of the signaling networks that help restrain the immune system to prevent it from attacking normal cells. A number of recently developed drugs help release this self-imposed limit, winning their developers Nobel Prizes in the process. These drugs convert a “cold” immune response, dominated by signaling that shuts things down, into a “hot” one that is able to attack a tumor.

[…]

To check whether something similar might be happening in humans, the researchers identified over 30,000 people being treated for lung cancer and found those who also received an influenza diagnosis. You might expect that the combination of the flu and cancer would be very difficult for those patients, but instead, they had lower mortality than the patients who didn’t get the flu.

[…]

The researchers obtained this year’s flu vaccine and injected it into the sites of tumors in mice. Not only was tumor growth slowed, but the mice ended up immune to the flu virus.

Oddly, this wasn’t true for every flu vaccine. Some vaccines contain chemicals, called adjuvants, that enhance the immune system’s memory, promoting the formation of a long-term response to pathogens. When a vaccine containing one of these chemicals was used, the immune system wasn’t stimulated to limit the tumors’ growth.

This suggests that it’s less a matter of stimulating the immune system and more an issue of triggering it to attack immediately. But this is one of the things that will need to be sorted out with further study.

[…]

Source: Injecting the flu vaccine into a tumor gets the immune system to attack it | Ars Technica

Fresh Cambridge Analytica leak ‘shows global manipulation is out of control’

An explosive leak of tens of thousands of documents from the defunct data firm Cambridge Analytica is set to expose the inner workings of the company that collapsed after the Observer revealed it had misappropriated 87 million Facebook profiles.

More than 100,000 documents relating to work in 68 countries that will lay bare the global infrastructure of an operation used to manipulate voters on “an industrial scale” are set to be released over the next months.

It comes as Christopher Steele, the ex-head of MI6’s Russia desk and the intelligence expert behind the so-called “Steele dossier” into Trump’s relationship with Russia, said that while the company had closed down, the failure to properly punish bad actors meant that the prospects for manipulation of the US election this year were even worse.

The release of documents began on New Year’s Day on an anonymous Twitter account, @HindsightFiles, with links to material on elections in Malaysia, Kenya and Brazil. The documents were revealed to have come from Brittany Kaiser, an ex-Cambridge Analytica employee turned whistleblower, and to be the same ones subpoenaed by Robert Mueller’s investigation into Russian interference in the 2016 presidential election.

Source: Fresh Cambridge Analytica leak ‘shows global manipulation is out of control’ | UK news | The Guardian

U.S. government limits exports of artificial intelligence software – seem to have forgotten what happened when they limited cryptographic exports in the 90s

The Trump administration will make it more difficult to export artificial intelligence software as of next week, part of a bid to keep sensitive technologies out of the hands of rival powers like China.

Under a new rule that goes into effect on Monday, companies that export certain types of geospatial imagery software from the United States must apply for a license to send it overseas except when it is being shipped to Canada.

The measure is the first to be finalized by the Commerce Department under a mandate from a 2018 law, which tasked the agency with writing rules to boost oversight of exports of sensitive technology to adversaries like China, for economic and security reasons.

Reuters first reported that the agency was finalizing a set of narrow rules to limit such exports, a boon to U.S. industry, which feared a much tougher crackdown on sales abroad.

Source: U.S. government limits exports of artificial intelligence software – Reuters

Just in case you forgot about encryption products, Clipper chips etc: US products were weakened with backdoors, which meant a) no-one wanted US products and b) there was wildfire growth of non-US encryption products. So basically the US goal of limiting cryptography failed, and at a cost to US producers.

Bosch’s LCD Car Visor Only Blocks Your View of the Road Where the Sun Is In Your Eyes

Instead of a rigid panel wrapped in fabric, Bosch’s Virtual Visor features an LCD panel that can be flipped down when the sun is hanging out on the horizon. The panel works alongside a camera pointed at the driver’s face; the camera’s live video feed is processed by a custom-trained AI that recognizes facial features like the nose, mouth, and, most importantly, the eyes. The system spots shadows cast on the driver’s eyes and uses this to darken only the areas of the LCD visor where intense sunlight would otherwise pass through and impair the driver’s vision. The darkened region of the visor constantly changes based on both the vehicle’s and the driver’s movements, while the rest remains transparent to provide a less obstructed view of the road and other vehicles ahead.
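The geometry behind this is straightforward to sketch. The following is a hedged toy model, not Bosch's implementation: given the eye position from face tracking and a unit vector toward the sun, darken the one visor grid cell that the eye-to-sun line passes through. The grid size, visor dimensions, and distances are all invented for illustration.

```python
GRID_W, GRID_H = 12, 4          # LCD visor divided into a coarse cell grid (assumed)
VISOR_Z = 0.5                   # visor plane 0.5 m in front of the driver (assumed)
VISOR_W, VISOR_H = 0.30, 0.10   # visor size in metres (assumed)

def cell_to_darken(eye, sun_dir):
    """eye: (x, y, z) from camera face tracking; sun_dir: unit vector toward the sun."""
    ex, ey, ez = eye
    dx, dy, dz = sun_dir
    if dz <= 0:                      # sun not in front of the driver
        return None
    t = (VISOR_Z - ez) / dz          # ray eye + t * sun_dir hits the visor plane
    x, y = ex + t * dx, ey + t * dy
    # Map the intersection point into grid coordinates (visor centred on x = 0, y = 0).
    col = int((x + VISOR_W / 2) / VISOR_W * GRID_W)
    row = int((y + VISOR_H / 2) / VISOR_H * GRID_H)
    if 0 <= col < GRID_W and 0 <= row < GRID_H:
        return row, col
    return None                      # glare misses the visor; keep everything clear

print(cell_to_darken(eye=(0.0, 0.0, 0.0), sun_dir=(0.1, 0.05, 0.99)))
```

Re-running this every frame as the face tracker updates the eye position is what makes the darkened patch appear to follow the sun.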

The Virtual Visor actually started life as a side project for three of Bosch’s powertrain engineers, who developed it in their free time and harvested the parts they needed from a discarded computer monitor. When the feature will start showing up as an option in new cars remains to be seen—if ever. If you’ve ever dropped your phone or poked at a screen too hard, you’re already aware of how fragile LCD panels can be, so there will need to be lots of in-vehicle testing before this ever goes mainstream. But it’s a clever innovation using technology that at this point is relatively cheap and readily available, so hopefully this is an upgrade that’s not too far away.

Source: Bosch’s LCD Car Visor Only Blocks Your View of the Road Where the Sun Is In Your Eyes

Smart speaker maker Sonos takes heat for deliberately bricking older kit with ‘Trade Up’ plan

Soundbar and smart-speaker-flinger Sonos is starting the new year with the wrong kind of publicity.

Customers and netizens are protesting against its policy of deliberately rendering working systems unusable, which is bad for the environment as it sends devices prematurely to an electronic waste graveyard.

The policy is also hazardous for those who unknowingly purchase a disabled device on the second-hand market, or even for users who perhaps mistake “recycle” for “reset”.

The culprit is Sonos’s so-called “Trade Up Program” which gives customers a 30 per cent discount off a new device, provided they follow steps to place their existing hardware into “Recycle mode”. Sonos has explained that “when you recycle an eligible Sonos product, you are choosing to permanently deactivate it. Once you confirm you’d like to recycle your product, the decision cannot be reversed.” There is a 21-day countdown (giving you time to receive your shiny new hardware) and then it is useless, “even if the product has been reset to its factory settings.”

Sonos suggests taking the now useless gadget to a local e-waste recycling centre, or sending it back to Sonos, though it remarks that scrapping it locally is “more eco-friendly than shipping it to Sonos”. In fact, agreeing either to return it or to use a “certified electronics recycler” is part of the terms and conditions, though the obvious question is how well this is enforced or whether customers even notice this detail when participating in the scheme.

The truth of course is that no recycling option is eco-friendly in comparison to someone continuing to enjoy the device doing what it does best, which is to play music. Even if a user is conscientious about finding an electronic waste recycling centre, there is a human and environmental cost involved, and not all parts can be recycled.

Sonos has posted on the subject of sustainability and has a “director of sustainability”, Mark Heintz, making its “Trade Up” policy even harder to understand.

Why not allow these products to be resold or reused? Community manager Ryan S said: “While we’re proud of how long our products last, we don’t really want these old, second-hand products to be the first experience a new customer has with Sonos.”

While this makes perfect business sense for Sonos, it is a weak rationale from an environmental perspective. Reactions like this one on Twitter are common. “I’ve bought and recommended my last Sonos product. Please change your practice, at the very least be honest about it and don’t flash the sustainability card for something that’s clearly not.”

Source: Smart speaker maker Sonos takes heat for deliberately bricking older kit with ‘Trade Up’ plan • The Register

The World’s Largest Floating Wind Farm Is Here

This is the second day of the new decade, and the world’s largest floating wind farm is already doing its damn thing and generating electricity.

Located off the coast of Portugal, the WindFloat Atlantic wind farm connected to the grid on New Year’s Eve. And this is only the first of the project’s three platforms. Once all go online, the floating wind farm will be able to produce enough energy for about 60,000 homes a year. Like many European countries (including Denmark and the UK), Portugal has been investing heavily in wind as a viable clean energy option.

Source: The World’s Largest Floating Wind Farm Is Here

This particle accelerator fits on the head of a pin

If you know nothing else about particle accelerators, you probably know that they’re big — sometimes miles long. But a new approach from Stanford researchers has led to an accelerator shorter from end to end than a human hair is wide.

The general idea behind particle accelerators is that they’re a long line of radiation emitters that smack the target particle with radiation at the exact right time to propel it forward a little faster than before. The problem is that depending on the radiation you use and the speed and resultant energy you want to produce, these things can get real big, real fast.

That also limits their applications; you can’t exactly put a particle accelerator in your lab or clinic if they’re half a kilometer long and take megawatts to run. Something smaller could be useful, even if it was nowhere near those power levels — and that’s what these Stanford scientists set out to make.

 

“We want to miniaturize accelerator technology in a way that makes it a more accessible research tool,” explained project lead Jelena Vuckovic in a Stanford news release.

But this wasn’t designed like a traditional particle accelerator like the Large Hadron Collider or one at collaborator SLAC’s National Accelerator Laboratory. Instead of engineering it from the bottom up, they fed their requirements to an “inverse design algorithm” that produced the kind of energy pattern they needed from the infrared radiation emitters they wanted to use.

That’s partly because infrared radiation has a much shorter wavelength than something like microwaves, meaning the mechanisms themselves can be made much smaller — perhaps too small to adequately design the ordinary way.

The algorithm’s solution to the team’s requirements led to an unusual structure that looks more like a Rorschach test than a particle accelerator. But these blobs and channels are precisely contoured to guide infrared laser light pulses in such a way that they push electrons along the center up to a significant proportion of the speed of light.

The resulting “accelerator on a chip” is only a few dozen microns across, making it comfortably smaller than a human hair and more than possible to stack a few on the head of a pin. A couple thousand of them, really.

And it will take a couple thousand to get the electrons up to the energy levels needed to be useful — but don’t worry, that’s all part of the plan. The chips are fully integrated but can easily be placed in series to create longer assemblies that reach higher energies.
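The "couple thousand chips" figure is just linear scaling, which a back-of-envelope sketch makes explicit. The per-stage gain and target energy below are illustrative assumptions, not numbers from the paper: if each stage adds a fixed energy kick, stages in series add linearly.

```python
# Back-of-envelope (illustrative numbers, not from the paper): if each
# chip stage adds a fixed energy gain, stages in series add linearly.
gain_per_chip_keV = 1.0      # assumed energy gain per stage
target_MeV = 1.0             # e.g. an energy scale useful for clinical work

chips_needed = target_MeV * 1000 / gain_per_chip_keV
print(f"{chips_needed:.0f} chips in series")
```

With a gain of a keV per stage, reaching MeV-scale energies takes on the order of a thousand chips — consistent with the article's "couple thousand," and still tiny next to a kilometre-scale machine.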

These won’t be rivaling macro-size accelerators like SLAC’s or the Large Hadron Collider, but they could be much more useful for research and clinical applications where planet-destroying power levels aren’t required. For instance, a chip-sized electron accelerator might be able to direct radiation into a tumor surgically rather than through the skin.

The team’s work is published in a paper today in the journal Science.

Source: This particle accelerator fits on the head of a pin – TechCrunch

Government exposes addresses of > 1000 new year honours recipients

More than 1,000 celebrities, government employees and politicians who have received honours had their home and work addresses posted on a government website, the Guardian can reveal.

The accidental disclosure of the tranche of personal details is likely to be considered a significant security breach, particularly as senior police and Ministry of Defence staff were among those whose addresses were made public.

Many of the more than a dozen MoD employees and senior counter-terrorism officers who received honours in the new year list had their home addresses revealed in a downloadable list, along with countless others who may believe the disclosure has put them in a vulnerable position.

Prominent public figures including the musician Elton John, the cricketer Ben Stokes, NHS England’s chief executive, Simon Stevens, the politicians Iain Duncan Smith and Diana Johnson, TV chef Nadiya Hussain, and the former director of public prosecutions Alison Saunders were among those whose home addresses were published.

Others included Jonathan Jones, the permanent secretary of the government’s legal department, and John Manzoni, the Cabinet Office permanent secretary. Less well-known figures included academics, Holocaust survivors, prison staff and community and faith leaders.

It is thought the document seen by the Guardian, which contains the details of 1,097 people, went online at 10.30pm on Friday and was taken down in the early hours of Saturday.

The vast majority of people on the list had their house numbers, street names and postcodes included.

Source: Government exposes addresses of new year honours recipients | UK news | The Guardian

Wyze data leak may have exposed personal data of millions of users

Security camera startup Wyze has confirmed it suffered a data leak this month that may have left the personal information of millions of its customers exposed on the internet. No passwords or financial information were exposed, but email addresses, Wi-Fi network IDs and body metrics were left unprotected from Dec. 4 through Dec. 26, the company said Friday.

More than 2.4 million Wyze customers were affected by the leak, according to cybersecurity firm Twelve Security, which first reported on the leak.

“We are still looking into this event to figure out why and how this happened,” Wyze co-founder Dongsheng Song wrote.

In an update Sunday, Song said Wyze discovered a second unprotected database during its investigation of the data leak. It’s unclear what information was stored in this database, but Song said passwords and personal financial data weren’t included.

Source: Wyze data leak may have exposed personal data of millions of users – CNET

Researchers detail AI that de-hazes and colorizes underwater photos

Ever notice that underwater images tend to be blurry and somewhat distorted? That’s because phenomena like light attenuation and back-scattering adversely affect visibility. To remedy this, researchers at Harbin Engineering University in China devised a machine learning algorithm that generates realistic water images, along with a second algorithm that trains on those images to both restore natural color and reduce haze. They say that their approach qualitatively and quantitatively matches the state of the art, and that it’s able to process upwards of 125 frames per second running on a single graphics card.

The team notes that most underwater image enhancement algorithms (such as those that adjust white balance) aren’t based on physical imaging models, making them poorly suited to the task. By contrast, this approach taps a generative adversarial network (GAN) — an AI model consisting of a generator that attempts to fool a discriminator into classifying synthetic samples as real-world samples — to produce a set of images of specific survey sites that are fed into a second algorithm, called U-Net.
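The physical imaging model the article alludes to is commonly written, per colour channel, as I = J·t + B·(1−t), where I is the observed pixel, J the haze-free scene radiance, t the transmission through the water, and B the backscattered water-light colour. Given estimates of t and B (which is the hard part, and what the learned models help with), dehazing is just inverting that equation. A single-pixel toy with invented values:

```python
def dehaze_pixel(I, t, B):
    """Invert I_c = J_c * t_c + B_c * (1 - t_c) per channel to recover J."""
    return tuple((i - b * (1 - tc)) / tc for i, tc, b in zip(I, t, B))

# Illustrative RGB values: red light is attenuated most underwater,
# while the backscatter is dominated by blue-green water colour.
observed = (0.35, 0.55, 0.60)
transmission = (0.30, 0.60, 0.70)   # per-channel transmission estimate (assumed)
backscatter = (0.10, 0.50, 0.60)    # water-light colour estimate (assumed)

print(dehaze_pixel(observed, transmission, backscatter))
```

Note how the recovered red channel comes out much brighter than observed: dividing by the small red transmission is what restores the warm colours the water absorbed, which is exactly the "recovers green-toned images" behaviour described below.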

The team trained the GAN on a corpus of labeled scenes containing 3,733 images and corresponding depth maps, chiefly of scallops, sea cucumbers, sea urchins, and other such organisms living within indoor marine farms. They also sourced open data sets including NYU Depth, which comprises thousands of underwater photographs in total.

Post-training, the researchers compared the results of their twin-model approach to that of baselines. They point out that their technique has advantages in that it’s uniform in its color restoration, and that it recovers green-toned images well without destroying the underlying structure of the original input image. It also generally manages to recover color while maintaining “proper” brightness and contrast, a task at which competing solutions aren’t particularly adept.

It’s worth noting that the researchers’ method isn’t the first to reconstruct frames from damaged footage. Cambridge Consultants’ DeepRay leverages a GAN trained on a data set of 100,000 still images to remove distortion introduced by an opaque pane of glass, and the open source DeOldify project employs a family of AI models including GANs to colorize and restore old images and film footage. Elsewhere, scientists at Microsoft Research Asia in September detailed an end-to-end system for autonomous video colorization; researchers at Nvidia last year described a framework that infers colors from just one colorized and annotated video frame; and Google AI in June introduced an algorithm that colorizes grayscale videos without manual human supervision.

Source: Researchers detail AI that de-hazes and colorizes underwater photos

France slaps Google with $166M antitrust fine for opaque and inconsistent ad rules

France’s competition watchdog has slapped Google with a €150 million (~$166 million) fine after finding the tech giant abused its dominant position in the online search advertising market.

In a decision announced today — following a lengthy investigation into the online ad sector — the competition authority sanctioned Google for adopting what it describes as “opaque and difficult to understand” operating rules for its ad platform, Google Ads, and for applying them in “an unfair and random manner.”

The watchdog has ordered Google to clarify how it draws up rules for the operation of Google Ads and its procedures for suspending accounts. The tech giant will also have to put in place measures to prevent, detect and deal with violations of Google Ads rules.

A Google spokesman told TechCrunch the company will appeal the decision.

The decision — which comes hard on the heels of a market study report by the U.K.’s competition watchdog asking for views on whether Google should be broken up — relates to search ads: the ads served alongside organic results when someone uses Google’s search engine.

More specifically, it relates to the rules Google applies to its Ads platform which set conditions under which advertisers can broadcast ads — rules the watchdog found to be confusing and inconsistently applied.

It also found Google had changed its position on the interpretation of the rules over time, which it said generated instability for some advertisers who were kept in a situation of legal and economic insecurity.

In France, Google holds a dominant position in the online search market — its search engine is responsible for more than 90% of searches carried out — and holds more than 80% of the online ad market linked to searches, per the watchdog, which notes that this dominance obliges it to define the operating rules of its ad platform in an objective, transparent and non-discriminatory manner.

However, it found Google’s wording of ad rules failed to live up to that standard — saying it is “not based on any precise and stable definition, which gives Google full latitude to interpret them according to situations.”

Explaining its decision in a press release, the Autorité de la Concurrence writes [translated by Google Translate]:

[T]he French Competition Authority considers that the Google Ads operating rules imposed by Google on advertisers are established and applied under non-objective, non-transparent and discriminatory conditions. The opacity and lack of objectivity of these rules make it very difficult for advertisers to apply them, while Google has all the discretion to modify its interpretation of the rules in a way that is difficult to predict, and decide accordingly whether the sites comply with them or not. This allows Google to apply them in a discriminatory or inconsistent manner. This leads to damage both for advertisers and for search engine users.

The watchdog’s multi-year investigation of the online ad sector was instigated after a complaint by a company called Gibmedia — which raised an objection more than four years ago after Google closed its Google Ads account without notice.

Source: France slaps Google with $166M antitrust fine for opaque and inconsistent ad rules | TechCrunch

Twitter Warns Millions of Android App Users to Update Immediately

This week, Twitter confirmed a vulnerability in its Android app that could let hackers see your “nonpublic account information” and commandeer your account to send tweets and direct messages.

According to a Twitter Privacy Center blog posted Friday, the (recently patched) security issue could allow hackers to gain control of an account and access data like location information and protected tweets “through a complicated process involving the insertion of malicious code into restricted storage areas of the Twitter app,” potentially putting the app’s millions of users at risk. A tweet from Twitter support later elaborated that the issue was fixed for Android version 7.93.4 (released in November for KitKat) as well as version 8.18 (released in October for Lollipop and newer).

Source: Twitter Warns Millions of Android App Users to Update Immediately

Chinese hacker group caught bypassing 2FA

Security researchers say they found evidence that a Chinese government-linked hacking group has been bypassing two-factor authentication (2FA) in a recent wave of attacks.

The attacks have been attributed to a group the cyber-security industry is tracking as APT20, believed to operate at the behest of the Beijing government, Dutch cyber-security firm Fox-IT said in a report published last week.

The group’s primary targets were government entities and managed service providers (MSPs). The government entities and MSPs were active in fields like aviation, healthcare, finance, insurance, energy, and even something as niche as gambling and physical locks.

Recent APT20 activity

The Fox-IT report comes to fill in a gap in the group’s history. APT20’s hacking goes back to 2011, but researchers lost track of the group’s operations in 2016-2017, when they changed their mode of operation.

Fox-IT’s report documents what the group has been doing over the past two years and how they’ve been doing it.

According to researchers, the hackers used web servers as the initial point of entry into a target’s systems, with a particular focus on JBoss, an enterprise application platform often found in large corporate and government networks.

APT20 used vulnerabilities to gain access to these servers, install web shells, and then spread laterally through a victim’s internal systems.

While on the inside, Fox-IT said the group dumped passwords and looked for administrator accounts, in order to maximize their access. A primary concern was obtaining VPN credentials, so hackers could escalate access to more secure areas of a victim’s infrastructure, or use the VPN accounts as more stable backdoors.

Fox-IT said that despite what appears to be a very prodigious hacking activity over the past two years, “overall the actor has been able to stay under the radar.”

They did so, researchers explain, by using legitimate tools that were already installed on hacked devices, rather than downloading their own custom-built malware, which could have been detected by local security software.

APT20 seen bypassing 2FA

But this wasn’t the thing that stood out the most in all the attacks the Dutch security firm investigated. Fox-IT analysts said they found evidence the hackers connected to VPN accounts protected by 2FA.

How they did it remains unclear; although, the Fox-IT team has their theory. They said APT20 stole an RSA SecurID software token from a hacked system, which the Chinese actor then used on its computers to generate valid one-time codes and bypass 2FA at will.

Normally, this wouldn’t be possible. To use one of these software tokens, the user would need to connect a physical (hardware) device to their computer. The device and the software token would then generate a valid 2FA code. If the device was missing, the RSA SecurID software would generate an error.

[Image: rsa-passcode-error.png (source: Fox-IT)]

The Fox-IT team explains how hackers might have gone around this issue:

The software token is generated for a specific system, but of course this system specific value could easily be retrieved by the actor when having access to the system of the victim.

As it turns out, the actor does not actually need to go through the trouble of obtaining the victim’s system specific value, because this specific value is only checked when importing the SecurID Token Seed, and has no relation to the seed used to generate actual 2-factor tokens. This means the actor can actually simply patch the check which verifies if the imported soft token was generated for this system, and does not need to bother with stealing the system specific value at all.

In short, all the actor has to do to make use of the 2 factor authentication codes is to steal an RSA SecurID Software Token and to patch 1 instruction, which results in the generation of valid tokens.

[Image: rsa-passcode.png (source: Fox-IT)]
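The structural flaw Fox-IT describes can be captured in a toy model. This is emphatically not RSA's actual algorithm — just a hedged sketch of the same design weakness: the one-time code is derived from the seed and the clock alone, while the "was this token generated for this machine?" check is a separate gate that can simply be skipped (or patched out), leaving code generation fully functional.

```python
import hashlib
import hmac
import time

def one_time_code(seed: bytes, t: int) -> str:
    """Toy token generator: codes depend only on the seed and a 30 s time step."""
    mac = hmac.new(seed, str(t // 30).encode(), hashlib.sha256).hexdigest()
    return str(int(mac, 16))[-6:]

def import_token(seed: bytes, bound_fingerprint: str, this_machine: str) -> bytes:
    # The binding check an attacker would patch out: it gates *import*,
    # not code generation, so removing it still yields valid codes.
    if bound_fingerprint != this_machine:
        raise RuntimeError("token not generated for this system")
    return seed

seed = b"stolen-from-victim"      # the exfiltrated software token seed
# The attacker never calls import_token() at all, and still gets a valid code:
print(one_time_code(seed, int(time.time())))
```

Because the binding value "has no relation to the seed used to generate actual 2-factor tokens," bypassing the single check is enough, which is exactly the one-instruction patch the report describes.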

Wocao

Fox-IT said it was able to investigate APT20’s attacks because it was called in by one of the hacked companies to help investigate and respond to the hacks.

More on these attacks can be found in a report named “Operation Wocao.”

Source: Chinese hacker group caught bypassing 2FA | ZDNet

Twelve Million Phones, One Dataset (no, not your phone companies’), Zero Privacy – The New York Times

Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.

[Related: How to Track President Trump — Read more about the national security risks found in the data.]

After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.

One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.

If you lived in one of the cities the dataset covers and use apps that share your location — anything from weather apps to local news apps to coupon savers — you could be in there, too.

If you could see the full trove, you might never use your phone the same way again.

[Animation: a typical day at Grand Central Terminal in New York City. Satellite imagery: Microsoft]

The data reviewed by Times Opinion didn’t come from a telecom or giant tech company, nor did it come from a governmental surveillance operation. It originated from a location data company, one of dozens quietly collecting precise movements using software slipped onto mobile phone apps. You’ve probably never heard of most of the companies — and yet to anyone who has access to this data, your life is an open book. They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist’s office or a massage parlor.

[…]

The companies that collect all this information on your movements justify their business on the basis of three claims: People consent to be tracked, the data is anonymous and the data is secure.

None of those claims hold up, based on the file we’ve obtained and our review of company practices.

Yes, the location data contains billions of data points with no identifiable information like names or email addresses. But it’s child’s play to connect real names to the dots that appear on the maps.

[…]

In most cases, ascertaining a home location and an office location was enough to identify a person. Consider your daily commute: Would any other smartphone travel directly between your house and your office every day?
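The re-identification idea in the paragraph above can be sketched in a few lines. This is a hedged illustration, not the Times' methodology: the coordinates and the notion of a "cell" (coordinates rounded to about a city block) are invented, and a real analysis would also use dwell times of day to separate home from office. It shows how a device's two most-visited locations can be matched against a publicly known home/work address pair:

```python
from collections import Counter

# Sketch of home/work-pair re-identification. All coordinates are made up.

def top_two_locations(pings):
    """The two most frequent rounded (lat, lon) cells in one device's trail."""
    cells = Counter((round(lat, 3), round(lon, 3)) for lat, lon in pings)
    return {cell for cell, _ in cells.most_common(2)}

# An "anonymous" device trail: mostly at two cells, plus a little noise.
device_pings = [(38.889, -77.035)] * 40 + [(38.907, -77.043)] * 30 + [(38.950, -77.000)]

# Publicly known home and work addresses for a hypothetical person of interest,
# geocoded to the same rounded cells.
known_pair = {(38.889, -77.035), (38.907, -77.043)}

if top_two_locations(device_pings) == known_pair:
    print("device matches the known home/work pair")
```

Because almost no two people share both a home and a workplace cell, this pair alone is usually a unique fingerprint, which is the article's point.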

Describing location data as anonymous is “a completely false claim” that has been debunked in multiple studies, Paul Ohm, a law professor and privacy researcher at the Georgetown University Law Center, told us. “Really precise, longitudinal geolocation information is absolutely impossible to anonymize.”

“D.N.A.,” he added, “is probably the only thing that’s harder to anonymize than precise geolocation information.”


Yet companies continue to claim that the data are anonymous. In marketing materials and at trade conferences, anonymity is a major selling point — key to allaying concerns over such invasive monitoring.

To evaluate the companies’ claims, we turned most of our attention to identifying people in positions of power. With the help of publicly available information, like home addresses, we easily identified and then tracked scores of notables. We followed military officials with security clearances as they drove home at night. We tracked law enforcement officers as they took their kids to school. We watched high-powered lawyers (and their guests) as they traveled from private jets to vacation properties. We did not name any of the people we identified without their permission.

The data set is large enough that it surely points to scandal and crime, but our purpose wasn’t to dig up dirt. We wanted to document the risk of underregulated surveillance.

Watching dots move across a map sometimes revealed hints of faltering marriages, evidence of drug addiction, records of visits to psychological facilities.

Connecting a sanitized ping to an actual human in time and place could feel like reading someone else’s diary.

[…]

The inauguration weekend yielded a trove of personal stories and experiences: elite attendees at presidential ceremonies, religious observers at church services, supporters assembling across the National Mall — all surveilled and recorded permanently in rigorous detail.

Protesters were tracked just as rigorously. After the pings of Trump supporters, basking in victory, vanished from the National Mall on Friday evening, they were replaced hours later by those of participants in the Women’s March, as a crowd of nearly half a million descended on the capital. Examining just a photo from the event, you might be hard-pressed to tie a face to a name. But in our data, pings at the protest connected to clear trails through the data, documenting the lives of protesters in the months before and after the protest, including where they lived and worked.

[…]

Inauguration Day weekend was marked by other protests — and riots. Hundreds of protesters, some in black hoods and masks, gathered north of the National Mall that Friday, eventually setting fire to a limousine near Franklin Square. The data documented those rioters, too. Filtering the data to that precise time and location led us to the doorsteps of some who were there. Police were present as well, many with faces obscured by riot gear. The data led us to the homes of at least two police officers who had been at the scene.
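The filtering step described above is conceptually simple: restrict the pings to a time window and a small bounding box around the event, then follow the surviving devices' full trails. The sketch below uses invented device IDs, timestamps, and coordinates (loosely placed near Franklin Square) purely to illustrate the mechanics:

```python
from datetime import datetime

# Sketch of filtering location pings to a precise time and place.
# All devices, timestamps, and coordinates are invented.

pings = [
    {"device": "a", "ts": datetime(2017, 1, 20, 10, 47), "lat": 38.9016, "lon": -77.0319},
    {"device": "b", "ts": datetime(2017, 1, 20, 3, 5),   "lat": 38.9016, "lon": -77.0319},
    {"device": "c", "ts": datetime(2017, 1, 20, 10, 50), "lat": 38.8000, "lon": -77.2000},
]

# Window and bounding box around the event of interest.
start, end = datetime(2017, 1, 20, 10, 0), datetime(2017, 1, 20, 12, 0)
box = {"lat": (38.900, 38.903), "lon": (-77.034, -77.030)}

at_scene = [
    p for p in pings
    if start <= p["ts"] <= end
    and box["lat"][0] <= p["lat"] <= box["lat"][1]
    and box["lon"][0] <= p["lon"] <= box["lon"][1]
]
# Only device "a" survives both filters; its months-long trail can then be
# followed to a likely home address.
```

The danger the article identifies is exactly this second step: once a device is pinned to one sensitive moment, its entire history before and after is exposed.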

As revealing as our searches of Washington were, we were relying on just one slice of data, sourced from one company, focused on one city, covering less than one year. Location data companies collect orders of magnitude more information every day than the totality of what Times Opinion received.

Data firms also typically draw on other sources of information that we didn’t use. We lacked the mobile advertising IDs or other identifiers that advertisers often combine with demographic information like home ZIP codes, age, gender, even phone numbers and emails to create detailed audience profiles used in targeted advertising. When datasets are combined, privacy risks can be amplified. Whatever protections existed in the location dataset can crumble with the addition of only one or two other sources.

There are dozens of companies profiting off such data daily across the world — by collecting it directly from smartphones, creating new technology to better capture the data or creating audience profiles for targeted advertising.

The full collection of companies can feel dizzying, as it’s constantly changing and seems impossible to pin down. Many use technical and nuanced language that may be confusing to average smartphone users.

While many of them have been involved in the business of tracking us for years, the companies themselves are unfamiliar to most Americans. (Companies can work with data derived from GPS sensors, Bluetooth beacons and other sources. Not all companies in the location data business collect, buy, sell or work with granular location data.)

[Graphic: a selection of companies working in the location data business. Sources: MightySignal, LUMA Partners and AppFigures]

Location data companies generally downplay the risks of collecting such revealing information at scale. Many also say they’re not very concerned about potential regulation or software updates that could make it more difficult to collect location data.

[…]

Does it really matter that your information isn’t actually anonymous? Location data companies argue that your data is safe — that it poses no real risk because it’s stored on guarded servers. This assurance has been undermined by the parade of publicly reported data breaches — to say nothing of breaches that don’t make headlines. In truth, sensitive information can be easily transferred or leaked, as evidenced by this very story.

We’re constantly shedding data, for example, by surfing the internet or making credit card purchases. But location data is different. Our precise locations are used fleetingly in the moment for a targeted ad or notification, but then repurposed indefinitely for much more profitable ends, like tying your purchases to billboard ads you drove past on the freeway. Many apps that use your location, like weather services, work perfectly well without your precise location — but collecting your location feeds a lucrative secondary business of analyzing, licensing and transferring that information to third parties.

The data contains simple information like date, latitude and longitude, making it easy to inspect, download and transfer. Note: Values are randomized to protect sources and device owners.

For many Americans, the only real risk they face from having their information exposed would be embarrassment or inconvenience. But for others, like survivors of abuse, the risks could be substantial. And who can say what practices or relationships any given individual might want to keep private, to withhold from friends, family, employers or the government? We found hundreds of pings in mosques and churches, abortion clinics, queer spaces and other sensitive areas.

In one case, we observed a change in the regular movements of a Microsoft engineer. He made a visit one Tuesday afternoon to the main Seattle campus of a Microsoft competitor, Amazon. The following month, he started a new job at Amazon. It took minutes to identify him as Ben Broili, a manager now for Amazon Prime Air, a drone delivery service.

“I can’t say I’m surprised,” Mr. Broili told us in early December. “But knowing that you all can get ahold of it and comb through and place me to see where I work and live — that’s weird.” That we could so easily discern that Mr. Broili was out on a job interview raises some obvious questions, like: Could the internal location surveillance of executives and employees become standard corporate practice?

[…]

If this kind of location data makes it easy to keep tabs on employees, it makes it just as simple to stalk celebrities. Their private conduct — even in the dead of night, in residences and far from paparazzi — could come under even closer scrutiny.

Reporters hoping to evade other forms of surveillance by meeting in person with a source might want to rethink that practice. Every major newsroom covered by the data contained dozens of pings; we easily traced one Washington Post journalist through Arlington, Va.

In other cases, there were detours to hotels and late-night visits to the homes of prominent people. One person, plucked from the data in Los Angeles nearly at random, was found traveling to and from roadside motels multiple times, for visits of only a few hours each time.

While these pointillist pings don’t in themselves reveal a complete picture, a lot can be gleaned by examining the date, time and length of time at each point.

Large data companies like Foursquare — perhaps the most familiar name in the location data business — say they don’t sell detailed location data like the kind reviewed for this story but rather use it to inform analysis, such as measuring whether you entered a store after seeing an ad on your mobile phone.

But a number of companies do sell the detailed data. Buyers are typically data brokers and advertising companies. But some of them have little to do with consumer advertising, including financial institutions, geospatial analysis companies and real estate investment firms that can process and analyze such large quantities of information. They might pay more than $1 million for a tranche of data, according to a former location data company employee who agreed to speak anonymously.

Location data is also collected and shared alongside a mobile advertising ID, a supposedly anonymous identifier about 30 digits long that allows advertisers and other businesses to tie activity together across apps. The ID is also used to combine location trails with other information like your name, home address, email, phone number or even an identifier tied to your Wi-Fi network.
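Mechanically, the join described above is trivial once two datasets share the same advertising ID. The sketch below is a hedged illustration with an invented ID and invented profile records, not any broker's actual schema; it shows that a single dictionary lookup per ping de-anonymizes an entire trail:

```python
# Sketch of joining ad-ID-keyed location pings with a second dataset that maps
# the same "anonymous" ID to identity details. All IDs and records are invented.

pings = [
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d", "lat": 40.7527, "lon": -73.9772},
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d", "lat": 40.7484, "lon": -73.9857},
]

# A second broker's dataset tying the same ID to a person.
profiles = {
    "38400000-8cf0-11bd-b23e-10b96e40000d": {"name": "Jane Doe", "zip": "10017"},
}

# One lookup per ping attaches a name and home ZIP to every location.
enriched = [{**p, **profiles.get(p["ad_id"], {})} for p in pings]
```

This is why the article argues that combining datasets amplifies privacy risk: each set may look harmless alone, yet the shared identifier makes the merge effortless.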

The data can change hands in almost real time, so fast that your location could be transferred from your smartphone to the app’s servers and exported to third parties in milliseconds. This is how, for example, you might see an ad for a new car some time after walking through a dealership.

That data can then be resold, copied, pirated and abused. There’s no way you can ever retrieve it.

Location data is about far more than consumers seeing a few more relevant ads. This information provides critical intelligence for big businesses. The Weather Channel app’s parent company, for example, analyzed users’ location data for hedge funds, according to a lawsuit filed in Los Angeles this year that was triggered by Times reporting. And Foursquare received much attention in 2016 after using its data trove to predict that after an E. coli crisis, Chipotle’s sales would drop by 30 percent in the coming months. Its same-store sales ultimately fell 29.7 percent.

Much of the concern over location data has focused on telecom giants like Verizon and AT&T, which have been selling location data to third parties for years. Last year, Motherboard, Vice’s technology website, found that once the data was sold, it was being shared to help bounty hunters find specific cellphones in real time. The resulting scandal forced the telecom giants to pledge they would stop selling location movements to data brokers.

Yet no law prohibits them from doing so.

[…]

If this information is so sensitive, why is it collected in the first place?

For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.

Once they have the complete customer journey, companies know a lot about what we want, what we buy and what made us buy it. Other groups have begun to find ways to use it too. Political campaigns could analyze the interests and demographics of rally attendees and use that information to shape their messages to try to manipulate particular groups. Governments around the world could have a new tool to identify protesters.

Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.

For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.

Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions and potentially expose our private lives?

Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?

What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?

If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?

Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?

The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.

Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself.

Source: Opinion | Twelve Million Phones, One Dataset, Zero Privacy – The New York Times