At Blind, a whistleblower site, a security lapse revealed private complaints from Silicon Valley employees. Turns out it’s not very safe to blow your whistle there after all.

Thousands of people trusted Blind, an app-based “anonymous social network,” as a safe way to reveal malfeasance, wrongdoing and improper conduct at their companies. But Blind left one of its database servers exposed without a password, making it possible (for anyone who knew where to look) to access each user’s account information and identify would-be whistleblowers.

[…]

The exposed server was found by a security researcher who goes by the name Mossab H, who informed the company of the security lapse. The researcher found one of the company’s Kibana dashboards for its backend Elasticsearch database, which contained several tables, including private messaging data and web-based content, for both its U.S. and Korean sites. Blind said the exposure only affects users who signed up or logged in between November 1 and December 19, and that it relates to “a single server, one among many servers on our platform,” according to Blind executive Kyum Kim in an email.

Blind only pulled the database after TechCrunch followed up by email a week later. The company began emailing its users on Thursday after we asked for comment.

“While developing an internal tool to improve our service for our users, we became aware of an error that exposed user data,” the email to affected users said.

Kim said there is “no evidence” that the database was misappropriated or misused, but did not say how the company came to that conclusion. When asked, the company would not say whether it will notify U.S. state regulators of the breach.

[…]

At its core, the anonymous social network allows users to sign up using their corporate email address, which is said to be linked only to a Blind member ID. Email addresses are “only used for verification” to allow users to talk to other anonymous people in their company, and the company claims that email addresses aren’t stored on its servers.

But after reviewing a portion of the exposed data, some of the company’s claims do not stand up.

We found that the database provided a real-time stream of user logins, user posts, comments and other interactions, allowing anyone to read private comments and posts. The database also revealed the unencrypted private messages between members but not their associated email addresses. (Given the high sensitivity of the data and the privacy of the affected users, we’re not posting data, screenshots or specifics of user content.)

Blind claims on its website that its email verification “is safe, as our patented infrastructure is set up so that all user account and activity information is completely disconnected from the email verification process.” It adds: “This effectively means there is no way to trace back your activity on Blind to an email address, because even we can’t do it.” Blind claims that the database “does not show any mapping of email addresses to nicknames,” but we found streams of email addresses associated with members who had not yet posted. In our brief review, we didn’t find any content, such as comments or messages, linked to email addresses, just a unique member ID, which could identify a user who posts in the future.

Many records did, however, contain plain text email addresses. Where a record didn’t store a plain text email address, it contained the user’s email as a hash in an unrecognized format — which may be decipherable to Blind employees, but not to anyone else.

The database also contained passwords, which were stored as MD5 hashes, a long-outdated algorithm that is nowadays easy to crack. Many of the passwords were quickly unscrambled using readily available tools when we tried. Kim denied this. “We don’t use MD5 for our passwords to store them,” he said. “The MD5 keys were a log and it does not represent how we are managing data. We use more advanced methods like salted hash and SHA2 on securing users’ data in our database.” (Logging in with an email address and unscrambled password would be unlawful, so we cannot verify this claim.) That may pose a risk to employees who use the same password on the app as they do to log in to their corporate accounts.
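For context, here is a minimal sketch (not Blind’s code) of why unsalted MD5 falls to readily available tools, and how the salted hashing Kim describes blocks precomputed lookups:

```python
import hashlib
import os

# A tiny "cracking dictionary": hash a list of common passwords once, then
# invert any unsalted MD5 digest by simple lookup. Real tools do this at
# scale with billions of candidates.
common = ["password", "123456", "letmein", "hunter2"]
lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in common}

leaked = hashlib.md5(b"hunter2").hexdigest()  # a digest as found in a breach
print(lookup.get(leaked))                     # -> hunter2, recovered instantly

# A per-user random salt defeats precomputed tables: the same password now
# hashes to a different value for every user.
salt = os.urandom(16)
stored = salt.hex() + ":" + hashlib.sha256(salt + b"hunter2").hexdigest()
print(stored)
```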

Despite the company’s apparent efforts to disassociate email addresses from its platform, login records in the database also stored user account access tokens — the same kind of tokens that recently put Microsoft and Facebook accounts at risk. If a malicious actor took and used a token, they could log in as that user — effectively removing any anonymity they might have had from the database in the first place.

As well-intentioned as the app may be, the database exposure puts users — who trusted the app to keep their information safe and their identities anonymous — at risk.

These aren’t just any users, but employees of some of the largest companies in Silicon Valley, who post about sexual harassment in the workplace and discuss job offers and workplace culture. Many of those who signed up in the past month include senior executives at major tech companies who don’t realize that their email address, which identifies them, could be sitting in plain text in an exposed database. Some users sent anonymous, private messages, in some cases making serious allegations against their colleagues or their managers, while others expressed concern that their employers were monitoring their emails for Blind sign-up emails.

Yet it likely escaped many that the app they were using, often for relief, for empathy or as a way to disclose wrongdoing, was almost entirely unencrypted and could be accessed not only by the app’s employees but also, for a time, by anyone on the internet.

Source: At Blind, a security lapse revealed private complaints from Silicon Valley employees | TechCrunch

New Photo Wake-Up System Turns Still Images Into 3D animations

The system, called Photo Wake-Up, creates a 3D animation from a single photo. In the paper, the researchers compare it to the moving portraits at Hogwarts, a fictitious part of the Harry Potter world that a number of tech companies have tried to recreate. Previous attempts have been mildly successful, but this system is impressive in its ability to isolate and create a pretty realistic 3D animation from a single image.

The researchers tested the system on 70 different photos they downloaded online, which included pictures of Stephen Curry, the anime character Goku, a Banksy artwork, and a Picasso painting. The team used a program called SMPL and deep learning, starting with a 2D cutout of the subject and then superimposing a 3D skeleton onto it. “Our key technical contribution, then, is a method for constructing an animatable 3D model that matches the silhouette in a single photo,” the team told MIT Technology Review.

The team reportedly used a warping algorithm to ensure the cutout and the skeleton were aligned. The team’s algorithm is also reportedly able to detect the direction a subject is looking and the way their head is angled. What’s more, in order to make sure the final animation is realistic and precise, the team used a proprietary user interface to correct for any errors and help with the animation’s texturing. An algorithm then isolates the subject from the 2D image, fills in the remaining space, and animates the subject.

Source: New Photo Wake-Up System Turns Still Images Into 3D animations

An Amoeba-Based Computer Calculated Approximate Solutions to an 8-City Traveling Salesman Problem

A team of Japanese researchers from Keio University in Tokyo has demonstrated that an amoeba is capable of generating approximate solutions to a remarkably difficult math problem known as the “traveling salesman problem.”

The traveling salesman problem goes like this: Given an arbitrary number of cities and the distances between them, what is the shortest route a salesman can take that visits each city and returns to the salesman’s city of origin? It is a classic problem in computer science and is used as a benchmark test for optimization algorithms.

The traveling salesman problem is considered “NP-hard,” which means that the difficulty of calculating a correct solution blows up as more cities are added to the problem. For example, there are only three possible routes if there are four cities, but 360 possible routes if there are seven cities; in general, a problem with n cities has (n-1)!/2 distinct routes, a number that grows factorially.
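A quick illustration, not from the paper, of how the route count explodes and why brute force only works for tiny instances:

```python
from itertools import permutations
from math import factorial
import random

# Number of distinct round trips through n cities (fixed starting city,
# travel direction ignored): (n-1)!/2
def route_count(n: int) -> int:
    return factorial(n - 1) // 2

for n in (4, 7, 8):
    print(n, route_count(n))  # 4 -> 3, 7 -> 360, 8 -> 2520

# Brute force is feasible only for tiny instances like the amoeba's 8 cities.
n = 8
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 100)

def tour_length(order):
    # Sum each leg, including the return to the city of origin.
    return sum(dist[a][b] for a, b in zip(order, order[1:] + order[:1]))

best = min(permutations(range(1, n)), key=lambda p: tour_length((0,) + p))
print((0,) + best, tour_length((0,) + best))
```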

Despite the explosive increase in computational difficulty with each city added to the salesman’s itinerary, computer scientists have been able to calculate optimal solutions to this problem for thousands of cities since the early ’90s, and recent efforts have produced nearly optimal solutions for millions of cities.

Amoebas are single-celled organisms without anything remotely resembling a central nervous system, which makes them seem like less than suitable candidates for solving such a complex puzzle. Yet as these Japanese researchers demonstrated, a certain type of amoeba can be used to calculate nearly optimal solutions to the traveling salesman problem for up to eight cities. Even more remarkably, the amount of time it takes the amoeba to reach these nearly optimal solutions grows linearly, even though the number of possible solutions increases exponentially.

As detailed in a paper published this week in Royal Society Open Science, the amoeba used by the researchers is called Physarum polycephalum, which has been used as a biological computer in several other experiments. The reason this amoeba is considered especially useful in biological computing is that it can extend various regions of its body to find the most efficient route to a food source, and that it retreats from light.

To turn this natural feeding mechanism into a computer, the researchers placed the amoeba on a special plate with 64 channels that it can extend its body into. The plate is then placed on top of a nutrient-rich medium. The amoeba tries to extend its body to cover as much of the plate as possible and soak up the nutrients. Yet each channel in the plate can be illuminated, which causes the light-averse amoeba to retract from that channel.

To model the traveling salesman problem, each of the 64 channels on the plate was assigned a city code from A to H, plus a number from 1 to 8 that indicates the city’s position in the route. So, for example, if the amoeba extended its body into channels A3, B2, C4, and D1, the correct solution to the traveling salesman problem would be D, B, A, C, D. The reason for this is that D1 indicates that D should be the first city in the salesman’s itinerary, B2 that B should be the second city, A3 that A should be the third city, and so on.
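In code, that decoding step might look like this (an illustrative sketch of the channel scheme described above, not the researchers’ software):

```python
def decode(channels):
    """Turn occupied channels like 'D1' into an ordered tour."""
    order = {int(ch[1]): ch[0] for ch in channels}  # visit position -> city
    tour = [order[i] for i in sorted(order)]
    return tour + tour[:1]  # the salesman returns to the city of origin

print(decode(["A3", "B2", "C4", "D1"]))  # -> ['D', 'B', 'A', 'C', 'D']
```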

To guide the amoeba toward a solution to the traveling salesman problem, the researchers used a neural network that incorporated data about the amoeba’s current position and the distances between the cities to light up certain channels. The network was designed so that channels representing cities separated by greater distances are more likely to be illuminated than channels representing cities that are closer together.

When the algorithm manipulates the chip that the amoeba is on, it is basically coaxing the amoeba into taking forms that represent approximate solutions to the traveling salesman problem. As the researchers told Phys.org, they expect that it will be possible to manufacture chips that contain tens of thousands of channels, so that the amoeba can solve traveling salesman problems that involve hundreds of cities.

For now, however, the Japanese researchers’ experiment remains in the lab, but it provides the foundation for low-energy biological computers that harness the natural mechanisms of amoebas and other microorganisms to compute.

Source: An Amoeba-Based Computer Calculated Approximate Solutions to a Very Hard Math Problem – Motherboard

FCC fines Swarm $900,000 for unauthorized satellite launch

Swarm Technologies Inc will pay a $900,000 fine for launching and operating four small experimental communications satellites that risked “satellite collisions” and threatened “critical commercial and government satellite operations,” the Federal Communications Commission said on Thursday.


The California-based start-up, founded by former Google and Apple engineers in 2016, also agreed to enhanced FCC oversight and a requirement of pre-launch notices to the FCC for three years.

Swarm launched the satellites in India last January after the FCC rejected its application to deploy and operate them, citing concerns about the company’s tracking ability.

The FCC said Swarm had unlawfully transmitted signals between earth stations in the state of Georgia and the satellites for over a week. The investigation also found that Swarm performed unauthorized weather balloon-to-ground station tests and other unauthorized equipment tests prior to the satellites’ launch.

Swarm aims to provide low-cost space-based internet service and plans eventually to use a constellation of 100 satellites.

Swarm won permission from the FCC in August to reactivate the satellites and said at the time that it is “fully committed to complying with all regulations and has been working closely with the FCC,” noting that its satellites are “100 percent trackable.”

Source: FCC fines Swarm $900,000 for unauthorized satellite launch | Reuters

EU Diplomatic Comms Network, Which the NSA Reportedly Warned Could Be Easily Hacked, Was Hacked. But contents were boring.

The European Union’s network used for diplomatic communications, COREU, was infiltrated “for years” by hackers, the New York Times reported on Tuesday, with the unknown rogues behind the attack reportedly reposting the stolen communiqués to an “open internet site.”

The network in question connects EU leadership with other EU organizations, as well as the foreign ministries of member states. According to the Times, the attack was first discovered by security firm Area 1, which provided a bit more than 1,100 of the cables to the paper for examination. Some of the documents show unease over Donald Trump’s presidency and his relationship with the Russian government, while others contain tidbits such as Chinese President Xi Jinping’s feelings about the U.S.’s brimming trade war with his country and rumors about nuclear weapons deployment on the Crimean peninsula:

In one cable, European diplomats described a meeting between President Trump and President Vladimir V. Putin of Russia in Helsinki, Finland, as “successful (at least for Putin).”

Another cable, written after a July 16 meeting, relayed a detailed report and analysis of a discussion between European officials and President Xi Jinping of China, who was quoted comparing Mr. Trump’s “bullying” of Beijing to a “no-rules freestyle boxing match” … The cables include extensive reports by European diplomats of Russia’s moves to undermine Ukraine, including a warning on Feb. 8 that Crimea, which Moscow annexed four years ago, had been turned into a “hot zone where nuclear warheads might have already been deployed.”

Hackers were able to breach COREU after a phishing campaign aimed at officials in Cyprus gave them access to passwords that compromised the whole network, Area 1 chief executive Oren Falkowitz told the Times. An anonymous official at the U.S. National Security Agency added that the EU had received numerous warnings that the aging system could easily be infiltrated by malicious parties.

[…]

Fortunately for the EU, the Times wrote, the stolen information is primarily “low-level classified documents that were labeled limited and restricted,” while more sensitive communiqués were sent via a separate system (EC3IS) that European officials said is being upgraded and replaced. Additionally, although the documents were uploaded to an “open internet site,” the hackers apparently made no effort to publicize them, the paper added.

Source: EU Diplomatic Comms Network, Which the NSA Reportedly Warned Could Be Easily Hacked, Was Hacked

This AI Just Mapped Every Solar Panel in the United States

In some states, solar energy accounts for upwards of 10 percent of total electricity generation. It’s definitely a source of power that’s on the rise, whether it be to lessen our dependence on fossil fuels, nuclear power, or the energy grid, or simply to take advantage of the low costs. This form of energy, however, is highly decentralized, so it’s tough to know how much solar energy is being extracted, where, and by whom.

[…]

The system developed by Rajagopal, along with his colleagues Jiafan Yu and Zhecheng Wang, is called DeepSolar, and it’s an automated process whereby hi-res satellite photos are analyzed by an algorithm driven by machine learning. DeepSolar can identify solar panels, register their locations, and calculate their size. The system identified 1.47 million individual solar installations across the United States, whether they be small rooftop configurations, solar farms, or utility-scale systems. This exceeds the previous estimate of 1.02 million installations. The researchers have made this data available at an open-source website.

By using this new approach, the researchers were able to accurately scan billions of tiles of high-resolution satellite imagery covering the continental U.S., allowing them to classify and measure the size of solar systems in a few weeks rather than years, as per previous methods. Importantly, DeepSolar requires minimal human supervision.

DeepSolar map of solar panel usage across the United States.
Image: Deep Solar/Stanford University

“The algorithm breaks satellite images into tiles. Each tile is processed by a deep neural net to produce a classification for each pixel in a tile. These classifications are combined together to detect if a system—or part of—is present in the tile,” Rajagopal told Gizmodo.

The neural net can then determine which tiles contain a solar panel and which do not. The network architecture is such that, after training, the layers of the network produce an activation map, also known as a heat map, that outlines the panels. This can be used to obtain the size of each solar panel system.
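Based on that description, a rough sketch of the tile-and-classify pipeline might look like the following; the tile size and the stand-in classifier are assumptions for illustration, not DeepSolar’s actual model:

```python
import numpy as np

TILE = 64  # hypothetical tile size in pixels

def classify_tile(tile: np.ndarray) -> float:
    """Stand-in for the deep neural net: returns a score for P(panel in tile).
    A brightness heuristic is used here only so the sketch runs end to end."""
    return float(tile.mean() > 0.8)

def heat_map(image: np.ndarray) -> np.ndarray:
    """Break the image into tiles, score each, and assemble a coarse heat map."""
    h, w = image.shape[0] // TILE, image.shape[1] // TILE
    scores = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            scores[i, j] = classify_tile(
                image[i * TILE:(i + 1) * TILE, j * TILE:(j + 1) * TILE])
    return scores  # high-scoring cells outline candidate panels

image = np.random.rand(512, 512)  # stand-in for a hi-res satellite image
print(heat_map(image))
```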

Source: This AI Just Mapped Every Solar Panel in the United States

Turning Off Facebook Location Services Doesn’t Stop Tracking – you have to hide your IP address

Aleksandra Korolova has turned off Facebook’s access to her location in every way that she can. She has turned off location history in the Facebook app and told her iPhone that she “Never” wants the app to get her location. She doesn’t “check-in” to places and doesn’t list her current city on her profile.

Despite all this, she constantly sees location-based ads on Facebook. She sees ads targeted at “people who live near Santa Monica” (where she lives) and at “people who live or were recently near Los Angeles” (where she works as an assistant professor at the University of Southern California). When she traveled to Glacier National Park, she saw an ad for activities in Montana, and when she went on a work trip to Cambridge, Massachusetts, she saw an ad for a ceramics school there.

Facebook was continuing to track Korolova’s location for ads despite her signaling in all the ways that she could that she didn’t want Facebook doing that.

This was especially perturbing for Korolova, as she recounts on Medium, because she has studied the privacy harms that come from Facebook advertising. That includes how it could previously be used to gather data about an individual’s likes, estimated income and interests (work for which she and her co-author Irfan Faizullabhoy got a $2,000 bug bounty from Facebook), and how it can currently be used to target ads at a single house or building, if, say, an anti-choice group wanted to target women at a Planned Parenthood with an ad for baby clothes.

Korolova thought Facebook must be getting her location information from the IP addresses she used to log in from, which Facebook says it collects for security purposes. (It wouldn’t be the first time Facebook used information gathered for security purposes for advertising ones; advertisers can target Facebook users with the phone number they provided for two-factor protection of their account.) As the New York Times recently reported, lots of apps are tracking users’ movements with surprising granularity. The Times suggested turning off location services in your phone’s privacy settings to stop the tracking, but even then the apps can still get location information, by looking at the wifi network you use or your IP address.

When asked about this, Facebook said that’s exactly what it’s doing, that it considers this a completely normal thing to do, and that users should know this will happen if they closely read various Facebook websites.

“Facebook does not use WiFi data to determine your location for ads if you have Location Services turned off,” said a Facebook spokesperson by email. “We do use IP and other information such as check-ins and current city from your profile. We explain this to people, including in our Privacy Basics site and on the About Facebook Ads site.”

On Privacy Basics, Facebook gives advice for “how to manage your privacy” with regards to location but says that regardless of what you do, Facebook can still “understand your location using things like… information about your Internet connection.” This is reiterated on the “About Facebook Ads” site that says that ads might be based on your location which is garnered from “where you connect to the Internet” among other things.

Strangely, back in 2014, Facebook told businesses in a blog post that “people have control over the recent location information they share with Facebook and will only see ads based on their recent location if location services are enabled on their phone.” Apparently, that policy has changed. (Facebook said it would update this old post.)

Hey, maybe this is to be expected. You need an IP address to use the internet and, by the nature of how the internet works, you reveal it to an app or a website when you use them (though you can hide your IP address by using one provided by the Tor browser or a VPN). There are various companies that specialize in mapping the locations of IP addresses, and while it can sometimes be wildly inaccurate, an IP address will give you a rough approximation of your whereabouts, such as the state, city or zip code you are currently in. Many websites use IP address-derived location to personalize their offerings, and many advertisers use it to show targeted online ads. It means showing you ads for restaurants in San Francisco if you live there instead of ads for restaurants in New York. In that context, Facebook using this information to do the same thing is not terribly unusual.
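For a sense of what IP-derived location yields in practice, here is a minimal sketch using the third-party geoip2 Python library and MaxMind’s free GeoLite2 City database; both are illustrative assumptions, and nothing here implies Facebook uses these particular tools:

```python
# pip install geoip2 -- and download the free GeoLite2-City.mmdb database
# from MaxMind (both steps assumed here).
import geoip2.database

reader = geoip2.database.Reader("GeoLite2-City.mmdb")
resp = reader.city("128.101.101.101")        # any public IP address
print(resp.country.name)                     # e.g. United States
print(resp.subdivisions.most_specific.name)  # e.g. Minnesota
print(resp.city.name, resp.postal.code)      # roughly city/zip granularity,
reader.close()                               # often no finer than that
```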

“There is no way for people to opt out of using location for ads entirely,” said a Facebook spokesperson by email. “We use city and zip level location which we collect from IP addresses and other information such as check-ins and current city from your profile to ensure we are providing people with a good service—from ensuring they see Facebook in the right language, to making sure that they are shown nearby events and ads for businesses that are local to them.”

Source: Turning Off Facebook Location Services Doesn’t Stop Tracking

NASA fears internal server hacked, staff personal info swiped by miscreants

A server containing personal information, including social security numbers, of current and former NASA workers may have been hacked, and its data stolen, it emerged today.

According to an internal memo circulated among staff on Tuesday, in mid-October the US space agency investigated whether or not two of its machines holding employee records had been compromised, and discovered one of them may have been infiltrated by miscreants.

It was further feared that this sensitive personal data had been siphoned from the hijacked server. The agency’s top brass stressed no space missions were affected, and identity theft protection will be offered to all affected workers, past and present. The boffinry nerve-center’s IT staff have since secured the servers, and are combing through other systems to ensure they are fully defended, we’re told.

Anyone who joined, left, or transferred within the agency from July 2006 to October 2018 may have had their personal records swiped, according to NASA bosses. Right now, the agency employs roughly 17,300 people.

Source: Houston, we’ve had a problem: NASA fears internal server hacked, staff personal info swiped by miscreants • The Register

Facebook Allowed Netflix, Spotify and A Bank To Read And Delete Users’ Private Messages. And around 150 other companies got to see other private information without user consent.

Facebook gave more than 150 companies, including Microsoft, Netflix, Spotify, Amazon, and Yahoo, unprecedented access to users’ personal data, according to a New York Times report published Tuesday.

The Times obtained hundreds of pages of Facebook documents, generated in 2017, that show that the social network considered these companies business partners and effectively exempted them from its privacy rules.

Facebook allowed Microsoft’s search engine Bing to see the names of nearly all users’ friends without their consent, and allowed Spotify, Netflix, and the Royal Bank of Canada to read, write, and delete users’ private messages, and see participants on a thread.

It also allowed Amazon to get users’ names and contact information through their friends, let Apple access users’ Facebook contacts and calendars even if users had disabled data sharing, and let Yahoo view streams of friends’ posts “as recently as this summer,” despite publicly claiming it had stopped sharing such information a year ago, the report said. Collectively, applications made by these technology companies sought the data of hundreds of millions of people a month.

On Tuesday night, a Facebook spokesperson explained to BuzzFeed News that the social media giant solidified different types of partnerships with major tech and media companies for specific reasons. Apple, Amazon, Yahoo, and Microsoft, for example, were known as “integration partners,” and Facebook helped them build versions of the app “for their own devices and operating systems,” the spokesperson said.

Facebook solidified its first partnerships around 2009–2010, when the company was still a fledgling social network. Many of them were still active in 2017, the spokesperson said. The Times reported that some of them were still in effect this year.

Around 2010, Facebook linked up with Spotify, the Royal Bank of Canada, and Netflix. Once a user logged in and connected their Facebook profile with these accounts, these companies had access to that person’s private messages. The spokesperson acknowledged that there are probably other companies that also had this capability, but stressed that these partners were removed in 2015 and that “right now there is no evidence of any misuse of data.”

Other companies, such as Bing and Pandora, were able to see users’ public information, like their friend lists and what types of songs and movies they liked.

Source: Facebook Allowed Netflix, Spotify, And A Bank To Read And Delete Users’ Private Messages

The finger here is justly pointed at Facebook, but what many are missing is that the other companies also knew they were acting unethically by asking for and using this information. It also shows that privacy is something none of these companies respects, and that the only way of safeguarding it is by having legal frameworks that enforce it.

Amazon and Facebook Reportedly Had a Secret Data-Sharing Agreement, and It Explains So Much

Back in 2015, a woman named Imy Santiago wrote an Amazon review of a novel that she had read and liked. Amazon immediately took the review down and told Santiago she had “violated its policies.” Santiago re-read her review, didn’t see anything objectionable about it, so she tried to post it again. “You’re not eligible to review this product,” an Amazon prompt informed her.

When she wrote to Amazon about it, the company told her that her “account activity indicates you know the author personally.” Santiago did not know the author, so she wrote an angry email to Amazon and blogged about Amazon’s “big brother” surveillance.

I reached out to both Santiago and Amazon at the time to try to figure out what the hell happened here. Santiago, who is an indie book writer herself, told me that she’d been in the same ballroom with the author in New York a few months before at a book signing event, but had not talked to her, and that she had followed the author on Twitter and Facebook after reading her books. Santiago had never connected her Facebook account to Amazon, she said.

Amazon wouldn’t tell me much back in 2015. Spokesperson Julie Law told me by email at the time that the company “didn’t comment on individual accounts” but said, “when we detect that elements of a reviewer’s Amazon account match elements of an author’s Amazon account, we conclude that there is too much risk of review bias. This can erode customer trust, and thus we remove the review. I can assure you that we investigate each case.”

“We have built mechanisms, both manual and automated over the years that detect, remove or prevent reviews which violate guidelines,” Law added.

A new report in the New York Times about Facebook’s surprising level of data-sharing with other technology companies may shed light on those mechanisms:

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

If Amazon was sucking up data from Facebook about who knew whom, it may explain why Santiago’s review was blocked. Because Santiago had followed the author on Facebook, Amazon or its algorithms would see her name and contact information as being connected to the author there, according to the Times. Facebook reportedly didn’t let users know this data-sharing was happening nor get their consent, so Santiago, as well as the author presumably, wouldn’t have known this had happened.

Amazon declined to tell the New York Times about its data-sharing deal with Facebook but “said it used the information appropriately.” I asked Amazon how it was using the data obtained from Facebook, and whether it used it to make connections like the one described by Santiago. The answer was underwhelming.

“Amazon uses APIs provided by Facebook in order to enable Facebook experiences for our products,” said an Amazon spokesperson in a statement that didn’t quite answer the question. “For example, giving customers the option to sync Facebook contacts on an Amazon Tablet. We use information only in accordance with our privacy policy.”

Amazon declined our request to comment further.

Why was Facebook giving out this data about its users to other tech giants? The Times report is frustratingly vague, but it says Facebook “got more users” by partnering with the companies (though it’s unclear how), but also that it got data in return, specifically data that helped power its People You May Know recommendations. Via the Times:

The Times reviewed more than 270 pages of reports generated by the system — records that reflect just a portion of Facebook’s wide-ranging deals. Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”

The feature, introduced in 2008, continues even though some Facebook users have objected to it, unsettled by its knowledge of their real-world relationships. Gizmodo and other news outlets have reported cases of the tool’s recommending friend connections between patients of the same psychiatrist, estranged family members, and a harasser and his victim.

Facebook, in turn, used contact lists from the partners, including Amazon, Yahoo and the Chinese company Huawei — which has been flagged as a security threat by American intelligence officials — to gain deeper insight into people’s relationships and suggest more connections, the records show.

‘You scratch my algorithm’s back. I’ll scratch your algorithm’s back,’ or so the arrangement apparently went.

Back in 2017, I asked Facebook whether it was getting information from “third parties such as data brokers” to help power its creepily accurate friend recommendations. A spokesperson told me by email, “Facebook does not use information from data brokers for People You May Know,” in what now seems to be a purposefully evasive answer.

Facebook doesn’t want to tell us how its systems work. Amazon doesn’t want to tell us how its systems work. These companies are data mining us, sometimes in concert, to make uncomfortably accurate connections but also erroneous assumptions. They don’t want to tell us how they do it, suggesting they know it’s become too invasive to reveal. Thank god for leakers and lawsuits.

Source: Amazon and Facebook Reportedly Had a Secret Data-Sharing Agreement, and It Explains So Much

Ancient Hidden City Discovered Under Lake Titicaca

Five minutes away from the town of Tiquina, on the shores of Lake Titicaca, archaeologists found the remains of an ancient civilization under the waters of the lake.

The find was made 10 years ago by Christophe Delaere, an archaeologist from the Free University of Belgium, who followed information provided by the locals. Twenty-four submerged archaeological sites have been identified under the lake, according to the BBC.

The most significant of these sites is at Santiago de Ojjelaya, and the Bolivian government has recently agreed to build a museum there to preserve both the underwater structures and those which are on land.

Lake Titicaca. Photo by Alex Proimos CC BY SA 2.0

The project is supposed to be finished in 2020 and will cost an estimated $10 million. The Bolivian government is funding the project with help from UNESCO and is backed by the Belgian development cooperation agency.

The proposed building will have two parts and cover an area of about 2.3 acres (9,360 square meters). One part of the museum will be on the shore, and it will display artifacts that have been raised from the lake bottom. The second part will be partially submerged, with enormous glass walls that will look out under the lake, allowing visitors to see the “hidden city” below.

Old pottery from Tiwanaku at the Ethnologisches Museum, Berlin-Dahlem.

According to the Bolivia Travel Channel, the museum will facilitate the beginning of an archaeological tourism enterprise, which “will be a resort and archaeology research center, geology and biology, characteristics that typified it unique in the world [sic],” according to Wilma Alanoca Mamani, holder of the portfolio of the Plurinational State. Christophe Delaere said that the building’s design incorporates elements of architecture used by the Andean cultures who inhabited the area.

Jose Luis Paz, who is the director of heritage for Bolivia’s Ministry of Culture, says that two types of underwater ruins will be visible when the building is complete: religious/spiritual offering sites, primarily underwater, and places where people lived and worked, which were primarily on the shoreline. He went on to say that the spiritual sites were likely flooded much later than the settlements.

Chullpas from Tiwanaku epoch. Photo by Diego Delso CC BY-SA 4.0

A team of archaeological divers and Bolivian and Belgian experts have located thousands of items in the underwater sites. Some of these pieces will be brought up, but the majority will remain underwater as they are quite well-preserved.

Wilma Mamani said that more than 10,000 items have been found, including gold and ceramic pieces and various kinds of bowls and other vessels. The items are from the pre-Inca Tiwanaku civilization. Some of the artifacts have been estimated to be 2,000 years old, and others have been dated back to when the Tiwanaku empire was one of the primary Andean civilizations.

Gateway of the Sun, Tiwanaku, drawn by Ephraim Squier in 1877.

Tiwanaku was a major civilization in Bolivia, with the main city built around 13,000 feet above sea level, near Lake Titicaca, which made it one of the highest urban centers ever built.

The city reached its zenith between 500 AD and 1000 AD, and, at its height, was home to about 10,000 people. It’s unclear exactly when the civilization took hold, but it is known that people started settling around Lake Titicaca about 2000 BC.

The Gateway of the Sun from the Tiwanaku civilization in Bolivia.

According to Live Science, the city’s ancient name is unknown, since its people never developed a written language, but archaeological evidence suggests that Tiwanaku cultural influence reached across the southern Andes, into Argentina, Peru, and Chile, as well as Bolivia.

Tiwanaku began to decline around 1000 AD, and the city was eventually abandoned. Even when it fell out of use, it stayed an important place in the mythology of the Andean people, who viewed it as a religious site.

Source: Ancient Hidden City Discovered Under Lake Titicaca

Machine learning-detected signal predicts time to earthquake

Machine-learning research published in two related papers today in Nature Geoscience reports the detection of seismic signals accurately predicting the Cascadia fault’s slow slippage, a type of failure observed to precede large earthquakes in other subduction zones.

Los Alamos National Laboratory researchers applied machine learning to analyze Cascadia data and discovered the megathrust broadcasts a constant tremor, a fingerprint of the fault’s displacement. More importantly, they found a direct parallel between the loudness of the fault’s acoustic signal and its physical changes. Cascadia’s groans, previously discounted as meaningless noise, foretold its fragility.

“Cascadia’s behavior was buried in the data. Until machine learning revealed precise patterns, we all discarded the continuous signal as noise, but it was full of rich information. We discovered a highly predictable sound pattern that indicates slippage and fault failure,” said Los Alamos scientist Paul Johnson. “We also found a precise link between the fragility of the fault and the signal’s strength, which can help us more accurately predict a megaquake.”


Source: Machine learning-detected signal predicts time to earthquake

Google isn’t the company that we should have handed the Web over to: why MS switching to Chromium is a bad idea

With Microsoft’s decision to end development of its own Web rendering engine and switch to Chromium, control over the Web has functionally been ceded to Google. That’s a worrying turn of events, given the company’s past behavior.

[…]

Google is already a company that exercises considerable influence over the direction of the Web’s development. By owning both the most popular browser, Chrome, and some of the most-visited sites on the Web (in particular the namesake search engine, YouTube, and Gmail), Google has on a number of occasions used its might to deploy proprietary tech and put the rest of the industry in the position of having to catch up.

[…]

This is a company that, time and again, has tried to push the Web into a Google-controlled proprietary direction to improve the performance of Google’s online services when used in conjunction with Google’s browser, consolidating Google’s market positioning and putting everyone else at a disadvantage. Each time, pushback has come from the wider community, and so far, at least, the result has been industry standards that wrest control from Google’s hands. This action might already provoke doubts about the wisdom of handing effective control of the Web’s direction to Google, but at least a case could be made that, in the end, the right thing was done.

But other situations have had less satisfactory resolutions. YouTube has been a particular source of problems. Google controls a large fraction of the Web’s streaming video, and the company has, on a number of occasions, made changes to YouTube that make it worse in Edge and/or Firefox. Sometimes these changes have improved the site experience in Chrome, but even that isn’t always the case.

A person claiming to be a former Edge developer has today described one such action. For no obvious reason, Google changed YouTube to add a hidden, empty HTML element that overlaid each video. This element disabled Edge’s fastest, most efficient hardware accelerated video decoding. It hurt Edge’s battery-life performance and took it below Chrome’s. The change didn’t improve Chrome’s performance and didn’t appear to serve any real purpose; it just hurt Edge, allowing Google to claim that Chrome’s battery life was actually superior to Edge’s. Microsoft asked Google if the company could remove the element, to no avail.

The latest version of Edge addresses the YouTube issue and reinstates Edge’s performance. But when the company talks of having to do extra work to ensure EdgeHTML is compatible with the Web, this is the kind of thing that Microsoft has been forced to do.

[…]

Microsoft’s decision both gives Google an ever-larger slice of the pie and weakens Microsoft’s position as an opposing voice. Even with Edge and Internet Explorer having a diminished share of the market, Microsoft has retained some sway; its IIS Web server commands a significant Web presence, and there’s still value in having new protocols built in to Windows, as it increases their accessibility to software developers.

But now, Microsoft is committed to shipping and supporting whatever proprietary tech Google wants to develop, whether Microsoft likes it or not. Microsoft has been very explicit that its adoption of Chromium is to ensure maximal Chrome compatibility, and the company says that it is developing new engineering processes to ensure that it can rapidly integrate, test, and distribute any changes from upstream—it doesn’t ever want to be in the position of substantially lagging behind Google’s browser.

[…]

Web developers have historically only bothered with such trivia as standards compliance and testing their pages in multiple browsers when the market landscape has forced them to. This is what made Firefox’s early years so painful: most developers tested in Internet Explorer and nothing else, leaving Firefox compatibility to chance. As Firefox, and later Chrome, rose to challenge Internet Explorer’s dominance, cross-browser testing became essential, and standards adherence became more valuable.

With Chrome, Firefox, and Edge all as going concerns, a fair amount of discipline is imposed on Web developers. But with Edge removed and Chrome taking a large majority of the market, making the effort to support Firefox becomes more expensive.

Mozilla CEO Chris Beard fears that this consolidation could make things harder for Mozilla—an organization that exists to ensure that the Web remains a competitive landscape that offers meaningful options and isn’t subject to any one company’s control. Mozilla’s position is already tricky, dependent as it is on Google’s funding.

[…]

By relegating Firefox to being the sole secondary browser, Microsoft has just made it that much harder to justify making sites work in Firefox. The company has made designing for Chrome and ignoring everything else a bit more palatable, and Mozilla’s continued existence is now that bit more marginal. Microsoft’s move puts Google in charge of the direction of the Web’s development. Google’s track record shows it shouldn’t be trusted with such a position.

Source: Google isn’t the company that we should have handed the Web over to | Ars Technica

Google’s Feature for Predicting Flight Delays

Google is adding its flight delay predictions feature to the Google Assistant.

That means starting this holiday season, you should be able to ask the Google Assistant if your flight is on time and get a response showing the status of your flight, the length of a delay (if there is one), and even the cause (assuming that info is available).

“Over the next few weeks,” Google says, its flight delay predictor will also start notifying you in cases where its system is 85 percent confident, a figure deduced by looking at data from past flight records and combining that with a bit of machine-learning smarts to determine whether your flight might be late. That leaves some room for error, so it’s also important to note that even when Google predicts that your flight is delayed, it may still recommend that you show up at the airport as usual.
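As described, the notification rule is a simple confidence threshold; a hypothetical sketch:

```python
CONFIDENCE_THRESHOLD = 0.85  # per the article, Google notifies above this level

def should_notify(p_delay: float) -> bool:
    """Notify only when the model is at least 85 percent confident."""
    return p_delay >= CONFIDENCE_THRESHOLD

print(should_notify(0.90))  # True
print(should_notify(0.80))  # False: confident enough a year ago, not today
```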

Still, in the space of a year, Google seems to have upped its confidence threshold for predicted delays from 80 to 85 percent.

Source: Google’s Feature for Predicting Flight Delays Actually Sounds Useful Now

‘Farout,’ the most-distant solar system object discovered yet

For the first time, an object in our solar system has been found more than 100 times farther than Earth is from the sun.

The International Astronomical Union’s Minor Planet Center announced the discovery Monday, calling the object 2018 VG18. But the researchers who found it are calling it “Farout.”

They believe the spherical object is a dwarf planet more than 310 miles in diameter, with a pinkish hue. That color has been associated with objects that are rich in ice, and given its distance from the sun, that isn’t hard to believe. Its slow orbit probably takes more than 1,000 years to make one trip around the sun, the researchers said.

The distance between the Earth and the sun is an AU, or astronomical unit — the equivalent of about 93 million miles. Farout is 120 AU from the sun. Eris, the next most distant object known, is 96 AU from the sun. For reference, Pluto is 34 AU away.
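In concrete terms, using the article’s rounded figure of 93 million miles per AU:

```python
AU_MILES = 93_000_000  # approximate miles per astronomical unit, per the article

for name, au in [("Farout", 120), ("Eris", 96), ("Pluto", 34)]:
    print(f"{name}: {au} AU = {au * AU_MILES:,} miles")
# Farout: 120 AU = 11,160,000,000 miles (over 11 billion miles out)
```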
The object was found by the Carnegie Institution for Science’s Scott S. Sheppard, the University of Hawaii’s David Tholen and Northern Arizona University’s Chad Trujillo — and it’s not their first discovery.

The team has been searching for a super-Earth-size planet on the edge of our solar system, known as Planet Nine or Planet X, since 2014. They first suggested the existence of this possible planet in 2014 after finding “Biden” at 84 AU. Along the way, they have discovered more distant solar system objects, suggesting that the gravity of something massive is influencing their orbits.

Source: ‘Farout,’ the most-distant solar system object discovered – CNN

Researchers demonstrate teleportation using on-demand photons from quantum dots

A team of researchers from Austria, Italy and Sweden has successfully demonstrated teleportation using on-demand photons from quantum dots. In their paper published in the journal Science Advances, the group explains how they accomplished this feat and how it applies to future quantum communications networks.

Scientists and many others are very interested in developing truly secure quantum communication networks—it is believed that such networks will be safe from hacking or eavesdropping due to their very nature. But, as the researchers with this new effort point out, there are still some problems standing in the way. One of these is the difficulty in amplifying signals. One way to get around this problem, they note, is to generate photons on-demand as part of a quantum repeater—this helps to effectively handle the high clock rates. In this new effort, they have done just that, using semiconductor quantum dots.

Prior work surrounding the possibility of using quantum dots has shown that they are a feasible way to demonstrate teleportation, but only under certain conditions, none of which allowed for on-demand applications. Because of that, they have not been considered a push-button technology. In this new effort, the researchers overcame this problem by creating quantum dots that were highly symmetrical, using an etching method to create the hole pairs in which the quantum dots develop. The process they used was called an XX (biexciton)–X (exciton) cascade. They then employed a dual-pulsed excitation scheme to populate the desired XX state (after two pairs shed photons, they retained their entanglement). Doing so allowed for the production of on-demand single photons suitable for use in teleportation. The dual-pulsed excitation scheme was critical to the process, the team notes, because it minimized re-excitation.

The researchers tested their process first on subjective inputs and then on different quantum dots, proving that it could work across a broad range of applications. They followed that up by creating a framework that other researchers could use as a guide in replicating their efforts. But they also acknowledged that there is still more work to be done (mostly in raising the clock rates) before the technique could be used in real-world applications. They expect it will be just a few more years.


Source: Researchers demonstrate teleportation using on-demand photons from quantum dots

An AI system has just created the most realistic looking photos ever

AI systems can now create images of humans that are so lifelike they look like photographs, except the people in them don’t really exist.

See for yourself. Each picture below is an output produced by a generative adversarial network (GAN), a system made up of two networks: a generator and a discriminator. Developers have used GANs to create everything from artwork to dental crowns.


Some of the images created from Nvidia’s style transfer GAN. Image credit: Karras et al. and Nvidia

The performance of a GAN is often tied to how realistic its results are. What started out as tiny, blurry, greyscale images of human faces four years ago has since morphed into full-colour portraits.


Early results from when the idea of GANs was first introduced. Image credit: Goodfellow et al.

The new GAN built by Nvidia researchers rests on the idea of “style transfer”. First, the generator network learns a constant input taken from a photograph of a real person. This face is used as a reference and encoded as a vector that is mapped to a latent space describing all the features in the image.

These features correlate to the essential characteristics that make up a face: eyes, nose, mouth, hair, pose, face shape, etc. After the generator learns these features it can begin adjusting these details to create a new face.

The transformation that determines how the appearance of these features changes is derived from a second photo. In other words, the original photo copies the style of another photo, so the end result is a sort of mishmash of both images. Finally, an element of noise is also added to generate random details, such as the exact placement of hairs, stubble, freckles, or skin pores, to make the images look more realistic.

“Our generator thinks of an image as a collection of ‘styles,’ where each style controls the effects at a particular scale,” the researchers explained. The different features can be broken down into various styles: coarse styles include the pose, hair and face shape; middle styles are made up of facial features; and fine styles determine the overall colour.


How the different style types are learned and transferred by crossing a photo with a source photo. Image credit: Karras et al. and Nvidia.

The different style types can, therefore, be crossed continuously with other photos to generate a range of completely new images to cover pictures of people of different ethnicities, genders and ages. You can watch a video demonstration of this happening below.
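As a toy sketch of that per-scale crossing (the layer counts and vector sizes here are invented for illustration; this is not Nvidia’s code):

```python
import numpy as np

rng = np.random.default_rng(0)
LAYERS, DIM = 12, 8                       # hypothetical: 12 style inputs of size 8
style_a = rng.normal(size=(LAYERS, DIM))  # styles derived from photo A
style_b = rng.normal(size=(LAYERS, DIM))  # styles derived from photo B

# Early layers control coarse traits, later layers progressively finer ones.
coarse, middle, fine = slice(0, 4), slice(4, 8), slice(8, 12)

# Take pose/hair/face shape from B, keep everything else from A.
mixed = style_a.copy()
mixed[coarse] = style_b[coarse]

# A real StyleGAN generator would now synthesize an image from `mixed`;
# here we just verify the mix is per-scale.
print(np.allclose(mixed[fine], style_a[fine]))      # True: fine detail from A
print(np.allclose(mixed[coarse], style_b[coarse]))  # True: coarse style from B
```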

The discriminator network inspects the images coming from the generator and tries to work out if they’re real or fake. The generator improves over time so that its outputs consistently trick the discriminator.

Source: An AI system has just created the most realistic looking photos ever • The Register

Report: Johnson & Johnson Knew About Asbestos in Its Baby Powder Products for Decades

An explosive new report by Reuters released Friday may upturn the narrative surrounding the potential cancer risks of talcum powder. According to the report, Johnson & Johnson—the makers of the most popular consumer talc product, Baby Powder—knew for decades that its products at times contained carcinogenic asbestos, but did everything possible to keep its findings shrouded from the public and even health officials.

The report’s allegations are sourced from hundreds of internal company documents, according to Reuters, which the news agency has also made available to the public. Many of the documents were obtained during the course of legal battles waged against Johnson & Johnson over the years by customers alleging its products had caused their cancers; others were obtained by various journalists and news organizations.

Collectively, the documents seem to paint a damning picture of the company’s actions—and inaction—surrounding its products.

Talc is a soft white clay pulled up from the earth in mines. In these mines, asbestos—a broad term for six kinds of minerals that can be found in long, thin fibers—is regularly found alongside deposits of talc. But for decades, the company assured the public and regulators that its products were free of asbestos, even as some internal and independent tests found otherwise, according to the report.

Per Reuters:

In 1976, as the U.S. Food and Drug Administration (FDA) was weighing limits on asbestos in cosmetic talc products, J&J assured the regulator that no asbestos was “detected in any sample” of talc produced between December 1972 and October 1973. It didn’t tell the agency that at least three tests by three different labs from 1972 to 1975 had found asbestos in its talc – in one case at levels reported as “rather high.”

Reuters reports that the company was particularly sneaky in handling the first known lawsuit from a former customer, Darlene Coker, who alleged in 1997 that its products had caused her mesothelioma, a cancer that develops in the lining of the lungs. According to the Reuters report, J&J successfully denied requests by Coker’s attorney to turn over internal documents that would have demonstrated the presence of asbestos in its mining operations and products (Coker’s lungs were shown to be loaded with the sort of asbestos often seen in workers who are exposed to talc in large quantities). Without the documents, Coker dropped the case in 1999 and died a decade later.

Since Coker’s failed lawsuit, more than 11,000 plaintiffs have alleged that J&J’s products caused their cancers, according to Reuters. Many of these lawsuits, which often did not assert that asbestos contamination might have been the major contributing factor, have similarly failed, but some cases that have gone to trial have resulted in verdicts in favor of the plaintiffs. Just this July, a Missouri jury ordered the company to pay $4.69 billion in damages to 22 women and their families. In 2017, however, a California judge reversed a $417 million verdict and ordered a new trial.

Source: Report: Johnson & Johnson Knew About Asbestos in Its Baby Powder Products for Decades

Pornhub 2018 in review

Follow along to see the most interesting data points amassed by our team of statisticians, all presented with colorful charts and insightful commentary. Enjoy!

The Year in Numbers
Top Searches & Pornstars
Traffic & Time on Site
Gender Demographics
Age Demographics
Devices & Technology
Celebrity Searches
Movie & Game Searches
Events, Holidays & Sports
Top 20 Countries in Depth

Source: https://www.pornhub.com/insights/2018-year-in-review

Team that invented way to enlarge objects now invents method to shrink objects to the nanoscale, decreasing their volume 1,000x

MIT researchers have invented a way to fabricate nanoscale 3-D objects of nearly any shape. They can also pattern the objects with a variety of useful materials, including metals, quantum dots, and DNA.

“It’s a way of putting nearly any kind of material into a 3-D pattern with nanoscale precision,” says Edward Boyden, an associate professor of biological engineering and of brain and cognitive sciences at MIT.

Using the new technique, the researchers can create any shape and structure they want by patterning a scaffold with a laser. After attaching other useful materials to the scaffold, they shrink it, generating structures one thousandth the volume of the original.

These tiny structures could have applications in many fields, from optics to medicine to robotics, the researchers say. The technique uses equipment that many biology and materials science labs already have, making it widely accessible for researchers who want to try it.

Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, is one of the senior authors of the paper, which appears in the Dec. 13 issue of Science. The other senior author is Adam Marblestone, a Media Lab research affiliate, and the paper’s lead authors are graduate students Daniel Oran and Samuel Rodriques.

Implosion fabrication

Existing techniques for creating nanostructures are limited in what they can accomplish. Etching patterns onto a surface with light can produce 2-D nanostructures but doesn’t work for 3-D structures. It is possible to make 3-D nanostructures by gradually adding layers on top of each other, but this process is slow and challenging. And, while methods exist that can directly 3-D print nanoscale objects, they are restricted to specialized materials like polymers and plastics, which lack the functional properties necessary for many applications. Furthermore, they can only generate self-supporting structures. (The technique can yield a solid pyramid, for example, but not a linked chain or a hollow sphere.)

To overcome these limitations, Boyden and his students decided to adapt a technique that his lab developed a few years ago for high-resolution imaging of brain tissue. This technique, known as expansion microscopy, involves embedding tissue into a hydrogel and then expanding it, allowing for high-resolution imaging with a regular microscope. Hundreds of research groups in biology and medicine are now using expansion microscopy, since it enables 3-D visualization of cells and tissues with ordinary hardware.

By reversing this process, the researchers found that they could create large-scale objects embedded in expanded hydrogels and then shrink them to the nanoscale, an approach that they call “implosion fabrication.”

As they did for expansion microscopy, the researchers used a very absorbent material made of polyacrylate, commonly found in diapers, as the scaffold for their nanofabrication process. The scaffold is bathed in a solution that contains molecules of fluorescein, which attach to the scaffold when they are activated by laser light.

Using two-photon microscopy, which allows for precise targeting of points deep within a structure, the researchers attach fluorescein molecules to specific locations within the gel. The fluorescein molecules act as anchors that can bind to other types of molecules that the researchers add.

“You attach the anchors where you want with light, and later you can attach whatever you want to the anchors,” Boyden says. “It could be a quantum dot, it could be a piece of DNA, it could be a gold nanoparticle.”

“It’s a bit like film photography—a latent image is formed by exposing a sensitive material in a gel to light. Then, you can develop that latent image into a real image by attaching another material, silver, afterwards. In this way implosion fabrication can create all sorts of structures, including gradients, unconnected structures, and multimaterial patterns,” Oran says.

Once the desired molecules are attached in the right locations, the researchers shrink the entire structure by adding an acid. The acid blocks the negative charges in the polyacrylate gel so that they no longer repel each other, causing the gel to contract. Using this technique, the researchers can shrink the objects 10-fold in each dimension (for an overall 1,000-fold reduction in volume). This ability to shrink not only allows for increased resolution, but also makes it possible to assemble materials in a low-density scaffold. This enables easy access for modification, and later the material becomes a dense solid when it is shrunk.
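To put numbers on that (our own back-of-the-envelope illustration, not a calculation from the paper): a 10-fold shrink along each axis compounds across all three dimensions, so for a cube of side L,

    V_final = (L/10)^3 = L^3/1000 = V_initial/1000

which is where the overall 1,000-fold volume reduction comes from. The same linear factor of 10 applies to feature sizes, so patterns written at a given laser resolution come out roughly 10 times finer after shrinking.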

“People have been trying to invent better equipment to make smaller nanomaterials for years, but we realized that if you just use existing systems and embed your materials in this gel, you can shrink them down to the nanoscale, without distorting the patterns,” Rodriques says.

Currently, the researchers can create objects that are around 1 cubic millimeter, patterned with a resolution of 50 nanometers. There is a tradeoff between size and resolution: If the researchers want to make larger objects, about 1 cubic centimeter, they can achieve a resolution of about 500 nanometers. However, that resolution could be improved with further refinement of the process, the researchers say.

Read more at: https://phys.org/news/2018-12-team-method-nanoscale.html

Source: Team invents method to shrink objects to the nanoscale

How to Stop Windows 10 From Collecting Activity Data on You – even after you’ve disabled the activity-tracking option

Another day, another tech company being disingenuous about its privacy practices. This time it’s Microsoft, after it was discovered that Windows 10 continues to track users’ activity even after they’ve disabled the activity-tracking option in their Windows 10 settings.

You can try it yourself. Pull up Windows 10’s Settings, go to the Privacy section, and disable everything in your Activity History. Give it a few days. Visit the Windows Privacy Dashboard online, and you’ll find that some applications, media, and even browsing history still show up.

Screenshot: application data found on the Windows Privacy Dashboard website (Brendan Hesse)

Sure, this data can be manually deleted, but the fact that it’s being tracked at all is not a good look for Microsoft, and plenty of users have expressed their frustration online since the oversight was discovered. Luckily, Reddit user a_potato_is_missing found a workaround that blocks Windows and the Windows Store from tracking your PC activity, which comes from a tutorial originally posted by Tenforums user Shawn Brink.

We gave Brink’s strategy a shot and found it to be an effective workaround worth sharing for those who want to limit Microsoft’s activity-tracking for good. It’s a simple process that only requires you to download and open some files, but we’ll guide you through the steps since there are a few caveats you’ll want to know about.

How to disable the activity tracker in Windows 10

Brink’s method works by editing values in your Windows Registry (via a .REG file) to block the Activity Tracker. For transparency, here are the changes the file makes:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\System

PublishUserActivities DWORD

0 = Disable
1 = Enable
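
For those who’d rather script the edit than run a downloaded .REG file, the same change can be made in a few lines of Python using the standard-library winreg module. This is a minimal sketch of ours, not part of Brink’s tutorial; set_activity_tracking is just an illustrative helper name, and the script must be run from an elevated (administrator) prompt since it writes to HKEY_LOCAL_MACHINE.

import winreg  # standard library; Windows only

# The policy key and value that Brink's .REG file edits.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\System"

def set_activity_tracking(enabled: bool) -> None:
    # Create the key if it doesn't exist yet, then write the
    # PublishUserActivities DWORD: 1 = enable, 0 = disable.
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                             winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, "PublishUserActivities", 0,
                          winreg.REG_DWORD, int(enabled))
    finally:
        winreg.CloseKey(key)

set_activity_tracking(False)  # disable tracking; pass True to re-enable

As with the .REG file, the change only takes effect after you sign out and back in (see step 5 below).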

These changes only apply to Activity Tracking and shouldn’t affect your operating system in any other way. Still, if something does go wrong, you can reverse this process, which is explained in step 7. To get started with Brink’s alterations:

  1. Download the “Disable_Activity_history.reg” file from Brink’s tutorial to any folder you want.
  2. Double-click on the .REG file to open it, and then click “Run” to begin applying the changes to your registry.
  3. You will get the usual Windows UAC prompt asking to allow the file to make changes to your computer. Click “Yes.”
  4. A warning box will pop up alerting you that making changes to your registry can result in applications and features not working, or cause system errors—all of which is true, but we haven’t run into any issues from applying this fix. If you’re cool with that, click “Yes” to apply the changes. The process should happen immediately, after which you’ll get one final dialog box informing you of the information added to the registry. Click “OK” to close the file and wrap up the registry change.
  5. After the registry edit is complete, you’ll need to sign out of Windows (press Windows Key+X, then Shut down or sign out > Sign out) and sign back in to apply the registry changes.
  6. When you sign back in, your activity will no longer be tracked by Windows, even the stuff that was slipping through before.
  7. To reverse the registry changes and re-enable the Activity Tracker, download the “Enable_Activity_history.reg” file also found on the Tenforums tutorial, then follow the same steps above.

Update 12/13/2018 at 12:30pm PT: Microsoft has released a statement to Neowin about the aforementioned “Activity History.” Here’s the statement from Windows & devices group privacy officer Marisa Rogers:

“Microsoft is committed to customer privacy, being transparent about the data we collect and use for your benefit, and we give you controls to manage your data. In this case, the same term ‘Activity History’ is used in both Windows 10 and the Microsoft Privacy Dashboard. Windows 10 Activity History data is only a subset of the data displayed in the Microsoft Privacy Dashboard. We are working to address this naming issue in a future update.”

As Neowin notes, Microsoft says there are two settings you should look into if you want to keep your PC from uploading your activity data:

“One is to go to Settings -> Privacy -> Activity history, and make sure that ‘Let Windows sync my activities from this PC to the cloud’ is unchecked. Also, you can go to Settings -> Privacy -> Diagnostics & feedback, and make sure that it’s set to basic.”

Source: How to Stop Windows 10 From Collecting Activity Data on You

Virgin Galactic flight sends first astronauts to edge of space – successfully. Are you looking, Elon?

Virgin Galactic completed its longest rocket-powered flight ever on Thursday, taking a step ahead in the nascent business of space tourism.

The two pilots on board Virgin Galactic’s spacecraft Unity became the company’s first astronauts. Virgin Group founder Richard Branson was on hand to watch the historic moment.

“Many of you will know how important the dream of space travel is to me personally. Ever since I watched the moon landings as a child I have looked up to the skies with wonder,” Branson said after the flight. “This is a momentous day and I could not be more proud of our teams who together have opened a new chapter of space exploration.”

Virgin Galactic said the test flight reached an altitude of 51.4 miles, or nearly 83 kilometers. The U.S. military and NASA consider pilots who have flown above 80 kilometers to be astronauts. The Federal Aviation Administration announced on Thursday that pilots Mark Stucky and C.J. Sturckow would receive commercial astronaut wings at a ceremony in Washington, D.C., early next year.

Lifted by the jet-powered mothership Eve, the spacecraft Unity took off from the Mojave Air and Space Port in the California desert. Upon reaching an altitude above 40,000 feet, the carrier aircraft released Unity. The two-member crew then piloted the spacecraft in a roaring burn which lasted 60 seconds. The flight pushed Unity to a speed of Mach 2.9, nearly three times the speed of sound, as it screamed into a climb toward the edge of space.

After performing a slow backflip in microgravity, Unity turned and glided back to land at Mojave. This was the company’s fourth rocket-powered flight of its test program.

Unity is the name of the spacecraft built by The Spaceship Company, which Branson also owns. This rocket design is officially known as SpaceShipTwo (SS2).

Unity also carried four NASA-funded payloads on this mission. The agency said the four technology experiments “will collect valuable data needed to mature the technologies for use on future missions.”

“Inexpensive access to suborbital space greatly benefits the technology research and broader spaceflight communities,” said Ryan Dibley, NASA’s flight opportunities campaign manager, in a statement.

The spacecraft underwent extensive engine testing and seven glide tests before Virgin Galactic said it was ready for a powered test flight — a crucial milestone before the company begins sending tourists to the edge of the atmosphere. Each of the previous three test flights was successful in pushing the spacecraft’s limits farther.

Source: Virgin Galactic flight sends first astronauts to edge of space

Yes, it can be done without rockets exploding all over the place or going the wrong direction. Well done, this is how commercial space flight should look.

Taylor Swift Show Used to Stalk Visitors with Hidden Face Recognition in Kiosk Displays

At a Taylor Swift concert earlier this year, fans were reportedly treated to something they might not expect: a kiosk displaying clips of the pop star that served as a covert surveillance system. It’s a tale of creeping 21st-century surveillance as unnerving as it is predictable. But the whole ordeal has left us wondering what the hell is going on.

As Rolling Stone first reported, the kiosk was allegedly taking photos of concertgoers and running them through a facial recognition database in an effort to identify any of Swift’s stalkers. But the dragnet effort reportedly involved snapping photos of anyone who stared into the kiosk’s watchful abyss.

“Everybody who went by would stop and stare at it, and the software would start working,” Mike Downing, chief security officer at live entertainment company Oak View Group and its subsidiary Prevent Advisors, told Rolling Stone. Downing was at Swift’s concert, which took place at the Rose Bowl in Los Angeles in May, to check out a demo of the system. According to Downing, the photos taken by the camera inside of the kiosk were sent to a “command post” in Nashville. There, the images were scanned against images of hundreds of Swift’s known stalkers, Rolling Stone reports.

The Rolling Stone report has taken off in the past day, with Quartz, Vanity Fair, the Hill, the Verge, Business Insider, and others picking up the story. But the only real information we have is from Downing. And so far no one has answered some key questions—including the Oak View Group and Prevent Advisors, which have not responded to multiple requests for comment.

For starters, who is running this face recognition system? Was Taylor Swift or her people informed this reported measure would be in place? Were concertgoers informed that their photos were being taken and sent to a facial recognition database in another state? Were the photos stored, and if so, where and for how long? There were reportedly more than 60,000 people at the Rose Bowl concert—how many of those people had their mug snapped by the alleged spybooth? Did the system identify any Swift stalkers—and, if they did, what happened to those people?

It also remains to be seen whether there was any indication on the kiosk that it was snapping fans’ faces. But as Quartz pointed out, “concert venues are typically private locations, meaning even after security checkpoints, its owners can subject concert-goers to any kind of surveillance they want, including facial recognition.”

Source: Taylor Swift Show Used to Demo Face Recognition: Report

Very very creepy

Scientists identify vast underground ecosystem containing billions of micro-organisms

The Earth is far more alive than previously thought, according to “deep life” studies that reveal a rich ecosystem beneath our feet that is almost twice the size of all the world’s oceans.

Despite extreme heat, no light, minuscule nutrition and intense pressure, scientists estimate this subterranean biosphere is teeming with between 15bn and 23bn tonnes of micro-organisms, hundreds of times the combined weight of every human on the planet.

Researchers at the Deep Carbon Observatory say the diversity of underworld species bears comparison to the Amazon or the Galápagos Islands, but unlike those places the environment is still largely pristine because people have yet to probe most of the subsurface.

“It’s like finding a whole new reservoir of life on Earth,” said Karen Lloyd, an associate professor at the University of Tennessee in Knoxville. “We are discovering new types of life all the time. So much of life is within the Earth rather than on top of it.”

The team combines 1,200 scientists from 52 countries in disciplines ranging from geology and microbiology to chemistry and physics. A year before the end of their 10-year study, they are presenting an amalgamation of findings to date ahead of the American Geophysical Union’s annual meeting, which opens this week.

Samples were taken from boreholes more than 5km deep and undersea drilling sites to construct models of the ecosystem and estimate how much living carbon it might contain.

The results suggest 70% of Earth’s bacteria and archaea exist in the subsurface, including barbed Altiarchaeales that live in sulphuric springs and Geogemma barossii, a single-celled organism found at 121°C hydrothermal vents at the bottom of the sea.

One organism found 2.5km below the surface has been buried for millions of years and may not rely at all on energy from the sun. Instead, the methanogen has found a way to create methane in this low energy environment, which it may not use to reproduce or divide, but to replace or repair broken parts.

Lloyd said: “The strangest thing for me is that some organisms can exist for millennia. They are metabolically active but in stasis, with less energy than we thought possible of supporting life.”

Rick Colwell, a microbial ecologist at Oregon State University, said the timescales of subterranean life were completely different. Some microorganisms have been alive for thousands of years, barely moving except with shifts in the tectonic plates, earthquakes or eruptions.

Source: Scientists identify vast underground ecosystem containing billions of micro-organisms | Science | The Guardian