The Linkielist

Linking ideas with the world


Humans may be able to live on Mars within walls of aerogel – a wonder material that can trap heat and block radiation

We may be able to survive and live on Mars in regions protected by thin ceilings of silica aerogel, a lightweight material that insulates against heat loss and blocks harmful ultraviolet radiation while weighing almost nothing.

Researchers at Harvard University in the US, NASA, and the University of Edinburgh in Scotland envision areas of Mars enclosed by two- to three-centimetre-thick walls of silica aerogel. The strange material is ghost-like in appearance, and although it’s up to 99.98 per cent air, it’s actually a solid.

Aerogels come in various shapes and forms, each with its own mix of properties. Typically, they are made by sucking the liquid out of a gel using a supercritical dryer. The resulting aerogel consists of pockets of air, making it ultralight and good at trapping heat. It can also be made hydrophobic or semi-porous as needed.

The semitransparent solid, therefore, has odd properties that may just help humans colonize the Red Planet. The solid silica can be manufactured to block out, say, dangerous UV rays while allowing visible light through.

However, it’s the trapping of heat that is most interesting here. When the boffins shone a lamp onto a thin block of silica aerogel, measuring less than 3cm thick, they found that the surface beneath the material warmed up to 65 degrees Celsius (that’s 150 degrees Fahrenheit for you Americans), high enough, of course, to melt ice into water. The results were published in Nature Astronomy on Monday.

Welcome to the Hotel Aerogel

The academics reckon that if a region of ice near the higher latitudes of Mars were covered with a layer of aerogel, the frosty ground would melt to produce liquid water as the environment heats up. It’d also be warm enough for humans to live and farm food in order to survive in the otherwise harsh, arid conditions found elsewhere on the planet.

“The ideal place for a Martian outpost would have plentiful water and moderate temperatures,” said Laura Kerber, co-author of the paper and a geologist at NASA’s Jet Propulsion Laboratory. “Mars is warmer around the equator, but most of the water ice is located at higher latitudes. Building with silica aerogel would allow us to artificially create warm environments where there is already water ice available.”

Source: Humans may be able to live on Mars within walls of aerogel – a wonder material that can trap heat and block radiation • The Register

Machine learning has been used to automatically translate long-lost languages

Jiaming Luo and Regina Barzilay from MIT, together with Yuan Cao from Google’s AI lab in Mountain View, California, have developed a machine-learning system capable of deciphering lost languages, and they’ve demonstrated it by having it decipher Linear B—the first time this has been done automatically. The approach they used was very different from the standard machine translation techniques.

First some background. The big idea behind machine translation is the understanding that words are related to each other in similar ways, regardless of the language involved.

So the process begins by mapping out these relations for a specific language. This requires huge databases of text. A machine then searches this text to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Indeed, the word can be thought of as a vector within this space. And this vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.
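To make that concrete, here is a minimal toy sketch (not the authors' pipeline) of building a co-occurrence "signature" for each word from a tiny corpus; real systems learn dense embeddings from enormous text databases rather than using raw counts.

```python
from collections import Counter, defaultdict

# Toy corpus; real systems use huge text databases.
corpus = "the king rules the land . the queen rules the land".split()

window = 2  # how many neighbours on each side count as "next to"
cooc = defaultdict(Counter)

for i, word in enumerate(corpus):
    lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
    for j in range(lo, hi):
        if j != i:
            cooc[word][corpus[j]] += 1

# Each word's signature: how often it appears near every other word.
print(cooc["king"])   # Counter({'the': 2, 'rules': 1})
print(cooc["queen"])  # similar neighbours -> similar signature
```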

These vectors obey some simple mathematical rules. For example: king – man + woman = queen. And a sentence can be thought of as a set of vectors that follow one after the other to form a kind of trajectory through this space.
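The king/queen trick can be sanity-checked directly on word vectors. Here is a toy sketch with made-up three-dimensional vectors (real embeddings have hundreds of dimensions learned from text):

```python
import numpy as np

# Made-up toy vectors purely for illustration; real embeddings are learned.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = vecs["king"] - vecs["man"] + vecs["woman"]

# The nearest word to (king - man + woman) should be "queen".
best = max(vecs, key=lambda w: cosine(vecs[w], target))
print(best)  # queen
```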

The key insight enabling machine translation is that words in different languages occupy the same points in their respective parameter spaces. That makes it possible to map an entire language onto another language with a one-to-one correspondence.

In this way, the process of translating sentences becomes the process of finding similar trajectories through these spaces. The machine never even needs to “know” what the sentences mean.

This process relies crucially on the large data sets. But a couple of years ago, a German team of researchers showed how a similar approach with much smaller databases could help translate much rarer languages that lack the big databases of text. The trick is to find a different way to constrain the machine approach that doesn’t rely on the database.

Now Luo and co have gone further to show how machine translation can decipher languages that have been lost entirely. The constraint they use has to do with the way languages are known to evolve over time.

The idea is that any language can change in only certain ways—for example, the symbols in related languages appear with similar distributions, related words have the same order of characters, and so on. With these rules constraining the machine, it becomes much easier to decipher a language, provided the progenitor language is known.
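As a very rough illustration of one such constraint – that related words keep roughly the same order of characters – here is a toy sketch that scores candidate cognates by character-level similarity. The actual paper builds these constraints into a neural sequence model; this heuristic is only for flavour.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; related words should score high."""
    return SequenceMatcher(None, a, b).ratio()

# A transliterated Linear B token and candidate words in the known progenitor.
lost_word = "ko-no-so"
candidates = ["knossos", "athena", "wanax"]

best = max(candidates, key=lambda c: similarity(lost_word.replace("-", ""), c))
print(best)  # knossos
```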

Luo and co put the technique to the test with two lost languages, Linear B and Ugaritic. Linguists know that Linear B encodes an early version of ancient Greek and that Ugaritic, which was discovered in 1929, is an early form of Hebrew.

Given that information and the constraints imposed by linguistic evolution, Luo and co’s machine is able to translate both languages with remarkable accuracy. “We were able to correctly translate 67.3% of Linear B cognates into their Greek equivalents in the decipherment scenario,” they say. “To the best of our knowledge, our experiment is the first attempt of deciphering Linear B automatically.”

That’s impressive work that takes machine translation to a new level. But it also raises the interesting question of other lost languages—particularly those that have never been deciphered, such as Linear A.

In this paper, Linear A is conspicuous by its absence. Luo and co do not even mention it, but it must loom large in their thinking, as it does for all linguists. Yet significant breakthroughs are still needed before this script becomes amenable to machine translation.

For example, nobody knows what language Linear A encodes. Attempts to decipher it into ancient Greek have all failed. And without the progenitor language, the new technique does not work.

But the big advantage of machine-based approaches is that they can test one language after another quickly without becoming fatigued. So it’s quite possible that Luo and co might tackle Linear A with a brute-force approach—simply attempt to decipher it into every language for which machine translation already operates.

 

Source: Machine learning has been used to automatically translate long-lost languages – MIT Technology Review

Bulb smart meters in England wake up from comas miraculously speaking fluent Welsh

Smart meters in England are suddenly switching to Welsh language displays, much to the confusion of owners.

Several people report that the meters, supplied by energy provider Bulb, are spontaneously opting for Welsh instead of English, sometimes after freezing and being restarted. This would be unhelpful even for many residents of Wales, but the problem has been seen as far east as West Sussex.

The issue is fixable, although choosing the right options is easier if you speak a bit of Welsh. Anyone remember the fun of switching your mate’s Nokia to Finnish language menus?

This seems to be the latest in a string of issues suffered by Bulb, although to be fair the firm is not the first to be stumped by the stupidity of smart meters.

Last month it updated customers who were having problems with the meters’ “In-Home Display” – a small screen connected to the meter that is meant to show electricity usage and costs. Bulb now reckons 85 per cent of these devices will link to the meter immediately: “And the majority of those that don’t connect first time can now be fixed remotely.”

It is also dealing with a problem of automatic monthly readings not appearing on customer accounts by switching to daily readings, which apparently go through a different process.

Source: Bulb smart meters in England wake up from comas miraculously speaking fluent Welsh • The Register

Evite Invites Over 100 Million People to Their Data Breach – with cleartext passwords

“In April 2019, the social planning website for managing online invitations Evite identified a data breach of their systems. Upon investigation, they found unauthorised access to a database archive dating back to 2013. The exposed data included a total of 101 million unique email addresses, most belonging to recipients of invitations. Members of the service also had names, phone numbers, physical addresses, dates of birth, genders and passwords stored in plain text exposed. The data was provided to HIBP by a source who requested it be attributed to “JimScott.Sec@protonmail.com”.”

Source: Evite Invites Over 100 Million People to Their Data Breach

It’s 2019 and people still store personal information in plain text?!

Search your mailbox for Evite messages – even if you never created an account, you may have received invitations from others, in which case your details are also in the data breach.

Good luck deleting someone’s private info from a trained neural network – it’s likely to bork the whole thing

AI systems have weird memories. The machines desperately cling onto the data they’ve been trained on, making it difficult to delete bits of it. In fact, they often have to be completely retrained from scratch with the newer, smaller dataset.

That’s no good in an age where individuals can request their personal data be removed from company databases under the EU GDPR rules. How do you remove a person’s data from a machine learning model that has already been trained? A 2017 research paper by law and policy academics hinted that it may even be impossible.

“Deletion is difficult because most machine learning models are complex black boxes so it is not clear how a data point or a set of data point is really being used,” James Zou, an assistant professor of biomedical data science at Stanford University, told The Register.

In order to leave out specific data, models will often have to be retrained with the newer, smaller dataset. That’s a pain as it costs money and time.

The research, led by Antonio Ginart, a PhD student at Stanford University, studied the problem of trying to delete data from machine learning models and managed to craft two “provably deletion efficient algorithms” to remove data across six different datasets for k-means clustering models, a machine learning method for grouping data into clusters. The results were released in a paper on arXiv this week.

The trick is to assess the impacts of deleting data from a trained model. In some cases, it can lead to a decrease in the system’s performance.

“First, quickly check to see if deleting a data point would have any effect on the machine learning model at all – there are settings where there’s no effect and so we can perform this check very efficiently. Second, see if the data to be deleted only affects some local component of the learning system and just update locally,” Zou explained.
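Here is a minimal sketch of that second idea for k-means (not the paper's certified algorithms): a centroid is just the mean of its cluster's points, so deleting one point only needs a local update of that single centroid, as long as cluster assignments don't shift.

```python
import numpy as np

def delete_point(centroids, counts, assignment, x):
    """Remove point x from its cluster and update only that centroid.

    centroids:  (k, d) array of cluster means
    counts:     (k,) number of points per cluster
    assignment: index of the cluster x currently belongs to
    """
    c = assignment
    n = counts[c]
    if n <= 1:
        raise ValueError("cannot empty a cluster in this toy sketch")
    # mean_new = (n * mean_old - x) / (n - 1): undo x's contribution locally.
    centroids[c] = (n * centroids[c] - x) / (n - 1)
    counts[c] = n - 1
    return centroids, counts

# Toy usage
centroids = np.array([[1.0, 1.0], [5.0, 5.0]])
counts = np.array([3, 4])
x = np.array([0.0, 0.0])  # point to delete, currently assigned to cluster 0
centroids, counts = delete_point(centroids, counts, 0, x)
print(centroids[0])  # [1.5 1.5]
```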

It seems to work okay for k-means clustering models under certain circumstances, when the data can be more easily separated. But when it comes to systems that aren’t deterministic like modern deep learning models, it’s incredibly difficult to delete data.

Zou said it isn’t entirely impossible, however. “We don’t have tools just yet but we are hoping to develop these deletion tools in the next few months.” ®

Source: Good luck deleting someone’s private info from a trained neural network – it’s likely to bork the whole thing • The Register

Galileo Satellite Positioning Service Outage

The Galileo satellite positioning service is currently unavailable, with all satellites marked as in outage. Galileo is the European-built and operated alternative to GPS. The outage is being attributed to problems at the Precise Timing Facility in Italy. The availability of multiple Global Navigation Satellite Systems (GNSS) and the relative newness of Galileo (the system is still under construction and only the newest GNSS receivers will track it) mean that few users are likely to see an impact, but the problem highlights our potential vulnerability to the loss of positioning and timing services available through GNSS.

Source: Galileo Satellite Positioning Service Outage – Slashdot

Microsoft Office 365: Banned in German schools over privacy fears

Schools in the central German state of Hesse have been told it’s now illegal to use Microsoft Office 365.

The state’s data-protection commissioner has ruled that using the popular cloud platform’s standard configuration exposes personal information about students and teachers “to possible access by US officials”.

That might sound like just another instance of European concerns about data privacy or worries about the current US administration’s foreign policy.

But in fact the ruling by the Hesse Office for Data Protection and Information Freedom is the result of several years of domestic debate about whether German schools and other state institutions should be using Microsoft software at all.

Besides the details that German users provide when they’re working with the platform, Microsoft Office 365 also transmits telemetry data back to the US.

Last year, investigators in the Netherlands discovered that that data could include anything from standard software diagnostics to user content from inside applications, such as sentences from documents and email subject lines. All of which contravenes the EU’s General Data Protection Regulation, or GDPR, the Dutch said.

Germany’s own Federal Office for Information Security also recently expressed concerns about telemetry data that the Windows operating system sends.

To allay privacy fears in Germany, Microsoft invested millions in a German cloud service, and in 2017 Hesse authorities said local schools could use Office 365. If German data remained in the country, that was fine, Hesse’s data privacy commissioner, Michael Ronellenfitsch, said.

But in August 2018 Microsoft decided to shut down the German service. So once again, data from local Office 365 users would be data transmitted over the Atlantic. Several US laws, including 2018’s CLOUD Act and 2015’s USA Freedom Act, give the US government more rights to ask for data from tech companies.

It’s actually simple, Austrian digital-rights advocate Max Schrems, who took a case on data transfers between the EU and US to the highest European court this week, tells ZDNet.

School pupils are usually not able to give consent, he points out. “And if data is sent to Microsoft in the US, it is subject to US mass-surveillance laws. This is illegal under EU law.”

Source: Microsoft Office 365: Banned in German schools over privacy fears | ZDNet

Microsoft tells resellers: ‘We listened to you, and we have acted’ (PS: Plz keep making us money)

Faced with continued rumbles of discontent from its reseller network on the eve of its Inspire conference, Microsoft has climbed down from plans to pull free software licences from its channel chums.

Doubtless fearful of a keynote sabotaged by a baying mob of angry resellers, Microsoft corporate veep for commercial partners Gavriella Schuster was tasked with the job of backing down.

Thanking its besuited middlemen and woman for “sharing your feedback with us”, Schuster confirmed the kindly corporation had “made the decision to roll back all planned changes related to internal use rights and competency timelines”.

So that 1 July 2020 retirement of the internal use rights? Not going to happen. For now.

Schuster blustered that “a thorough review” had taken place over the, er, days since the company dispensed the bad news and said: “We listened to you, and we have acted.”

The veep sadly missed out the words: “We looked at what annoying those who sell our stuff would do to our bottom line” in the latter comment. Fixed it for you.

Source: Microsoft tells resellers: ‘We listened to you, and we have acted’ (PS: Plz keep making us money) • The Register

Bitpoint cryptocurrency exchange hacked for $32 million

Japan-based cryptocurrency exchange Bitpoint announced it lost 3.5 billion yen (roughly $32 million) worth of cryptocurrency assets after a hack that happened late yesterday, July 11.

The exchange suspended all deposits and withdrawals this morning to investigate the hack, it said in a press release.

Thoroughly compromised

In a more detailed document released by RemixPoint, the legal entity behind Bitpoint, the company said that hackers stole funds from both its “hot” and “cold” wallets. This suggests the exchange’s network was thoroughly compromised.

Hot wallets are used to store funds for current transactions, while the cold wallets are offline devices storing emergency and long-term funds.

Bitpoint reported the attackers stole funds in five cryptocurrencies: Bitcoin, Bitcoin Cash, Litecoin, Ripple, and Ethereum.

The exchange said it detected the hack because of errors related to the remittance of Ripple funds to customers. Twenty-seven minutes after detecting the errors, Bitpoint admins realized they had been hacked, and three hours later, they discovered thefts from other cryptocurrency assets.

Another three and a half hours later, after a meeting with management, the exchange shut down, and law enforcement was notified.

Two-thirds of stolen funds belonged to customers

The exchange also said that 2.5 billion yen ($23 million) of the total 3.5 billion yen ($32 million) that were stolen were customer funds, while the rest were funds owned by the exchange itself, as reserve funds and profits from past activity.

Source: Bitpoint cryptocurrency exchange hacked for $32 million | ZDNet

FTC Fines Facebook $5 Billion for Cambridge Analytica – not very much considering earnings – and does not curtail future breaches

The Federal Trade Commission, which has been investigating Facebook in the wake of its massive Cambridge Analytica scandal, has voted to approve levying a massive $5 billion fine against the social media giant, according to reporting in both the Wall Street Journal and the Washington Post. It’s the single largest fine against a tech company by the FTC to date, but its inadequacy to curtail future breaches of this sort already has progressive lawmakers furious.

Facebook was aware of a fine of this magnitude potentially coming down the pike for some time, and braced for a hit between $3 billion and $5 billion. The approval vote—which reportedly split down party lines, with three Republicans voting in favor and two Democrats against—was on the higher end of the expected spectrum.

This is expected to cap the agency’s investigation into the data-mining scandal that compromised up to 87 million Facebook users’ personal data. The data was originally harvested using a seemingly benign quiz app on the platform but was later potentially used by Cambridge Analytica, a political consultancy, for the unrelated purpose of political ad targeting.

[…]

While massive by the standards of tech companies, which too frequently get off with a slap on the wrist for lax data privacy practices that endanger users, the FTC’s fine still represents less than a third of the company’s $15.08 billion in revenue from just the first quarter of this year.

Source: FTC Fines Facebook $5 Billion, Democrats Call It a Failure

Palantir’s Top-Secret User Manual for Cops shows how easily they can find scary amounts of information on you and your friends

Through a public record request, Motherboard has obtained a user manual that gives unprecedented insight into Palantir Gotham (Palantir’s other service, Palantir Foundry, is an enterprise data platform), which is used by law enforcement agencies like the Northern California Regional Intelligence Center. The NCRIC serves around 300 communities in northern California and is what is known as a “fusion center,” a Department of Homeland Security intelligence center that aggregates and investigates information from state, local, and federal agencies, as well as some private entities, into large databases that can be searched using software like Palantir.

Fusion centers have become a target of civil liberties groups in part because they collect and aggregate data from so many different public and private entities. The US Department of Justice’s Fusion Center Guidelines list the following as collection targets:

[Chart: fusion center collection targets. Data via US Department of Justice; chart via the Electronic Privacy Information Center.]
[Flow chart: how cops can begin to search for records relating to a single person.]

The guide doesn’t just show how Gotham works. It also shows how police are instructed to use the software. The guide appears to have been made by Palantir specifically for California law enforcement, because it includes examples specific to California. We don’t know exactly what information is excluded, or what changes have been made since the document was first created. The first eight pages that we received in response to our request are undated, but the remaining twenty-one pages were copyrighted in 2016. (Palantir did not respond to multiple requests for comment.)

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that’s associated with a license plate, they can use automatic license plate reader data to find out where they’ve been, and when they’ve been there. This can give a complete account of where someone has driven over any time period.
  • With a name, police can also find a person’s email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it’s in the agency’s database.
  • The software can map out a suspect’s family members and business associates, and, theoretically, find the above information about them, too.

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge over any suspect they decide to surveil.

[…]

In order for Palantir to work, it has to be fed data. This can mean public records like business registries, birth certificates, and marriage records, or police records like warrants and parole sheets. Palantir would need other data sources to give police access to information like emails and bank account numbers.

“Palantir Law Enforcement supports existing case management systems, evidence management systems, arrest records, warrant data, subpoenaed data, RMS or other crime-reporting data, Computer Aided Dispatch (CAD) data, federal repositories, gang intelligence, suspicious activity reports, Automated License Plate Reader (ALPR) data, and unstructured data such as document repositories and emails,” Palantir’s website says.

Some data sources—like marriage, divorce, birth, and business records—also implicate other people that are associated with a person personally or through family. So when police are investigating a person, they’re not just collecting a dragnet of emails, phone numbers, business relationships, travel histories, etc. about one suspect. They’re also collecting information for people who are associated with this suspect.
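To see why such joins sweep in the people around a suspect, here is a purely hypothetical toy aggregation over a few record types – the names, fields, and sources are invented for illustration, not Palantir's schema.

```python
# Hypothetical records from different sources, all made up for illustration.
marriage_records = [{"person": "A. Suspect", "spouse": "B. Partner"}]
business_records = [{"owner": "A. Suspect", "co_owner": "C. Associate"}]
plate_reads = [{"plate_owner": "B. Partner", "location": "5th & Main", "time": "2019-07-01 08:14"}]

def associates_of(name):
    """Everyone linked to `name` through any record, even if never investigated themselves."""
    linked = set()
    for r in marriage_records:
        if r["person"] == name:
            linked.add(r["spouse"])
    for r in business_records:
        if r["owner"] == name:
            linked.add(r["co_owner"])
    return linked

# Investigating one person pulls in data about the people around them too.
for person in associates_of("A. Suspect"):
    hits = [r for r in plate_reads if r["plate_owner"] == person]
    print(person, hits)
```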

Source: Revealed: This Is Palantir’s Top-Secret User Manual for Cops – VICE

It turns out Bystanders do Help Strangers in Need

Research dating back to the late 1960s documents how the great majority of people who witness crimes or violent behavior refuse to intervene.

Psychologists dubbed this non-response as the “bystander effect”—a phenomenon which has been replicated in scores of subsequent psychological studies. The “bystander effect” holds that the reason people don’t intervene is because we look to one another. The presence of many bystanders diffuses our own sense of personal responsibility, leading people to essentially do nothing and wait for someone else to jump in.

Past studies have used police reports to estimate the effect, but results ranged from interventions occurring in 11 percent to 74 percent of incidents. Now, widespread surveillance cameras allow for a new method of assessing real-life human interactions. A new study published this year in American Psychologist finds that this well-established bystander effect may largely be a myth. The study uses footage of more than 200 incidents from surveillance cameras in Amsterdam; Cape Town; and Lancaster, England.

Researchers watched footage and coded the nature of the conflict, the number of direct participants in it, and the number of bystanders. Bystanders were defined as intervening if they attempted a variety of acts, including pacifying gestures, calming touches, blocking contact between parties, consoling victims of aggression, providing practical help to a physically harmed victim, or holding, pushing, or pulling an aggressor away. Each event had an average of 16 bystanders and lasted slightly more than three minutes.

The study finds that in nine out of 10 incidents, at least one bystander intervened, with an average of 3.8 interveners. There was also no significant difference across the three countries and cities, even though they differ greatly in levels of crime and violence.

Instead of more bystanders creating an immobilizing “bystander effect,” the study actually found the more bystanders there were, the more likely it was that at least someone would intervene to help. This is a powerful corrective to the common perception of “stranger danger” and the “unknown other.” It suggests that people are willing to self-police to protect their communities and others. That’s in line with the research of urban criminologist Patrick Sharkey, who finds that stronger neighborhood organizations, not a higher quantity of policing, have fueled the Great Crime Decline.

Source: How Often Will Bystanders Help Strangers in Need? – CityLab

Carbon nanotube device channels heat into light, could increase solar panel efficiency

The ever-more-humble carbon nanotube may be just the device to make solar panels—and anything else that loses energy through heat—far more efficient.

Rice University scientists are designing arrays of aligned single-wall carbon nanotubes to channel mid-infrared radiation (aka heat) and greatly raise the efficiency of solar energy systems.

Gururaj Naik and Junichiro Kono of Rice’s Brown School of Engineering introduced their technology in ACS Photonics.

Their invention is a hyperbolic thermal emitter that can absorb intense heat that would otherwise be spewed into the atmosphere, squeeze it into a narrow bandwidth and emit it as light that can be turned into electricity.

The discovery rests on another by Kono’s group in 2016 when it found a simple method to make highly aligned, wafer-scale films of closely packed nanotubes.

[…]

The aligned nanotube films are conduits that absorb waste heat and turn it into narrow-bandwidth photons. Because electrons in nanotubes can only travel in one direction, the aligned films are metallic in that direction while insulating in the perpendicular direction, an effect Naik called hyperbolic dispersion. Thermal photons can strike the film from any direction, but can only leave via one.

“Instead of going from heat directly to electricity, we go from heat to light to electricity,” Naik said. “It seems like two stages would be more efficient than three, but here, that’s not the case.”

[…]

Naik said adding the emitters to standard solar cells could boost their efficiency from the current peak of about 22%. “By squeezing all the wasted thermal energy into a small spectral region, we can turn it into electricity very efficiently,” he said. “The theoretical prediction is that we can get 80% efficiency.”

Nanotube films suit the task because they stand up to temperatures as high as 1,700 degrees Celsius (3,092 degrees Fahrenheit). Naik’s team built proof-of-concept devices that allowed them to operate at up to 700 C (1,292 F) and confirm their narrow-band output. To make them, the team patterned arrays of submicron-scale cavities into the chip-sized films.

Source: Carbon nanotube device channels heat into light

Twitter is back after a brief outage

Twitter is back online for some people after being down for an hour or so Thursday afternoon. Tweets weren’t loading in the app or on desktop for several Engadget editors, while Down Detector had a massive spike in outage reports.


Twitter said the outage was due to “an internal system change” and it’s fixing the issue. It said everything should be up and running again soon.

Source: Twitter is back after a brief outage (updated)

Reddit Is Down as the Summer of Outages Continues

Users began to report outages a little over an hour ago. For this writer, the problem first presented as weirdness with Reddit’s login server and front page timeline, but it quickly worsened. Now navigating to reddit.com is rewarding many with 503 errors.

The outage seems to have hit users visiting Reddit on desktop the hardest. Navigating to Reddit through its app on Android and iOS worked just fine for several Gizmodo staffers, and even Reddit’s status page claims all systems are operational, though it is showing a sharp uptick in error rates for reddit.com.

Screenshot: Reddit Status Detector

If you feel like the internet has been breaking more than usual, you’re not alone. There have been a number of significant outages over the last month.

Google has had at least two major outages, as has Facebook. AT&T also experienced a major outage this month. Hell, even Down Detector has been down.

Source: Reddit Is Down as the Summer of Outages Continues

Microsoft stirs suspicions by adding telemetry spyware to security-only update

Under Microsoft’s rules, what it calls “Security-only updates” are supposed to include, well, only security updates, not quality fixes or diagnostic tools. Nearly three years ago, Microsoft split its monthly update packages for Windows 7 and Windows 8.1 into two distinct offerings: a monthly rollup of updates and fixes and, for those who want only those patches that are absolutely essential, a Security-only update package.

What was surprising about this month’s Security-only update, formally titled the “July 9, 2019—KB4507456 (Security-only update),” is that it bundled the Compatibility Appraiser, KB2952664, which is designed to identify issues that could prevent a Windows 7 PC from updating to Windows 10.

Among the fierce corps of Windows Update skeptics, the Compatibility Appraiser tool is to be shunned aggressively. The concern is that these components are being used to prepare for another round of forced updates or to spy on individual PCs. The word telemetry appears in at least one file, and for some observers it’s a short step from seemingly innocuous data collection to outright spyware.

My longtime colleague and erstwhile co-author, Woody Leonhard, noted earlier today that Microsoft appeared to be “surreptitiously adding telemetry functionality” to the latest update:

With the July 2019-07 Security Only Quality Update KB4507456, Microsoft has slipped this functionality into a security-only patch without any warning, thus adding the “Compatibility Appraiser” and its scheduled tasks (telemetry) to the update. The package details for KB4507456 say it replaces KB2952664 (among other updates).

Come on Microsoft. This is not a security-only update. How do you justify this sneaky behavior? Where is the transparency now.

I had the same question, so I spent the afternoon poking through update files and security bulletins and trying to get an on-the-record response from Microsoft. I got a terse “no comment” from Redmond.

Source: Microsoft stirs suspicions by adding telemetry files to security-only update | ZDNet

Once installed, a new scheduled task is added to the system under Microsoft > Windows > Application Experience.
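If you want to check whether the task has shown up on a Windows 7/8.1 box, something like the sketch below queries Task Scheduler with the stock schtasks tool. The task name used here (“Microsoft Compatibility Appraiser”) is the commonly reported one – treat it as an assumption and adjust for your system.

```python
import subprocess

# Query the Application Experience task folder; requires Windows.
task = r"\Microsoft\Windows\Application Experience\Microsoft Compatibility Appraiser"
result = subprocess.run(
    ["schtasks", "/Query", "/TN", task, "/FO", "LIST", "/V"],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("Compatibility Appraiser task found:")
    print(result.stdout)
else:
    print("Task not found (or schtasks unavailable):", result.stderr.strip())
```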

Windows 10 SFC /scannow Can’t Fix Corrupted Files After Update

Starting today, Windows 10 users are finding that the sfc /scannow feature is no longer working and that it states it found, but could not fix, corrupted Windows Defender PowerShell files.

The Windows System File Checker tool, commonly known as SFC, has a /scannow argument that will check the integrity of all protected Windows system files and repair any issues that are found.

As of this morning, users in a wildersecurity.com thread have started reporting that when they run sfc /scannow, the program is stating that “Windows Resource Protection found corrupt files but was unable to fix some of them.” I too was able to reproduce this issue on a virtual machine with Windows Defender configured as the main antivirus program.
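Per Microsoft's standard SFC guidance, details of which files were flagged end up in the CBS log as lines tagged [SR]. Here is a quick sketch to pull those out (it assumes the default %windir%\Logs\CBS\CBS.log location and may need an elevated prompt):

```python
import os

# Default CBS log location; SFC writes its scan results here as "[SR]" lines.
cbs_log = os.path.join(os.environ.get("WINDIR", r"C:\Windows"), "Logs", "CBS", "CBS.log")

with open(cbs_log, encoding="utf-8", errors="replace") as fh:
    sr_lines = [line.rstrip() for line in fh if "[SR]" in line]

# Show the most recent SFC entries, e.g. which Windows Defender PowerShell
# files were found corrupt and could not be repaired.
for line in sr_lines[-40:]:
    print(line)
```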

Source: Windows 10 SFC /scannow Can’t Fix Corrupted Files After Update

Apple removes Zoom’s dodgy hidden web server on your Mac without telling you – shows who really pwns your machine

Apple has pushed a silent update to Macs, disabling the hidden web server installed by the popular Zoom web-conferencing software.

A security researcher this week went public with his finding that the mechanism used to bypass a Safari prompt before entering a Zoom conference was a hidden local web server.

Jonathan Leitschuh focused largely on the fact that a user’s webcam would likely be ON automatically, meaning that a crafty bit of web coding would give an attacker a peek into your room if you simply visit their site.

But the presence of the web server was a more serious issue, especially since uninstalling Zoom did not remove it and the web server would reinstall the Zoom client – which is malware-like behaviour.
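One way people verified the hidden server was simply to check whether anything was listening on the localhost port Zoom reportedly used – 19421, per the researcher's disclosure; treat the exact number as an assumption. A minimal check:

```python
import socket

# Port reported in the public disclosure of Zoom's hidden local web server;
# treat the exact number as an assumption and adjust if needed.
PORT = 19421

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(1.0)
    listening = s.connect_ex(("127.0.0.1", PORT)) == 0

print(f"port {PORT} is {'open (something is listening)' if listening else 'closed'}")
```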

[…]

On 9 July the company updated its Mac app to remove the local web server “via a prompted update”.

The next day Apple itself took action, by instructing macOS’s built-in antivirus engine to remove the web server on sight from Macs. Zoom CEO Eric Yuan added on Wednesday:

Apple issued an update to ensure the Zoom web server is removed from all Macs, even if the user did not update their Zoom app or deleted it before we issued our July 9 patch. Zoom worked with Apple to test this update, which requires no user interaction.

Source: Wondering how to whack Zoom’s dodgy hidden web server on your Mac? No worries, Apple’s done it for you • The Register

Kind of scary that Apple can just go about removing software from your machine without any notification

Apple disables Walkie Talkie app due to vulnerability that could allow iPhone eavesdropping

Apple has disabled the Apple Watch Walkie Talkie app due to an unspecified vulnerability that could allow a person to listen to another customer’s iPhone without consent, the company told TechCrunch this evening.

Apple has apologized for the bug and for the inconvenience of being unable to use the feature while a fix is made.

[…]

Earlier this year a bug was discovered in the group calling feature of FaceTime that allowed people to listen in before a call was accepted. It turned out that the teen who discovered the bug, Grant Thompson, had attempted to contact Apple about the issue but was unable to get a response. Apple fixed the bug and eventually rewarded Thompson with a bug bounty. This time around, Apple appears to be listening more closely to the reports that come in via its vulnerability tips line and has disabled the feature.

Earlier today, Apple quietly pushed a Mac update to remove a feature of the Zoom conference app that allowed it to work around Mac restrictions to provide a smoother call initiation experience — but that also allowed emails and websites to add a user to an active video call without their permission.

Source: Apple disables Walkie Talkie app due to vulnerability that could allow iPhone eavesdropping | TechCrunch

‘Superhuman’ AI Crushes Poker Pros at Six-Player Texas Hold’em

Computer scientists have developed a card-playing bot, called Pluribus, capable of defeating some of the world’s best players at six-person no-limit Texas hold’em poker, in what’s considered an important breakthrough in artificial intelligence.

Two years ago, a research team from Carnegie Mellon University developed a similar poker-playing system, called Libratus, which consistently defeated the world’s best players at one-on-one Heads-Up, No-Limit Texas Hold’em poker. The creators of Libratus, Tuomas Sandholm and Noam Brown, have now upped the stakes, unveiling a new system capable of playing six-player no-limit Texas hold’em poker, a wildly popular version of the game.

In a series of contests, Pluribus handily defeated its professional human opponents, at a level the researchers described as “superhuman.” When pitted against professional human opponents with real money involved, Pluribus managed to collect winnings at an astounding rate of $1,000 per hour. Details of this achievement were published today in Science.

[…]

For the new study, Brown and Sandholm subjected Pluribus to two challenging tests. The first pitted Pluribus against 13 different professional players—all of whom have earned more than $1 million in poker winnings—in the six-player version of the game. The second test involved matches featuring two poker legends, Darren Elias and Chris “Jesus” Ferguson, each of whom was pitted against five identical copies of Pluribus.

The matches with five humans and Pluribus involved 10,000 hands played over 12 days. To incentivize the human players, a total of $50,000 was distributed among the participants, Pluribus included. The games were blind in that none of the human players were told who they were playing, though each player had a consistent alias used throughout the competition. For the tests involving a lone human and five Pluribuses, each player was given $2,000 for participating and a bonus $2,000 for playing better than their human cohort. Elias and Ferguson both played 5,000 separate hands against their machine opponents.

In all scenarios, Pluribus registered wins with “statistical significance,” and to a degree the researchers referred to as “superhuman.”

“We mean superhuman in the sense that it performs better than the best humans,” said Brown, who is completing his Ph.D. as a research scientist at Facebook AI. “The bot won by about five big blinds per hundred hands of poker (bb/100) when playing against five elite human professionals, which professionals consider to be a very high win rate. To beat elite professionals by that margin is considered a decisive win.”
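For context, the bb/100 figure is just profit normalised by the big-blind size per hundred hands. A quick sketch of the arithmetic, with made-up stakes:

```python
def bb_per_100(profit, big_blind, hands):
    """Win rate in big blinds per hundred hands of poker."""
    return (profit / big_blind) / (hands / 100)

# Hypothetical stakes: a $50/$100 game over the 10,000 hands played in the study.
# Winning $50,000 at those stakes would be the ~5 bb/100 rate quoted above.
print(bb_per_100(profit=50_000, big_blind=100, hands=10_000))  # 5.0
```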

[…]

Before the competition started, Pluribus developed its own “blueprint” strategy, which it did by playing poker with itself for eight straight days.

“Pluribus does not use any human gameplay data to form its strategy,” explained Brown. “Instead, Pluribus first uses self-play, in which it plays against itself over trillions of hands to formulate a basic strategy. It starts by playing completely randomly. As it plays more and more hands against itself, its strategy gradually improves as it learns which actions lead to winning more money. This is all done offline before ever playing against humans.”

Armed with its blueprint strategy, the competitions could begin. After the first bets were placed, Pluribus calculated several possible next moves for each opponent, in a manner similar to how machines play chess and Go. The difference here, however, is that Pluribus was not tasked to calculate the entire game, as that would be “computationally prohibitive,” as noted by the researchers.

“In Pluribus, we used a new way of doing search that doesn’t have to search all the way to the end of the game,” said Brown. “Instead, it can stop after a few moves. This makes the search algorithm much more scalable. In particular, it allows us to reach superhuman performance while only training for the equivalent of less than $150 on a cloud computing service, and playing in real time on just two CPUs.”

[…]

Importantly, Pluribus was also programmed to be unpredictable—a fundamental aspect of good poker gamesmanship. If Pluribus consistently bet tons of money when it figured it had the best hand, for example, its opponents would eventually catch on. To remedy this, the system was programmed to play in a “balanced” manner, employing a set of strategies, like bluffing, that prevented Pluribus’ opponents from picking up on its tendencies and habits.

Source: ‘Superhuman’ AI Crushes Poker Pros at Six-Player Texas Hold’em

Google admits leaked private voice conversations, decides to clamp down on whistleblowers, not improve privacy

Google admitted on Thursday that more than 1,000 sound recordings of customer conversations with the Google Assistant were leaked by some of its partners to a Belgian news site.

[…]

“We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data,” Google product manager of search David Monsees said in a blog post. “Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again”

Monsees said its partners only listen to “around 0.2 percent of all audio snippets” and said they are “not associated with user accounts,” even though VRT was able to figure out who was speaking in some of the clips.

Source: Google admits leaked private voice conversations

NB: the CNBC article states that you can delete old conversations, but we know that’s not the case for transcribed Alexa conversations, and we know that if you delete your shopping emails from Gmail, Google keeps your shopping history.

How American Corporations Are Policing Online Speech Worldwide

In the winter of 2010, a 19-year-old Moroccan man named Kacem Ghazzali logged into his email to find a message from Facebook informing him that a group he had created just a few days prior had been removed from the platform without explanation. The group, entitled “Jeunes pour la séparation entre Religion et Enseignement” (or “Youth for the separation of religion and education”), was an attempt by Ghazzali to organize with other secularist youth in the pious North African kingdom, but it was quickly thwarted. When Ghazzali wrote to Facebook to complain about the censorship, he found his personal profile taken down as well.

Back then, there was no appeals system, but after I wrote about the story, Ghazzali was able to get his accounts back. Others haven’t been so lucky. In the years since, I’ve heard from hundreds of activists, artists, and average folks who found their social media posts or accounts deleted—sometimes for violating some arcane proprietary rule, sometimes at the order of a government or court, other times for no discernible reason at all.

The architects of Silicon Valley’s big social media platforms never imagined they’d someday be the global speech police. And yet, as their market share and global user bases have increased over the years, that’s exactly what they’ve become. Today, the number of people who tweet is nearly the population of the United States. About a quarter of the internet’s total users watch YouTube videos, and nearly one-third of the entire world uses Facebook. Regardless of the intent of their founders, none of these platforms were ever merely a means of connecting people; from their early days, they fulfilled greater needs. They are the newspaper, the marketplace, the television. They are the billboard, the community newsletter, and the town square.

And yet, they are corporations, with their own speech rights and ability to set the rules as they like—rules that more often than not reflect the beliefs, however misguided, of their founders.

Source: How American Corporations Are Policing Online Speech Worldwide

T-Mobile Says Customers Can’t Sue Because It Violates Its ToS

T-Mobile screwed over millions of customers when it collected their geolocation data and sold it to third parties without their consent. Now, two of these customers are trying to pursue a class-action lawsuit against the company for the shady practice, but the telecom giant is using another shady practice to force them to settle their dispute behind closed doors.

On Monday, T-Mobile filed a motion to compel the plaintiffs into arbitration, which would keep the complaint out of a public courtroom. See, when you sign a contract or agree to a company’s terms of service with a forced arbitration clause, you are waiving your right to a trial by jury and oftentimes to pursue a class-action lawsuit at all. Settling a dispute in arbitration means having it heard by a third party behind closed doors. And an arbitration clause is buried in T-Mobile’s fine print.

T-Mobile’s terms of service state that customers do have the option to opt out of arbitration, which is buried within the agreement and states that they “must either complete the opt out form on this website or call toll-free 1-866-323-4405 and provide the information requested.” They also only have 30 days to do so after they have activated their service. After that brief time period, users are no longer eligible to opt out.

The plaintiffs, Shawnay Ray and Kantice Joyner of Maryland, filed the class-action complaint against T-Mobile in May. Verizon, Sprint, and AT&T were all also hit with lawsuits that same month for selling customer location data. “The telecommunications carriers are the beginning of a dizzying chain of data selling, where data goes from company to company, and ultimately ends up in the hands of literally anybody who is looking,” the complaint against T-Mobile states. The comment is largely referring to a Vice investigation that found that the phone carriers sold real-time location data to middlemen and that this data sometimes eventually ended up with bounty hunters.

Source: T-Mobile Says Customers Can’t Sue Because It Violates Its ToS

Google contractors are secretly listening to your Assistant and Home recordings

Not only is your Google Home device listening to you, a new report suggests there might be a Google contractor who’s listening as well. Even if you didn’t ask your device any questions, it’s still sending what you say to the company, which allows an actual person to collect data from it.

[…]

VRT, with the help of a whistleblower, was able to listen to some of these clips and subsequently heard enough to discern the addresses of several Dutch and Belgian people using Google Home — in spite of the fact some hadn’t even uttered the words “Hey Google,” which are supposed to be the device’s listening trigger.

The person who leaked the recordings was working as a subcontractor to Google, transcribing the audio files for subsequent use in improving its speech recognition. They got in touch with VRT after reading about Amazon Alexa keeping recordings indefinitely.

According to the whistleblower, the recordings presented to them are meant to be carefully annotated, with notes included about the speaker’s presumed identity and age. From the sound of the report, these transcribers have heard just about everything. Personal information? Bedroom activities? Domestic violence? Yes, yes, and yes.

While VRT only listened to recordings from Dutch and Belgian users, the platform the whistleblower showed them had recordings from all over the world – which means there are probably thousands of other contractors listening to Assistant recordings.

The VRT report states that the Google Home Terms of Service don’t mention that recordings might be listened to by other humans.

The report did say the company tries to anonymize the recordings before sending them to contractors, identifying them by numbers rather than user names. But again, VRT was able to pick up enough data from the recordings to find the addresses of the users in question, and even confront some of the users in the recordings – to their great dismay.

Google’s defense to VRT was that the company only transcribes and uses “about 0.2% of all audio clips,” to improve their voice recognition technology.

Source: Google contractors are secretly listening to your Assistant recordings

Prenda Law bosses in jail for seeding porn videos to d/l sites and then suing the downloaders

One of the former attorneys behind dodgy copyright-demand factory Prenda Law has been sentenced to 60 months in prison. Yes, the same Prenda Law that seeded file-sharing networks with smut flicks it owned the rights to in order to extract eye-watering copyright infringement settlements from downloaders.

Judge Joan Ericksen, of a US federal district court in Minnesota, on Tuesday this week handed down the five-year term, along with two years of supervised release and a $1,541,527.37 restitution bill, after John Steele copped to one count each of conspiracy to commit money laundering and conspiracy to commit mail and wire fraud. While technically given two 60-month sentences, Steele, 48, is being allowed to serve both terms at the same time.

Steele, who has since been disbarred, admitted that from 2011 to 2014 he and co-conspirator Paul Hansmeier, operating as Prenda Law, set up a series of shell companies and studios that either purchased the rights to existing pornographic films or funded the making of original films with the intent of anonymously sticking the dirty movies on the Pirate Bay.

The duo then tracked down people who had downloaded the films and threatened them with copyright infringement suits unless the target agreed to pay out a $3,000 settlement. When the piracy scam started to flounder, the pair took things a step further by accusing targets of hacking their shell companies’ machines.

“To facilitate their phony ‘hacking’ lawsuits, the defendants recruited individuals who had been caught downloading pornography from a file-sharing website, to act as ruse ‘defendants’,” US prosecutors noted.

“These ruse defendants agreed to be sued and permit Steele and Hansmeier to conduct early discovery against their supposed ‘co-conspirators’ in exchange for Steele and Hansmeier waiving their settlement fees.”

Both lawyers would eventually be found out, and charged with fraud and money laundering for their roles in the scheme. By the time the operation was dismantled, it is estimated the duo was able to extort nearly $3m in payouts from randy web-surfers.

While five years behind bars can hardly be considered a slap on the wrist, Steele’s willingness to cooperate with authorities allowed him to win a considerably lighter term than his co-conspirator 37-year-old Hansmeier, who last month was sentenced to 14 years incarceration for convictions on the same set of charges. ®

Source: Prenda Law boss John Steele to miss 2020 Olympics… unless they show it in prison • The Register