It’s been about a year since users of Canadian cryptocurrency exchange QuadrigaCX were informed that the company’s CEO had unexpectedly died, taking the password that unlocked most of the money in their accounts with him to the grave. And now, those clients want to know what’s inside that grave.
The majority of QuadrigaCX’s holdings were kept offline in “cold storage,” with a password known only by 30-year-old CEO Gerald Cotten. On January 14, the company posted a Facebook note announcing Cotten had died about a month earlier “due to complications with Crohn’s disease” while on a trip to India “where he was opening an orphanage to provide a home and safe refuge for children in need.”
The news meant that 76,000 people collectively lost cryptocurrency and cash amounting to about $163 million USD, according to Bloomberg. The story became more suspicious in June when a bankruptcy monitor revealed that Cotten funneled most of the money into fraudulent accounts and spent much of it on his wife and himself. Growing skepticism around the mysterious death has driven lawyers representing QuadrigaCX users to request that Cotten’s grave be exhumed.
On Friday, the Nova Scotia Supreme Court-appointed lawyers sent a letter asking Canadian police to conduct an autopsy on the body in Cotten’s grave “to confirm both its identity and the cause of death” citing the “questionable circumstances surrounding Mr. Cotten’s death” and “the need for certainty around the question of whether Mr. Cotten is in fact deceased.”
Richard Niedermayer, a lawyer representing Cotten’s wife Jennifer Robertson, told the New York Times in an email that Robertson was “heartbroken to learn” about the exhumation request, adding that Cotten’s death “should not be in doubt.”
The QuadrigaCX users’ counsel is asking that the exhumation and autopsy happen by spring 2020 due to “decomposition concerns.”
Stung by an article examining Amazon Web Services’ market dominance on Monday, AWS VP Andi Gutmans fired back, complaining that the reporter ignored flattering comments from AWS partners and that the claim that AWS is “strip-mining” open source is “silly and off-base.”
“The journalist largely ignores the many positive comments he got from partners because it’s not as salacious copy for him,” Gutmans said in a blog post, as if critical reporting carried with it an obligation to publish a specific quota of marketing copy.
And he insisted that Amazon “contributes mightily to open source projects,” and “AWS has not copied anybody’s software or services.”
In its recent lawsuit against AWS, open source biz Elastic, which was cited in the New York Times article and has been public in its disaffection with Amazon, did not accuse AWS of copying its open source search software, which anyone can copy by virtue of its open source license. Rather, the search biz objects to AWS’ use of its trademark in the Amazon Elasticsearch Service.
But others have been more cutting. Following AWS’ launch of DocumentDB, a cloud database compatible with the MongoDB API, MongoDB CEO Dev Ittycheria suggested his company’s product had been imitated and copied.
Indeed, among startups like Confluent, Elastic, MongoDB, Neo4J, and Redis Labs that have been trying to turn open source projects into revenue-generating businesses, concern about AWS – and to a lesser extent Microsoft Azure and Google Cloud – is quite common.
In September, at the Open Core Summit, small companies aspiring to be big ones gathered to figure out how they might make a profit in the shadow of AWS and its peers. Worries about AWS have proven broad enough to attract the attention of the US Federal Trade Commission, said to be exploring a possible antitrust case against AWS.
Despite his dissatisfaction with insufficiently rosy AWS coverage, Gutmans has a point: IT customers want what AWS is offering and they are willing to pay for it, regardless of potential problems like vendor lock-in and unpredictable bills.
Yet in his criticism that open source companies see the market as “a zero-sum game and want to be the only ones able to freely monetize managed services around these open source projects,” he fails to acknowledge that Amazon too takes steps to limit competition and that small firms might need a barrier to entry to convince investors that they can protect their autonomy and revenue stream. Partnering with AWS may be expedient, but that doesn’t give companies a defensible business.
It’s reasonable for companies to want to control their own destiny. But, as open source pioneer Bruce Perens put it in an interview earlier this year, “Open source does not guarantee that you can make money. And that’s the problem that Redis, MongoDB, etc. are all facing right now.”
Back then, the ads only showed for those who were not Office 365 subscribers, but on this occasion they are present for everyone and appear to be non-removable.
The ads are not fixed: when you read your Gmail, the app offers to let you read your Gmail on mobile, and for Outlook.com accounts it offers the Outlook app for mobile.
Most annoyingly, the ads are still present even if you already use the Outlook app on mobile, and they take up considerable vertical space in the menu.
When asked, Microsoft said:
“The ads within the app itself will be displayed regardless of which email address you use it with. It is not removable, but you can submit it as a suggestion within the Feedback Hub on Windows 10 here: https://msft.it/6012TVPXG.”
Ads in the Mail and Calendar apps are of course not evil in and of themselves, but most people feel they paid for the software built into Windows, such as the Mail app, when they purchased the computer. And it appears the ads will show even if you use a non-Microsoft email provider.
A preponderance of weak keys is leaving IoT devices at risk of being hacked, and the problem won’t be an easy one to solve.
This was the conclusion reached by the team at security house Keyfactor, which analyzed a collection of 75 million RSA certificates gathered from the open internet and determined that prime factors were being reused at a far greater rate than they should be, meaning encrypted connections could be broken by attackers who recover a private key from a shared factor.
Crunching the keys on an Azure cloud instance, the team found moduli sharing a common factor at a rate of 1 in 172 (435,000 in total). By comparison, the team also analyzed 100 million certificates collected from Certificate Transparency logs (which are dominated by certificates issued to conventional web servers rather than embedded devices), where they found shared factors in just five certificates, a rate of 1 in 20 million.
The team believes that the reason for this poor entropy is down to IoT devices. Because the embedded gear is often based on very low-power hardware, the devices are unable to properly generate random numbers.
The result is keys that could be easier for an attacker to break, leaving the device and all of its users vulnerable.
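To see why shared factors are fatal, consider a minimal sketch of the batch-GCD idea (the primes below are tiny illustrative values, nothing like real 2048-bit keys): if two RSA moduli share a prime, a single gcd computation recovers it, breaking both keys at once.

```python
from math import gcd

# Three toy RSA moduli; two "devices" reused the same prime because of
# poor entropy, while the third used distinct primes.
shared = 10007
moduli = [
    shared * 10009,   # device A, weak RNG
    shared * 10037,   # device B, same weak RNG
    10039 * 10061,    # device C, healthy
]

# Pairwise gcd: any result greater than 1 is a leaked prime factor.
leaked = {}
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            leaked[(i, j)] = g   # both moduli are now trivially factored
```

At Keyfactor’s scale, naive pairwise gcd over 75 million moduli would be far too slow; batch-GCD techniques based on product trees bring the cost down to something a single cloud instance can handle.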
“The widespread susceptibility of these IoT devices poses a potential risk to the public due to their presence in sensitive settings,” Keyfactor researchers Jonathan Kilgallin and Ross Vasko noted.
“We conclude that device manufacturers must ensure their devices have access to sufficient entropy and adhere to best practices in cryptography to protect consumers.”
ICANN is reviewing the pending sale of the .org domain manager from a nonprofit to a private equity firm and says it could try to block the transfer.
The .org domain is managed by the Public Internet Registry (PIR), which is a subsidiary of the Internet Society, a nonprofit. The Internet Society is trying to sell PIR to private equity firm Ethos Capital.
ICANN (Internet Corporation for Assigned Names and Numbers) said last week that it sent requests for information to PIR in order to determine whether the transfer should be allowed. “ICANN will thoroughly evaluate the responses, and then ICANN has 30 additional days to provide or withhold its consent to the request,” the organization said.
ICANN, which is also a nonprofit, previously told the Financial Times that it “does not have authority over the proposed acquisition,” making it seem like the sale was practically a done deal. But even that earlier statement gave ICANN some wiggle room. ICANN “said its job was simply to ‘assure the continued operation of the .org domain’—implying that it could only stop the sale if the stability and security of the domain-name infrastructure were at risk,” the Financial Times wrote on November 28.
In its newer statement last week, ICANN noted that the .org registry agreement between PIR and ICANN requires PIR to “obtain ICANN’s prior approval before any transaction that would result in a change of control of the registry operator.”
ICANN can raise “reasonable” objection
The registry agreement lets ICANN request transaction details “including information about the party acquiring control, its ultimate parent entity, and whether they meet the ICANN-adopted registry operator criteria (as well as financial resources, and operational and technical capabilities),” ICANN noted. ICANN’s 30-day review period begins after PIR provides those details.
Per the registry agreement, ICANN said it will apply “a standard of reasonableness” when determining whether to allow the change in control over the .org domain. As Domain Name Wire noted in a news story, whether ICANN can block the transfer using that standard “might ultimately have to be determined by the courts.”
The agreement between PIR and ICANN designates PIR as the registry operator for the .org top-level domain. It says that “neither party may assign any of its rights and obligations under this Agreement without the prior written approval of the other party, which approval will not be unreasonably withheld.”
Concern about price hikes, transparency
The pending sale comes a few months after ICANN approved a contract change that eliminates price caps on .org domain names. The sale has raised concerns that Ethos Capital could impose large price hikes.
Amazon.com is blocking its third-party sellers from using FedEx’s ground delivery network for Prime shipments, citing a decline in performance heading into the final stretch of the holiday shopping season. The ban on using FedEx’s Ground and Home services starts this week and will last “until the delivery performance of these ship methods improves,” according to an email Amazon sent Sunday to merchants that was reviewed by The Wall Street Journal. Amazon has stopped using FedEx for its own deliveries in the U.S., but third-party merchants had still been able to use FedEx. Such sellers now account for more than half of the merchandise sold on Amazon’s website, including many items listed as eligible for Prime.
FedEx said the decision impacts a small number of shippers but “limits the options for those small businesses on some of the highest shipping days in history.” The carrier said it still expects to handle a record number of packages this holiday season. “The overall impact to our business is minuscule,” a FedEx spokeswoman said. In its email to merchants, Amazon said sellers can use FedEx’s speedier and more expensive Express service for Prime orders or FedEx Ground for non-Prime shipments.
In the 19th and early 20th centuries, millions of weather observations were carefully made in the logbooks of ships sailing through largely uncharted waters. Written in pen and ink, the logs recorded barometric pressure, air temperature, ice conditions and other variables. Today, volunteers from a project called Old Weather are transcribing these observations, which are fed into a huge dataset at the National Oceanic and Atmospheric Administration. This “weather time machine,” as NOAA puts it, can estimate what the weather was for every day back to 1836, improving our understanding of extreme weather events and the impacts of climate change.
[…] despite the fact that all the drivers generally have to do is simply sit on the internet, available when they’re needed.
Apparently, that isn’t easy enough for Intel. Recently, the chipmaker took the BIOS downloads (boot-level firmware used for hardware initialization in earlier generations of PCs) for a number of its unsupported motherboards off its website, citing the fact that the programs have reached “End of Life” status. While this reflects the fact that Unified Extensible Firmware Interface (UEFI), a later generation of firmware technology used in PCs and Macs, is expected to ultimately replace BIOS entirely, it also leaves lots of users with old gadgets in the lurch. And as Bleeping Computer has noted, it appears to be part of a broader trend of pulling downloads for unsupported hardware from the Intel website, for products that have long outlived their support windows. After all, if something goes wrong, Intel can be sure it’s not liable if a 15-year-old BIOS update borks a system.
In a comment to Motherboard, Intel characterized the approach to and timing of the removals as reflecting industry norms.
[…]
However, this is a problem for folks who take collecting or using old technology seriously, such as those on the forum Vogons, which noticed the issue first. It is also far from anything new: technology companies come and go all the time, and as mergers and redesigns happen, software repositories are often casualties once the technology they serve goes out of date.
A Problem For Consumers & Collectors
Jason Scott, the Internet Archive’s lead software curator, says that Intel’s decision to no longer provide old drivers on its website reflects a tendency by hardware and software developers to ignore their legacies when possible—particularly in the case of consumer software, rather than in the enterprise, where companies’ willingness to pay for updates ensures that needed updates won’t simply sit on the shelf.
[…]
By the mid-90s, companies started to create FTP repositories to distribute software, which had the effect of changing the nature of updates: When the internet made distribution easier and both innovation and security risks grew more advanced, technology companies updated their apps far more often.
FTP’s Pending Fadeout
Many of those FTP servers are still around today, but the news cycle offers a separate, equally disappointing piece of information for those looking for vintage drivers: major web browsers are planning to sunset support for the FTP protocol. Chrome plans to remove support for FTP sites by version 82, which is currently in the development cycle and will land sometime next year. And Firefox maker Mozilla has made rumblings about doing the same thing.
The reasons for doing so, often cited for similar removals of legacy features, come down to security: FTP is a legacy protocol that cannot be secured the way its successor, SFTP, can.
While FTP clients like Cyberduck will likely exist for decades to come, the disconnect from the web browser will make these servers a lot harder to use. The reason goes back to the fact that FTP servers aren’t inherently searchable; the best way to find files on them is with a web-based search engine … such as Google.
[…]
Earlier this year, I was attempting to get a vintage webcam working, and while I was ultimately unable to get it to work, it wasn’t due to lack of software access. See, Logitech actually kept copies of Connectix’s old webcam software on its FTP site. This is software that hasn’t seen updates in more than 20 years; that only supports Windows 3.1, Windows NT, and Windows 95; and that wasn’t on Logitech’s website.
One has to wonder how soon those links will disappear from Google searches once the two most popular desktop browsers remove easy access to those files. And there’s no guarantee that a company is going to keep a server online beyond that point.
“It was just it was this weird experience that FTP sites, especially, could have an inertia of 15 to 20 years now, where they could be running all this time, untouched,” Scott added. “And just every time that, you know, if the machine dies, it goes away.”
A pentad of bit boffins have devised a way to integrate electronic objects into augmented reality applications using their existing visible light sources, like power lights and signal strength indicators, to transmit data.
In a recent research paper, “LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces,” Carnegie Mellon computer scientists Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison describe a technique for fetching data from device LEDs and then using those lights as anchor points for overlaid augmented reality graphics.
As depicted in a video published earlier this week on YouTube, LightAnchors allow an augmented reality scene, displayed on a mobile phone, to incorporate data derived from an LED embedded in the real-world object being shown on screen.
Unlike various visual tagging schemes that have been employed for this purpose, like using stickers or QR codes to hold information, LightAnchors rely on existing object features (device LEDs) and can be dynamic, reading live information from LED modulations.
The reason to do so is that device LEDs can serve not only as a point to affix AR interface elements, but also as an output port for the binary data being translated into human-readable form in the on-screen UI.
“Many devices such as routers, thermostats, security cameras already have LEDs that are addressable,” Karan Ahuja, a doctoral student at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, told The Register.
“For devices such as glue guns and power strips, their LED can be co-opted with a very cheap micro-controller (less than US$1) to blink it at high frame rates.”
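The receiving side of that scheme is easy to sketch. The framing below (a fixed preamble followed by a payload) is an illustrative assumption, not the paper’s actual protocol: the phone samples an LED’s brightness each camera frame, thresholds the samples into bits, and looks for a known preamble before reading the payload.

```python
# Hypothetical framing: a 6-bit preamble marks the start of each message.
PREAMBLE = [1, 0, 1, 0, 1, 1]

def decode(samples, threshold=0.5, payload_len=8):
    """Decode per-frame LED brightness samples into a payload bit list."""
    bits = [1 if s > threshold else 0 for s in samples]
    # Slide over the bit stream looking for the preamble, then read the
    # fixed-length payload that follows it.
    for i in range(len(bits) - len(PREAMBLE) - payload_len + 1):
        if bits[i:i + len(PREAMBLE)] == PREAMBLE:
            start = i + len(PREAMBLE)
            return bits[start:start + payload_len]
    return None  # no complete message seen yet

# Simulated brightness samples: two noise frames, the preamble, then the
# byte 0b01100001 (ASCII 'a').
stream = ([0.1, 0.2]
          + [0.9, 0.1, 0.8, 0.2, 0.9, 0.9]
          + [0.1, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.8])
```

A real implementation would also track the LED’s pixel position across frames so the decoded value stays anchored to the right light.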
Verizon, which bought Yahoo in 2017, has suspended email addresses of archivists who are trying to preserve 20 years of content that will be deleted permanently in a few weeks.
The mass deletion includes files, polls, links, photos, folders, databases, calendars, attachments, conversations, email updates, message digests, and message histories that were uploaded to Yahoo’s servers since the pre-Google 1990s.
Verizon planned to allow users to download their own data from the site’s privacy dashboard, but apparently it has a problem with the work of the Archive Team, which wants to save the content and upload it to the nonprofit Internet Archive, which runs the popular Wayback Machine site.
“Yahoo banned all the email addresses that the Archive Team volunteers had been using to join Yahoo Groups in order to download data,” reported the Yahoo Groups Archive Team.
“Verizon has also made it impossible for the Archive Team to continue using semi-automated scripts to join Yahoo Groups – which means each group must be rejoined one by one, an impossible task (redo the work of the past four weeks over the next 10 days).”
The Yahoo Groups Archive Team argues that it now faces an “80% loss of data” because Verizon is blocking the team members’ email accounts.
The Yahoo Groups site isn’t widely used today, but it was in the past, and the archive the team is trying to save is substantial: it had preserved about 1.8 billion messages as of late 2018.
According to the Archive Team: “As of 2019-10-16 the directory lists 5,619,351 groups. 2,752,112 of them have been discovered. 1,483,853 (54%) have public message archives with an estimated number of 2.1 billion messages (1,389 messages per group on average so far). 1.8 billion messages (86%) have been archived as of 2018-10-28.”
Verizon has issued a statement to the group supporting the Archive Team, telling concerned archivists that “the resources needed to maintain historical content from Yahoo Groups pages is cost-prohibitive, as they’re largely unused”.
The telecoms giant also said the people booted from the service had violated its terms of service and suggested the number of users affected was small.
“Regarding the 128 people who joined Yahoo Groups with the goal to archive them – are those people from Archiveteam.org? If so, their actions violated our Terms of Service. Because of this violation, we are unable reauthorize them,” Verizon said.
As reporters raced this summer to bring new details of Ring’s law enforcement contracts to light, the home security company, acquired last year by Amazon for a whopping $1 billion, strove to underscore the privacy it had pledged to provide users.
Even as its creeping objective of ensuring an ever-expanding network of home security devices eventually becomes indispensable to daily police work, Ring promised its customers would always have a choice in “what information, if any, they share with law enforcement.” While it quietly toiled to minimize what police officials could reveal about Ring’s police partnerships to the public, it vigorously reinforced its obligation to the privacy of its customers—and to the users of its crime-alert app, Neighbors.
However, a Gizmodo investigation, which began last month and ultimately revealed the potential locations of up to tens of thousands of Ring cameras, has cast new doubt on the effectiveness of the company’s privacy safeguards. It further offers one of the most “striking” and “disturbing” glimpses yet, privacy experts said, of Amazon’s privately run, omni-surveillance shroud that’s enveloping U.S. cities.
[…]
Gizmodo has acquired data over the past month connected to nearly 65,800 individual posts shared by users of the Neighbors app. The posts, which reach back 500 days from the point of collection, offer extraordinary insight into the proliferation of Ring video surveillance across American neighborhoods and raise important questions about the privacy trade-offs of a consumer-driven network of surveillance cameras controlled by one of the world’s most powerful corporations.
And not just for those whose faces have been recorded.
Examining the network traffic of the Neighbors app produced unexpected data, including hidden geographic coordinates connected to each post: latitude and longitude with up to six decimal places of precision, enough to pinpoint a spot on the ground to within roughly 10 centimeters.
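The arithmetic behind that precision figure is simple: one degree of latitude spans roughly 111.32 km, so the sixth decimal place of a coordinate resolves to about 11 centimeters.

```python
# One degree of latitude is ~111,320 meters, so a coordinate's sixth
# decimal place (a millionth of a degree) resolves to ~0.11 m.
METERS_PER_DEGREE_LAT = 111_320
precision_m = METERS_PER_DEGREE_LAT / 1_000_000
```

(Longitude resolution shrinks with the cosine of latitude, so away from the equator the east-west precision is finer still.)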
[…]
Guariglia and other surveillance experts told Gizmodo that the ubiquity of the devices gives rise to fears that pedestrians are being recorded strolling in and out of “sensitive buildings,” including certain medical clinics, law offices, and foreign consulates. “I think this is my big concern,” he said, seeing the maps.
Indeed, Gizmodo located cameras in unnerving proximity to such sensitive buildings, including a clinic offering abortion services and a legal office that handles immigration and refugee cases.
It is possible to acquire Neighbors posts from anywhere in the country, in near-real-time, and sort them in any number of ways. Nearly 4,000 posts, for example, reference children, teens, or young adults; two purportedly involve people having sex; eight mention Immigration and Customs Enforcement; and more than 3,600 mention dogs, cats, coyotes, turkeys, and turtles.
While the race of individuals recorded is implicitly suggested in a variety of ways, Gizmodo found 519 explicit references to blackness and 319 to whiteness. A Ring spokesperson said the Neighbors content moderators strive to eliminate unessential references to skin color. Moderators are told to remove posts, they said, in which the sole identifier of a subject is that they’re “black” or “white.”
Ring’s guidelines instruct users: “Personal attributes like race, ethnicity, nationality, religion, sexual orientation, immigration status, sex, gender, age, disability, socioeconomic and veteran status, should never be factors when posting about an unknown person. This also means not referring to a person you are describing solely by their race or calling attention to other personal attributes not relevant to the matter being reported.”
“There’s no question, if most people were followed around 24/7 by a police officer or a private investigator it would bother them and they would complain and seek a restraining order,” said Jay Stanley, senior policy analyst at the American Civil Liberties Union. “If the same is being done technologically, silently and invisibly, that’s basically the functional equivalent.”
[…]
Companies like Ring have long argued—as Google did when it published millions of people’s faces on Street View in 2007—that pervasive street surveillance reveals, in essence, no more than what people have already made public; that there’s no difference between blanketing public spaces in internet-connected cameras and the human experience of walking or driving down the street.
But not everyone agrees.
“Persistence matters,” said Stanley, while acknowledging the ACLU’s long history of defending public photography. “I can go out and take a picture of you walking down the sidewalk on Main Street and publish it on the front of tomorrow’s newspaper,” he said. “That said, when you automate things, it makes it faster, cheaper, easier, and more widespread.”
Stanley and others devoted to studying the impacts of public surveillance envision a future in which Americans’ very perception of reality has become tainted by a kind of omnipresent observer effect. Children will grow up, it’s feared, equating the act of being outside with being recorded. The question is whether existing in this observed state will fundamentally alter the way people naturally behave in public spaces—and if so, how?
“It brings a pervasiveness and systematization that has significant potential effects on what it means to be a human being walking around your community,” Stanley said. “Effects we’ve never before experienced as a species, in all of our history.”
The Ring data has given Gizmodo the means to consider scenarios, no longer purely hypothetical, which exemplify what daily life is like under Amazon’s all-seeing eye. In the nation’s capital, for instance, walking the shortest route from one public charter school to a soccer field less than a mile away, 6th-12th graders are recorded by no fewer than 13 Ring cameras.
Gizmodo found that dozens of users in the same Washington, DC, area have used Neighbors to share videos of children. Thirty-six such posts describe mostly run-of-the-mill mischief—kids with “no values” ripping up parking tape, riding on their “dort-bikes” [sic] and taking “selfies.”
Ring’s guidelines state that users are supposed to respect “the privacy of others,” and not upload footage of “individuals or activities where a reasonable person would expect privacy.” Users are left to interpret this directive themselves, though Ring’s content moderators are supposedly actively combing through the posts and users can flag “inappropriate” posts for review.
Ángel Díaz, an attorney at the Brennan Center for Justice focusing on technology and policing, said the “sheer size and scope” of the data Ring amasses is what separates it from other forms of public photography.
[…]
Guariglia, who’s been researching police surveillance for a decade and holds a PhD in the subject, said he believes the hidden coordinates invalidate Ring’s claim that only users decide “what information, if any,” gets shared with police—whether they’ve yet to acquire it or not.
“I’ve never really bought that argument,” he said, adding that if they truly wanted, the police could “very easily figure out where all the Ring cameras are.”
The Guardian reported in August that Ring once shared maps with police depicting the locations of active Ring cameras. CNET reported last week, citing public documents, that police partnered with Ring had once been given access to “heat maps” that reflected the area where cameras were generally concentrated.
The privacy researcher who originally obtained the heat maps, Shreyas Gandlur, discovered that if police zoomed in far enough, circles appeared around individual cameras. Ring, however, denied that the maps accurately portrayed the locations of customers, saying they displayed only “approximate device density”; it had also instructed police not to share the maps publicly.
Nikon is ending its authorized repair program in early 2020, likely leaving more than a dozen independent repair shops without access to official parts and tools, and cutting the number of places that can fix your camera with official parts down to two Nikon-owned facilities on opposite ends of the U.S.
That means that Nikon’s roughly 15 remaining Authorized Repair Station members are about to become non-authorized repair shops. Since Nikon decided to stop selling genuine parts to non-authorized shops back in 2012, it’s unlikely those stores will continue to have access to the specialty components, tools, software, manuals, and model training Nikon previously provided. But Nikon hasn’t clarified this, so repair shops have been left in the dark.
“This is very big, and we have no idea what’s coming next,” said Cliff Hanks, parts manager for Kurt’s Camera Repair in San Diego, Calif. “We need more information before March 31. We can make contingency plans, start stocking up on stuff, but when will we know for sure?”
In a letter obtained by iFixit, Nikon USA told its roughly 15 remaining Authorized Repair Station members in early November that it would not renew their agreements after March 31, 2020. The letter notes that “The climate in which we do business has evolved, and Nikon Inc. must do the same.” And so, Nikon writes, it must “change the manner in which we make product service available to our end user customers.”
In other words: Nikon’s camera business, slowly bled by smartphones, is going to adopt a repair model that’s even more restrictive than that of Apple or other smartphone makers. If your camera breaks and you want it fixed with official parts or under warranty, you’ll now have to mail it to one of two facilities on opposite ends of the country. This is more than a little inconvenient, especially for professional photographers.
Boring 2D images can be transformed into corresponding 3D models and back into 2D again automatically by machine-learning-based software, boffins have demonstrated.
The code is known as a differentiable interpolation-based renderer (DIB-R), and was built by a group of eggheads led by Nvidia. It uses a trained neural network to take a flat image of an object as inputs, work out how it is shaped, colored and lit in 3D, and outputs a 2D rendering of that model.
This research could be useful in future for teaching robots and other computer systems how to work out how stuff is shaped and lit in real life from 2D still pictures or video frames, and how things appear to change depending on your view and lighting. That means future AI could perform better, particularly in terms of depth perception, in scenarios in which the lighting and positioning of things is wildly different from what’s expected.
Jun Gao, a graduate student at the University of Toronto in Canada and a part-time researcher at Nvidia, said: “This is essentially the first time ever that you can take just about any 2D image and predict relevant 3D properties.”
During inference, the pixels in each studied photograph are separated into two groups: foreground and background. The rough shape of the object is discerned from the foreground pixels to create a mesh of vertices.
Next, a trained convolutional neural network (CNN) predicts the 3D position and lighting of each vertex in the mesh to form a 3D object model. This model is then rendered as a full-color 2D image using a suitable shader. This allows the boffins to compare the original 2D object to the rendered 2D object to see how well the neural network understood the lighting and shape of the thing.
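As a toy illustration of the first stage of that pipeline, here is how foreground pixels can seed an initial vertex mesh. A simple brightness threshold stands in for the learned foreground/background separation, which in DIB-R is done by a trained network; the function name and sizes are illustrative only.

```python
import numpy as np

def foreground_vertices(img, thresh=0.1):
    """Split pixels into foreground/background and seed 2D vertices.

    Toy stand-in for DIB-R's learned segmentation: any pixel brighter
    than `thresh` is treated as foreground, and its coordinates
    (normalized to [-1, 1]) become an initial vertex position that a
    network could then lift into 3D.
    """
    mask = img > thresh
    ys, xs = np.nonzero(mask)          # foreground pixel coordinates
    h, w = img.shape
    verts = np.stack([2 * xs / (w - 1) - 1,
                      2 * ys / (h - 1) - 1], axis=1)
    return mask, verts

# A 2x2 bright square in a 4x4 image yields four seed vertices.
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0
mask, verts = foreground_vertices(img)
```

In the real system the subsequent steps (per-vertex 3D position and lighting prediction, then differentiable rendering) are what make the whole loop trainable end to end.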
During the training process, the CNN was shown stuff in 13 categories in the ShapeNet dataset. Each 3D model was rendered as 2D images viewed from 24 different angles to create a set of training images: these images were used to show the network how 2D images relate to 3D models.
Crucially, the CNN was schooled using an adversarial framework, in which the DIB-R outputs were passed through a discriminator network for analysis.
If a rendered object was similar enough to an input object, then DIB-R’s output passed the discriminator. If not, the output was rejected and the CNN had to generate ever more similar versions until it was accepted by the discriminator. Over time, the CNN learned to output realistic renderings. Further training is required to generate shapes outside of the training data, we note.
As we mentioned above, DIB-R could help robots better detect their environments, Nvidia’s Lauren Finkle said: “For an autonomous robot to interact safely and efficiently with its environment, it must be able to sense and understand its surroundings. DIB-R could potentially improve those depth perception capabilities.”
Academics from three universities across Europe today disclosed a new attack that impacts the integrity of data stored inside Intel SGX, a highly secured enclave area of Intel CPUs.
The attack, which researchers have named Plundervolt, exploits the interface through which an operating system can control an Intel processor’s voltage and frequency — the same interface that allows gamers to overclock their CPUs.
Academics say they discovered that by tinkering with the amount of voltage and frequency a CPU receives, they can alter bits inside SGX to cause errors that can be exploited at a later point after the data has left the security of the SGX enclave.
They say Plundervolt can be used to recover encryption keys or introduce bugs in previously secure software.
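Key recovery from a single bit flip builds on classic fault attacks. As a hedged, self-contained illustration with toy-sized numbers, here is the textbook Bellcore-style attack on RSA-CRT signing; the bit flip is injected in software here, where Plundervolt would induce it with the voltage interface:

```python
from math import gcd

# Toy Bellcore-style fault attack on RSA-CRT signing. Everything here is
# hypothetical and toy-sized; Plundervolt's contribution is inducing the bit
# flip inside the SGX enclave via undervolting.
p, q = 61, 53                         # toy primes (real keys use ~1024-bit primes)
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (pow(-1) needs Python 3.8+)

def sign_crt(m, fault=False):
    sp = pow(m, d % (p - 1), p)       # half-size exponentiation mod p
    sq = pow(m, d % (q - 1), q)       # ... and mod q
    if fault:
        sq ^= 1                       # one flipped bit in the mod-q half
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + h * q                 # Garner recombination

m = 42
s_good, s_bad = sign_crt(m), sign_crt(m, fault=True)
recovered_p = gcd(abs(s_good - s_bad), n)   # a single faulty signature factors n
```

The faulty signature still agrees with the good one modulo p but not modulo q, so their difference shares exactly the factor p with the modulus.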
Online shoppers typically string together a few words to search for the product they want, but in a world with millions of products and shoppers, the task of matching those unspecific words to the right product is one of the biggest challenges in information retrieval.
Using a divide-and-conquer approach that leverages the power of compressed sensing, computer scientists from Rice University and Amazon have shown they can slash the amount of time and computational resources it takes to train computers for product search and similar “extreme classification problems” like speech translation and answering general questions.
The research will be presented this week at the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019) in Vancouver. The results include tests performed in 2018 when lead researcher Anshumali Shrivastava and lead author Tharun Medini, both of Rice, were visiting Amazon Search in Palo Alto, California.
In tests on an Amazon search dataset that included some 70 million queries and more than 49 million products, Shrivastava, Medini and colleagues showed their “merged-average classifiers via hashing” (MACH) approach required a fraction of the training resources of some state-of-the-art commercial systems.
“Our training times are about 7-10 times faster, and our memory footprints are 2-4 times smaller than the best baseline performances of previously reported large-scale, distributed deep-learning systems,” said Shrivastava, an assistant professor of computer science at Rice.
[…]
“Extreme classification problems” are ones with many possible outcomes, and thus, many parameters. Deep learning models for extreme classification are so large that they typically must be trained on what is effectively a supercomputer, a linked set of graphics processing units (GPUs) where parameters are distributed and run in parallel, often for several days.
“A neural network that takes search input and predicts from 100 million outputs, or products, will typically end up with about 2,000 parameters per product,” Medini said. “So you multiply those, and the final layer of the neural network is now 200 billion parameters. And I have not done anything sophisticated. I’m talking about a very, very dead simple neural network model.”
“It would take about 500 gigabytes of memory to store those 200 billion parameters,” Medini said. “But if you look at current training algorithms, there’s a famous one called Adam that takes two more parameters for every parameter in the model, because it needs statistics from those parameters to monitor the training process. So, now we are at 200 billion times three, and I will need 1.5 terabytes of working memory just to store the model. I haven’t even gotten to the training data. The best GPUs out there have only 32 gigabytes of memory, so training such a model is prohibitive due to massive inter-GPU communication.”
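Medini's arithmetic, spelled out. The bytes-per-value figure below is back-solved from his quoted totals (200 billion values storing in "about 500 gigabytes"), so it is an assumption, not a number from the paper:

```python
# Back-of-envelope for the quoted figures in the interview above.
outputs         = 100_000_000          # products = final-layer outputs
per_output      = 2_000                # parameters per output
params          = outputs * per_output # 2e11: "200 billion parameters"
bytes_per_value = 2.5                  # inferred from 200e9 values -> ~500 GB

model_bytes = params * bytes_per_value       # ~5e11: "about 500 gigabytes"
adam_bytes  = 3 * model_bytes                # ~1.5e12: weights + Adam's two moments
gpus_to_hold_state = adam_bytes / 32e9       # vs. a single 32 GB GPU
```

Even before touching training data, the optimizer state alone would span dozens of top-end GPUs, which is the inter-GPU communication problem Medini describes.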
MACH takes a very different approach. Shrivastava describes it with a thought experiment randomly dividing the 100 million products into three classes, which take the form of buckets. “I’m mixing, let’s say, iPhones with chargers and T-shirts all in the same bucket,” he said. “It’s a drastic reduction from 100 million to three.”
In the thought experiment, the 100 million products are randomly sorted into three buckets in two different worlds, which means that products can wind up in different buckets in each world. A classifier is trained to assign searches to the buckets rather than the products inside them, meaning the classifier only needs to map a search to one of three classes of product.
“Now I feed a search to the classifier in world one, and it says bucket three, and I feed it to the classifier in world two, and it says bucket one,” he said. “What is this person thinking about? The most probable class is something that is common between these two buckets. If you look at the possible intersection of the buckets there are three in world one times three in world two, or nine possibilities,” he said. “So I have reduced my search space to one over nine, and I have only paid the cost of creating six classes.”
Adding a third world, and three more buckets, increases the number of possible intersections by a factor of three. “There are now 27 possibilities for what this person is thinking,” he said. “So I have reduced my search space by one over 27, but I’ve only paid the cost for nine classes. I am paying a cost linearly, and I am getting an exponential improvement.”
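The thought experiment translates almost directly into code. A minimal sketch, with toy sizes standing in for the paper's 49 million products, 10,000 buckets and 32 repetitions:

```python
import random

# Toy MACH decoding: R independent "worlds", each randomly hashing every
# product into one of B buckets. Real MACH trains one small classifier per
# world; here we simulate classifiers that predict the correct bucket.
N_PRODUCTS, B, R = 1000, 10, 3

random.seed(0)
worlds = [[random.randrange(B) for _ in range(N_PRODUCTS)] for _ in range(R)]

def candidates(predicted_buckets):
    """Products consistent with the predicted bucket in every world."""
    return [p for p in range(N_PRODUCTS)
            if all(worlds[r][p] == predicted_buckets[r] for r in range(R))]

# A query whose true product is 123: each per-world classifier names the
# bucket that product landed in, and intersecting the buckets narrows
# 1000 products down to roughly N / B**R = 1 candidate.
true_product = 123
preds = [worlds[r][true_product] for r in range(R)]
survivors = candidates(preds)
```

The cost paid is linear (B × R = 30 classes trained) while the search space shrinks exponentially (by a factor of B^R = 1000), which is exactly the trade Shrivastava describes.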
In their experiments with Amazon’s training database, Shrivastava, Medini and colleagues randomly divided the 49 million products into 10,000 classes, or buckets, and repeated the process 32 times. That reduced the number of parameters in the model from around 100 billion to 6.4 billion. And training the model took less time and less memory than some of the best reported training times on models with comparable parameters, including Google’s Sparsely-Gated Mixture-of-Experts (MoE) model, Medini said.
He said MACH’s most significant feature is that it requires no communication between parallel processors. In the thought experiment, that is what’s represented by the separate, independent worlds.
“They don’t even have to talk to each other,” Medini said. “In principle, you could train each of the 32 on one GPU, which is something you could never do with a nonindependent approach.”
Three weeks after the Internet Society announced the controversial sale of the .org internet registry to an unknown private equity firm, the organization that has to sign off on the deal has finally spoken publicly.
In a letter [PDF] titled “Transparency” from the general counsel of domain name system overseer ICANN to the CEOs of the Internet Society (ISOC) and .org registry operator PIR, the organization takes issue with how the proposed sale has been handled and notes that it is “uncomfortable” at the lack of transparency.
The letter, dated Monday and posted today with an accompanying blog post, notes that ICANN will be sending a “detailed request for additional information” and encourages the organizations “to answer these questions fully and as transparently as possible.”
As ICANN’s chairman previously told The Register, the organization received an official request to change ownership of PIR from ISOC to Ethos Capital in mid-November, but PIR denied ICANN’s request to make the document public.
The letter presses ISOC/PIR to make that request public. “While PIR has previously declined our request to publish the Request, we urge you to reconsider,” the letter states. “We also think there would be great value for us to publish the questions that you are asked and your answers to those questions.”
Somewhat unusually it repeats the same point a second time: “In light of the level of interest in the recently announced acquisition of PIR, both within the ICANN community and more generally, we continue to believe that it is critical that your Request, and the questions and answers in follow up to the Request, and any other related materials, be made Public.”
Third time lucky
And then, stressing the same point a third time, the letter notes that on a recent webinar about the sale organized by concerned non-profits that use .org domains, ISOC CEO Andrew Sullivan said he wasn’t happy about the level of secrecy surrounding the deal.
From the ICANN letter: “As you, Andrew, ISOC’s CEO stated publicly during a webcast meeting… you are uncomfortable with the lack of transparency. Many of us watching the communications on this transaction are also uncomfortable.
“In sum, we again reiterate our belief that it is imperative that you commit to completing this process in an open and transparent manner, starting with publishing the Request and related material, and allowing us to publish our questions to you, and your full Responses.”
Here is what Sullivan said on the call [PDF]: “I do appreciate, however, that this creates a level of uncertainty, because people are uncomfortable with things that are done in secret like that. I get it. I can have the same reaction when I’m not included in a decision, but that is the reason we have trustees. That’s the reason that we have our trustees selected by our community. And I believe that we made the right decision.”
As ICANN noted, there remain numerous questions over the proposed sale despite both ISOC and Ethos Capital holding meetings with concerned stakeholders, and ISOC’s CEO agreeing to an interview with El Reg.
One concerned .org owner is open-source organization Mozilla, which sent ICANN a letter noting that it “remains concerned that the nature of the modified contractual agreement between ICANN and the registry does not contain sufficient safeguards to ensure that the promises we hear today will be kept.”
It put forward a series of unanswered questions that it asked ICANN to request of PIR. They include [PDF] questions over the proposed “stewardship council” that Ethos Capital has said it will introduce to make sure the rights of .org domain holders are protected, including its degree of independence; what assurances there are that Ethos Capital will actually stick to its implied promise that it won’t increase .org prices by more than 10 per cent per year; and details around its claim that PIR will become a so-called B Corp – a designation that for-profit companies can apply for if they wish to indicate a wider public interest remit.
Connections
While those questions dig into the future running of the .org registry, they do not dig into the unusual connections between the CEOs of ISOC, PIR and Ethos Capital, as well as their advisors.
The CEO of ISOC, Andrew Sullivan, worked for a company called Afilias between 2002 and 2008. It was Afilias that persuaded ISOC to apply to run the .org registry in the first place and Sullivan is credited with writing significant parts of its final application. Afilias has run the .org back-end since 2003. Sullivan became ISOC CEO in June 2018.
The CEO of PIR, Jonathon Nevett, took over the job in December 2018. Immediately prior to that, he was Executive VP for a registry company called Donuts, which he also co-founded. Donuts was sold in September 2018 to a private equity company called Abry Partners.
At Abry Partners at the time was Eric Brooks, who left the company after 20 years at some point in 2019 to become the CEO of Ethos Capital – the company purchasing PIR. Also at Abry Partners at the time was Fadi Chehade, a former CEO of ICANN. Chehade is credited as being a “consultant” over the sale of PIR to Ethos Capital but records demonstrate that Chehade registered its domain name – ethoscapital.com – personally.
Chehade is also thought to have personally registered Ethos Capital as a Delaware corporation on May 14 this year: an important date because it was the day after his former organization, ICANN, indicated it was going to approve the lifting of price caps on .org domains, against the strong opposition of the internet community.
Now comes the ICA
As well as Mozilla’s questions, there is another series of questions [PDF] over the sale from the Internet Commerce Association (ICA) that are pointed at ICANN itself.
Those questions focus on the timeline of information: what ICANN knew about the proposed sale and when; and whether it was aware of the intention to sell PIR when it approved lifting price caps on the .org registry.
It also asked various governance questions about ICANN including why the renewed .org contract was not approved by the ICANN board, the involvement of former ICANN executives, including Chehade and former senior vice president Nora Abusitta-Ouri who is “chief purpose officer” of Ethos Capital, and what policies ICANN has in place over “cooling off periods” for former execs.
While going out of its way to criticize ISOC and PIR for their lack of transparency and while claiming in the letter to ISOC that “transparency is a cornerstone of ICANN and how ICANN acts to protect the public interest while performing its role,” ICANN has yet to answer questions over its own role.
The ability to perceive the shape and motion of hands can be a vital component in improving the user experience across a variety of technological domains and platforms. For example, it can form the basis for sign language understanding and hand gesture control, and can also enable the overlay of digital content and information on top of the physical world in augmented reality. While coming naturally to people, robust real-time hand perception is a decidedly challenging computer vision task, as hands often occlude themselves or each other (e.g. finger/palm occlusions and hand shakes) and lack high contrast patterns.

Today we are announcing the release of a new approach to hand perception, which we previewed at CVPR 2019 in June, implemented in MediaPipe—an open-source, cross-platform framework for building pipelines to process perceptual data of different modalities, such as video and audio. This approach provides high-fidelity hand and finger tracking by employing machine learning (ML) to infer 21 3D keypoints of a hand from just a single frame.

Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.
3D hand perception in real-time on a mobile phone via MediaPipe. Our solution uses machine learning to compute 21 3D keypoints of a hand from a video frame. Depth is indicated in grayscale.
An ML Pipeline for Hand Tracking and Gesture Recognition

Our hand tracking solution utilizes an ML pipeline consisting of several models working together:
A palm detector model (called BlazePalm) that operates on the full image and returns an oriented hand bounding box.
A hand landmark model that operates on the cropped image region defined by the palm detector and returns high fidelity 3D hand keypoints.
A gesture recognizer that classifies the previously computed keypoint configuration into a discrete set of gestures.
This architecture is similar to that employed by our recently published face mesh ML pipeline and that others have used for pose estimation. Providing the accurately cropped palm image to the hand landmark model drastically reduces the need for data augmentation (e.g. rotations, translation and scale) and instead allows the network to dedicate most of its capacity towards coordinate prediction accuracy.
Hand perception pipeline overview.
BlazePalm: Realtime Hand/Palm Detection

To detect initial hand locations, we employ a single-shot detector model called BlazePalm, optimized for mobile real-time use in a manner similar to BlazeFace, which is also available in MediaPipe. Detecting hands is a decidedly complex task: our model has to work across a variety of hand sizes with a large scale span (~20x) relative to the image frame and be able to detect occluded and self-occluded hands. Whereas faces have high contrast patterns, e.g., in the eye and mouth region, the lack of such features in hands makes it comparatively difficult to detect them reliably from their visual features alone. Instead, providing additional context, like arm, body, or person features, aids accurate hand localization.

Our solution addresses the above challenges using different strategies. First, we train a palm detector instead of a hand detector, since estimating bounding boxes of rigid objects like palms and fists is significantly simpler than detecting hands with articulated fingers. In addition, as palms are smaller objects, the non-maximum suppression algorithm works well even for two-hand self-occlusion cases, like handshakes. Moreover, palms can be modelled using square bounding boxes (anchors in ML terminology), ignoring other aspect ratios and therefore reducing the number of anchors by a factor of 3-5. Second, an encoder-decoder feature extractor is used for bigger scene context awareness even for small objects (similar to the RetinaNet approach). Lastly, we minimize the focal loss during training to support the large number of anchors resulting from the high scale variance.

With the above techniques, we achieve an average precision of 95.7% in palm detection. Using a regular cross-entropy loss and no decoder gives a baseline of just 86.22%.
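The focal loss mentioned above is the RetinaNet formulation: well-classified anchors are down-weighted by a factor of (1 − p)^γ so the flood of easy background anchors does not swamp training. A minimal sketch of the binary case (the α and γ values here are the RetinaNet defaults, not necessarily BlazePalm's):

```python
from math import log

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss. p = predicted probability of class 1, y in {0, 1}.
    gamma=0, alpha=1 recovers plain cross-entropy."""
    p_t = p if y == 1 else 1 - p          # probability assigned to the true class
    a_t = alpha if y == 1 else 1 - alpha  # class-balancing weight
    return -a_t * (1 - p_t) ** gamma * log(p_t)

# An easy positive (p=0.95) contributes orders of magnitude less loss
# than a hard one (p=0.1), so dense, mostly-easy anchor sets stay trainable.
easy, hard = focal_loss(0.95, 1), focal_loss(0.1, 1)
```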
Two browsers have yanked Avast and AVG online security extensions from their web stores after a report revealed that they were unnecessarily sucking up a ton of data about users’ browsing history.
Wladimir Palant, the creator behind Adblock Plus, initially surfaced the issue—which extends to Avast Online Security and Avast SafePrice as well as Avast-owned AVG Online Security and AVG SafePrice extensions—in a blog post back in October but this week flagged the issue to the companies themselves. In response, both Mozilla and Opera yanked the extensions from their stores. However, as of Wednesday, the extensions curiously remained in Google’s extensions store.
Using dev tools to examine network traffic, Palant was able to determine that the extensions were collecting an alarming amount of data about users’ browsing history and activity, including URLs, where you navigated from, whether the page was visited in the past, the version of browser you’re using, country code, and, if the Avast Antivirus is installed, the OS version of your device, among other data. Palant argued the data collection far exceeded what was necessary for the extensions to perform their basic jobs.
DART is a planetary defense-driven test of technologies for preventing a hazardous asteroid from impacting Earth. DART will be the first demonstration of the kinetic impactor technique to change the motion of an asteroid in space. The DART mission is in Phase C, led by APL and managed under NASA’s Solar System Exploration Program at Marshall Space Flight Center for NASA’s Planetary Defense Coordination Office and the Science Mission Directorate’s Planetary Science Division at NASA Headquarters in Washington, DC.
Two different views of the DART spacecraft. The DRACO (Didymos Reconnaissance & Asteroid Camera for OpNav) imaging instrument is based on the LORRI high-resolution imager from New Horizons. The left view also shows the Radial Line Slot Array (RLSA) antenna with the ROSAs (Roll-Out Solar Arrays) rolled up. The view on the right shows a clearer view of the NEXT-C ion engine.
The binary near-Earth asteroid (65803) Didymos is the target for the DART demonstration. While the Didymos primary body is approximately 780 meters across, its secondary body (or “moonlet”) is about 160 meters across, which is more typical of the size of asteroids that could pose the most likely significant threat to Earth. The Didymos binary is being intensely observed using telescopes on Earth to precisely measure its properties before DART arrives.
Fourteen sequential Arecibo radar images of the near-Earth asteroid (65803) Didymos and its moonlet, taken on 23, 24 and 26 November 2003. NASA’s planetary radar capabilities enable scientists to resolve shape, concavities, and possible large boulders on the surfaces of these small worlds. Photometric lightcurve data indicated that Didymos is a binary system, and radar imagery distinctly shows the secondary body.
Simulated image of the Didymos system, derived from photometric lightcurve and radar data. The primary body is about 780 meters in diameter and the moonlet is approximately 160 meters in size. They are separated by just over a kilometer. The primary body rotates once every 2.26 hours while the tidally locked moonlet revolves about the primary once every 11.9 hours. Almost one sixth of the known near-Earth asteroid (NEA) population are binary or multiple-body systems.
Credits: Naidu et al., AIDA Workshop, 2016
Illustration of the DART spacecraft with the Roll Out Solar Arrays (ROSA) extended. Each of the two ROSA arrays is 8.6 meters by 2.3 meters.
The DART spacecraft will achieve the kinetic impact deflection by deliberately crashing itself into the moonlet at a speed of approximately 6.6 km/s, with the aid of an onboard camera (named DRACO) and sophisticated autonomous navigation software. The collision will change the speed of the moonlet in its orbit around the main body by a fraction of one percent, but this will change the orbital period of the moonlet by several minutes – enough to be observed and measured using telescopes on Earth.
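The quoted "fraction of one percent" checks out on the back of an envelope. In the hedged estimate below, the spacecraft mass, moonlet density and momentum-transfer efficiency are assumptions for illustration, not mission specs:

```python
from math import pi

# Back-of-envelope DART momentum transfer. Assumed values are marked.
m_dart   = 500.0           # kg, assumed spacecraft mass at impact
v_impact = 6.6e3           # m/s (from the mission description)
d_moon   = 160.0           # m, moonlet diameter (from the mission description)
rho      = 2000.0          # kg/m^3, assumed rubble-pile density
sep      = 1.18e3          # m, assumed separation ("just over a kilometer")
period   = 11.9 * 3600     # s, moonlet orbital period

m_moon = rho * (4 / 3) * pi * (d_moon / 2) ** 3   # ~4.3e9 kg
dv     = m_dart * v_impact / m_moon               # momentum conservation, no ejecta boost
v_orb  = 2 * pi * sep / period                    # ~0.17 m/s orbital speed

# dv/v_orb ~ 0.4%: "a fraction of one percent" of the moonlet's speed.
# For a small tangential kick on a near-circular orbit, dT/T ~ 3*dv/v:
dT_minutes = 3 * (dv / v_orb) * period / 60       # ~10 min: "several minutes"
```

Ejecta thrown off by the crater would add momentum beyond the spacecraft's own (the so-called beta factor), so this estimate is on the conservative side.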
Once launched, DART will deploy Roll Out Solar Arrays (ROSA) to provide the solar power needed for DART’s electric propulsion system. The DART spacecraft will demonstrate the NASA Evolutionary Xenon Thruster – Commercial (NEXT-C) solar electric propulsion system as part of its in-space propulsion. NEXT-C is a next-generation system based on the Dawn spacecraft propulsion system, and was developed at NASA’s Glenn Research Center in Cleveland, Ohio. By utilizing electric propulsion, DART could benefit from significant flexibility to the mission timeline while demonstrating the next generation of ion engine technology, with applications to potential future NASA missions.
The ROSA array was tested on board the International Space Station (ISS) in June 2017.
The DART spacecraft launch window begins in late July 2021. DART will launch aboard a SpaceX Falcon 9 rocket from Vandenberg Air Force Base, California. After separation from the launch vehicle and over a year of cruise it will intercept Didymos’ moonlet in late September 2022, when the Didymos system is within 11 million kilometers of Earth, enabling observations by ground-based telescopes and planetary radar to measure the change in momentum imparted to the moonlet.
Personal information, what customers bought, and where it was delivered.
The data of what is believed to be nearly 10,000 Belgian and Dutch customers who bought toys online a few years ago is being offered for sale by a hacker on the internet, according to research by VRT NWS. The data includes people’s personal details and certain purchases. The vast majority of the products were bought from a small Dutch business, partly via the web shop Bol.com, which immediately opened an investigation into the vendor where the leak turned out to be.
The file of customer data is being offered on a specialized hacker forum on the internet, where the crook claims to have a “bol.com database.”
The file shows what people bought, their first and last names, and sometimes what the purchase cost. Delivery details are also included, as is the payment method people chose, such as a credit card or Bancontact.
Leak at Toppie Speelgoed, an external partner of Bol.com
Investigation confirms that the file does indeed contain purchase data from people who bought toys via Bol.com. After contact with Bol.com and an internal investigation at the web shop itself, the data leak turns out to be at a Bol.com partner that sells toys on bol.com and through its own web shops: Toppie Speelgoed. Those who bought directly from Toppie Speelgoed also appear in the list with their email address and phone number, if those were provided with the purchase. Those who bought via Bol.com appear only with their name and delivery address, because Bol.com sends only limited data to external partners.
The personal data of up to 29,000 customers of energy companies Budget Energie and NLE has been exposed. Besides names and addresses, phone numbers and bank account numbers may also have been leaked. The data was not leaked by accident; according to the company, it was a deliberate theft.
Parent company Nuts Groep informed customers of Budget Energie and NLE of the data breach by email this morning. According to the company, it was not a software leak but “unauthorized access” to contract data.
Police investigation
Up to 29,000 of the energy companies’ 700,000 customers in total may be affected. “A police investigation has been started. As long as it is ongoing, we will not comment on the cause of the leak or the number of people involved,” Babette Huberts, legal manager at Nuts Groep, told RTL Z. Huberts also declined to say how the leak was discovered.
Later in the day, Huberts confirmed that it was a deliberate act.
Fears of Russian interference ahead of a heated U.K. election were all but confirmed this week with a Reddit post.
In a post Friday, Reddit announced that its internal investigation found evidence that an account purportedly linked to a Russian disinformation campaign was behind last month’s leak of contentious US-UK trade documents on the platform.
“We were recently made aware of a post on Reddit that included leaked documents from the UK. We investigated this account and the accounts connected to it, and today we believe this was part of a campaign that has been reported as originating from Russia,” Reddit wrote.
The online message board went on to say it banned 61 accounts and suspended one subreddit, r/ukwhistleblower, behind the campaign for violating the platform’s policies against vote manipulation and misuse. Reddit also purportedly found evidence linking this operation to a group behind similar foreign interference on Facebook earlier this year. The Atlantic Council dubbed the group “Secondary Infektion” in reference to a misinformation campaign from the Soviet era.
“Suspect accounts on Reddit were recently reported to us, along with indicators from law enforcement, and we were able to confirm that they did indeed show a pattern of coordination,” Reddit said. “We were then able to use these accounts to identify additional suspect accounts that were part of the campaign on Reddit. This group provides us with important attribution for the recent posting of the leaked UK documents, as well as insights into how adversaries are adapting their tactics.”
The account behind the original Reddit leak, as well as a number of others that reposted the documents and manipulated their upvotes and karma (ways to earn a post a more prominent placement in a subreddit), all used the same tactics as Secondary Infektion, according to Reddit, “causing us to believe that this was indeed tied to the original group.”
The papers in question detail trade talks between America and the UK and have launched a fiery debate among British officials leading up to the country’s general election. Labour Party leader Jeremy Corbyn claims these documents prove the country’s National Health Service is at risk of being privatized in the event of a post-Brexit trade agreement with America. Prime Minister Boris Johnson has denied this, saying the NHS wouldn’t be on the table in any future trade negotiations.
This isn’t the first time Reddit has struggled with sussing out foreign propaganda campaigns on its platform. Russian influence operations have become a particularly insidious and recurring problem, leading Reddit to ban 944 “suspicious” accounts in April 2018 after purportedly tracing them back to Russia’s Internet Research Agency (IRA), the infamous troll factory behind pro-Trump efforts during the 2016 presidential campaign.
Later that September, Reddit users began to speculate that the notoriously awful (and now, thankfully, quarantined) subreddit r/The_Donald had become infiltrated by Russian trolls as well. Suspicions began circulating among its three-quarters of a million subscribers after a viral post documented clear signs of a pattern: The same few articles from websites affiliated with the IRA were being upvoted and shared in the forum thousands of times, and it’d been going on for years, according to a Buzzfeed News report. Reddit later issued a platform-wide ban for three of the trolls’ most commonly linked websites, USA Really, GEOTUS.band and GEOTUS.army.
A separate investigation Reddit launched around that same time uncovered 143 accounts linked to another influence operation reportedly targeting polarized subreddits on both sides of the aisle with pro-Iranian political narratives. Reddit began its inquiry after cybersecurity group FireEye released a report detailing just how far the campaign’s influence spanned, as bad actors were purportedly “leveraging a network of inauthentic news sites and clusters of associated accounts across multiple social media platforms.” Based on these findings, Facebook, Twitter, and Google also subsequently removed a bevy of accounts affiliated with Iran and Russia on their respective platforms.
ProPublica has determined that dozens of state and local agencies have purchased “SCAN” training from a company called LSI for reviewing a suspect’s written statements — even though there’s no scientific evidence that it works. Local, state and federal agencies from the Louisville Metro Police Department to the Michigan State Police to the U.S. State Department have paid for SCAN training. The LSI website lists 417 agencies nationwide, from small-town police departments to the military, that have been trained in SCAN — and that list isn’t comprehensive, because additional ones show up in procurement databases and in public records obtained by ProPublica. Other training recipients include law enforcement agencies in Australia, Belgium, Canada, Israel, Mexico, the Netherlands, Singapore, South Africa and the United Kingdom, among others…
For Avinoam Sapir, the creator of SCAN, sifting truth from deception is as simple as one, two, three.
1. Give the subject a pen and paper.
2. Ask the subject to write down his/her version of what happened.
3. Analyze the statement and solve the case.
Those steps appear on the website for Sapir’s company, based in Phoenix. “SCAN Unlocks the Mystery!” the homepage says, alongside a logo of a question mark stamped on someone’s brain. The site includes dozens of testimonials with no names attached. “Since January when I first attended your course, everybody I meet just walks up to me and confesses!” one says. [Another testimonial says “The Army finally got its money’s worth…”] SCAN saves time, the site says. It saves money. Police can fax a questionnaire to a hundred people at once, the site says. Those hundred people can fax it back “and then, in less than an hour, the investigator will be able to review the questionnaires and solve the case.”
In 2009 the U.S. government created a special interagency task force, staffed by the FBI, CIA and the U.S. Department of Defense, to review scientific studies and independently investigate which interrogation techniques worked. “When all 12 SCAN criteria were used in a laboratory study, SCAN did not distinguish truth-tellers from liars above the level of chance,” the review said, also challenging two of the method’s 12 criteria. “Both gaps in memory and spontaneous corrections have been shown to be indicators of truth, contrary to what is claimed by SCAN.” In a footnote, the review identified three specific agencies that use SCAN: the FBI, CIA and U.S. Army military intelligence, which falls under the Department of Defense…
In 2016, the same year the federal task force released its review of interrogation techniques, four scholars published a study on SCAN in the journal Frontiers in Psychology. The authors — three from the Netherlands, one from England — noted that there had been only four prior studies in peer-reviewed journals on SCAN’s effectiveness. Each of those studies (in 1996, 2012, 2014 and 2015) concluded that SCAN failed to help discriminate between truthful and fabricated statements. The 2016 study found the same. Raters trained in SCAN evaluated 234 statements — 117 true, 117 false. Their results in trying to separate fact from fiction were about the same as chance….
Steven Drizin, a Northwestern University law professor who specializes in wrongful convictions, said SCAN and assorted other lie-detection tools suffer from “over-claim syndrome” — big claims made without scientific grounding. Asked why police would trust such tools, Drizin said: “A lot has to do with hubris — a belief on the part of police officers that they can tell when someone is lying to them with a high degree of accuracy. These tools play into that belief and confirm that belief.”
SCAN’s creator “declined to be interviewed for this story,” but ProPublica spoke to some users of the technique. Travis Marsh, the head of an Indiana sheriff’s department, has been using the tool for nearly two decades, while acknowledging that he can’t explain how it works. “It really is, for lack of a better term, a faith-based system because you can’t see behind the curtain.”
ProPublica also reports that “Years ago his wife left a note saying she and the kids were off doing one thing, whereas Marsh, analyzing her writing, could tell they had actually gone shopping. His wife has not left him another note in at least 15 years…”
The study, published in The Lancet Public Health on Monday, is a review of existing research on how commonly traumatic brain injuries occur, limited to studies that also took people’s housing situation into account. These studies involved more than 11,000 people who were fully or partially homeless at the time and living in the U.S., UK, Japan, or Canada; 26 of the 38 originally reviewed studies were included in a deeper meta-analysis.
Taken as a whole, the review found that around 53 percent of homeless people had experienced a traumatic brain injury (TBI) at some point in their lives. Among those who reported how seriously they had been hurt, about a quarter had experienced a moderate to severe head injury. Compared to the general population, the authors noted, homeless people are more than twice as likely to have experienced a head injury of any sort and nearly 10 times as likely to have had a moderate to severe one.
“TBI is prevalent among homeless and marginally housed individuals and might be a common factor that contributes to poorer health and functioning than in the general population,” the researchers wrote.
Google CEO Sundar Pichai is adding another responsibility to his job: Pichai will also be the CEO of parent holding company Alphabet going forward, taking the helm from co-founder and longtime CEO Larry Page.
Additionally, co-founder Sergey Brin will be resigning from his post as the president of Alphabet. Brin and Page jointly announced the leadership change in a blog post Tuesday afternoon, writing:
“Alphabet and Google no longer need two CEOs and a President. Going forward, Sundar will be the CEO of both Google and Alphabet. He will be the executive responsible and accountable for leading Google, and managing Alphabet’s investment in our portfolio of Other Bets.”
“We are deeply committed to Google and Alphabet for the long term, and will remain actively involved as Board members, shareholders and co-founders. In addition, we plan to continue talking with Sundar regularly, especially on topics we’re passionate about,” the duo wrote.
Pichai has been with Google since 2004, and oversaw several of the company’s key products before becoming CEO of Google in 2015 when the search giant reorganized its corporate structure.