Ring’s Neighbors Data Let Us Map Amazon’s Home Surveillance Network

As reporters raced this summer to bring new details of Ring’s law enforcement contracts to light, the home security company, acquired last year by Amazon for a whopping $1 billion, strove to underscore the privacy it had pledged to provide users.

Even as it pursued its creeping objective of ensuring that an ever-expanding network of home security devices eventually becomes indispensable to daily police work, Ring promised its customers would always have a choice in “what information, if any, they share with law enforcement.” While it quietly toiled to minimize what police officials could reveal to the public about Ring’s police partnerships, it vigorously reaffirmed its commitment to the privacy of its customers—and to the users of its crime-alert app, Neighbors.

However, a Gizmodo investigation, which began last month and ultimately revealed the potential locations of up to tens of thousands of Ring cameras, has cast new doubt on the effectiveness of the company’s privacy safeguards. It further offers one of the most “striking” and “disturbing” glimpses yet, privacy experts said, of Amazon’s privately run, omni-surveillance shroud that’s enveloping U.S. cities.

[…]

Gizmodo has acquired data over the past month connected to nearly 65,800 individual posts shared by users of the Neighbors app. The posts, which reach back 500 days from the point of collection, offer extraordinary insight into the proliferation of Ring video surveillance across American neighborhoods and raise important questions about the privacy trade-offs of a consumer-driven network of surveillance cameras controlled by one of the world’s most powerful corporations.

And not just for those whose faces have been recorded.

Examining the network traffic of the Neighbors app produced unexpected data, including hidden geographic coordinates connected to each post—latitude and longitude with up to six decimal places of precision, enough to pinpoint a location to within roughly 10 centimeters (about four inches).
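As a back-of-the-envelope check (our arithmetic, not Gizmodo's), the ground distance represented by the sixth decimal place of a coordinate can be worked out from the Earth's circumference:

```python
import math

# How much ground does one unit of the sixth decimal place of a
# latitude/longitude coordinate cover?
EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

# One degree of latitude spans the same arc length everywhere on the globe.
metres_per_degree_lat = math.pi * EARTH_RADIUS_M / 180
step = 1e-6  # the sixth decimal place

print(round(metres_per_degree_lat * step, 3))  # ~0.111 m, about 4 inches

# A degree of longitude shrinks with latitude; at 34°N (Los Angeles) it
# covers even less ground.
metres_per_degree_lon = metres_per_degree_lat * math.cos(math.radians(34))
print(round(metres_per_degree_lon * step, 3))  # ~0.092 m
```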

[Map caption: the locations of 440,000 Ring cameras, collected from over 1,800 counties in the U.S.]
[Map caption: Gizmodo found 5,016 unique Ring cameras while analyzing nine square miles of Los Angeles.]

[…]

Matthew Guariglia and other surveillance experts told Gizmodo that the ubiquity of the devices gives rise to fears that pedestrians are being recorded strolling in and out of “sensitive buildings,” including certain medical clinics, law offices, and foreign consulates. “I think this is my big concern,” he said after seeing the maps.

Accordingly, Gizmodo located cameras in unnerving proximity to such sensitive buildings, including a clinic offering abortion services and a legal office that handles immigration and refugee cases.

It is possible to acquire Neighbors posts from anywhere in the country, in near-real-time, and sort them in any number of ways. Nearly 4,000 posts, for example, reference children, teens, or young adults; two purportedly involve people having sex; eight mention Immigration and Customs Enforcement; and more than 3,600 mention dogs, cats, coyotes, turkeys, and turtles.
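A hypothetical sketch of that kind of keyword sort — the sample posts and field names below are invented for illustration, not drawn from Gizmodo's dataset:

```python
# Invented sample of Neighbors-style posts, for illustration only.
posts = [
    {"text": "Two teens took selfies on my porch"},
    {"text": "Coyote crossed the street at 2am"},
    {"text": "Package stolen, suspect had a dog"},
]

def count_mentions(posts, keywords):
    """Number of posts whose text mentions any of the given keywords."""
    keywords = [k.lower() for k in keywords]
    return sum(
        any(k in post["text"].lower() for k in keywords) for post in posts
    )

print(count_mentions(posts, ["child", "teen", "kid"]))  # 1
print(count_mentions(posts, ["dog", "cat", "coyote"]))  # 2
```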

While the race of individuals recorded is implicitly suggested in a variety of ways, Gizmodo found 519 explicit references to blackness and 319 to whiteness. A Ring spokesperson said the Neighbors content moderators strive to eliminate unessential references to skin color. Moderators are told to remove posts, they said, in which the sole identifier of a subject is that they’re “black” or “white.”

Ring’s guidelines instruct users: “Personal attributes like race, ethnicity, nationality, religion, sexual orientation, immigration status, sex, gender, age, disability, socioeconomic and veteran status, should never be factors when posting about an unknown person. This also means not referring to a person you are describing solely by their race or calling attention to other personal attributes not relevant to the matter being reported.”

“There’s no question, if most people were followed around 24/7 by a police officer or a private investigator it would bother them and they would complain and seek a restraining order,” said Jay Stanley, senior policy analyst at the American Civil Liberties Union. “If the same is being done technologically, silently and invisibly, that’s basically the functional equivalent.”

[…]

Companies like Ring have long argued—as Google did when it published millions of people’s faces on Street View in 2007—that pervasive street surveillance reveals, in essence, no more than what people have already made public; that there’s no difference between blanketing public spaces in internet-connected cameras and the human experience of walking or driving down the street.

But not everyone agrees.

“Persistence matters,” said Stanley, while acknowledging the ACLU’s long history of defending public photography. “I can go out and take a picture of you walking down the sidewalk on Main Street and publish it on the front of tomorrow’s newspaper,” he said. “That said, when you automate things, it makes it faster, cheaper, easier, and more widespread.”

Stanley and others devoted to studying the impacts of public surveillance envision a future in which Americans’ very perception of reality has become tainted by a kind of omnipresent observer effect. Children will grow up, it’s feared, equating the act of being outside with being recorded. The question is whether existing in this observed state will fundamentally alter the way people naturally behave in public spaces—and if so, how?

“It brings a pervasiveness and systematization that has significant potential effects on what it means to be a human being walking around your community,” Stanley said. “Effects we’ve never before experienced as a species, in all of our history.”

The Ring data has given Gizmodo the means to consider scenarios, no longer purely hypothetical, which exemplify what daily life is like under Amazon’s all-seeing eye. In the nation’s capital, for instance, walking the shortest route from one public charter school to a soccer field less than a mile away, 6th-12th graders are recorded by no fewer than 13 Ring cameras.

Gizmodo found that dozens of users in the same Washington, DC, area have used Neighbors to share videos of children. Thirty-six such posts describe mostly run-of-the-mill mischief—kids with “no values” ripping up parking tape, riding on their “dort-bikes” [sic] and taking “selfies.”

Ring’s guidelines state that users are supposed to respect “the privacy of others,” and not upload footage of “individuals or activities where a reasonable person would expect privacy.” Users are left to interpret this directive themselves, though Ring’s content moderators are supposedly actively combing through the posts and users can flag “inappropriate” posts for review.

Ángel Díaz, an attorney at the Brennan Center for Justice focusing on technology and policing, said the “sheer size and scope” of the data Ring amasses is what separates it from other forms of public photography.

[…]

Guariglia, who’s been researching police surveillance for a decade and holds a PhD in the subject, said he believes the hidden coordinates invalidate Ring’s claim that only users decide “what information, if any,” gets shared with police—whether or not police have yet acquired it.

“I’ve never really bought that argument,” he said, adding that if they truly wanted, the police could “very easily figure out where all the Ring cameras are.”

The Guardian reported in August that Ring once shared maps with police depicting the locations of active Ring cameras. CNET reported last week, citing public documents, that police partnered with Ring had once been given access to “heat maps” that reflected the area where cameras were generally concentrated.

The privacy researcher who originally obtained the heat maps, Shreyas Gandlur, discovered that if police zoomed in far enough, circles appeared around individual cameras. Ring, however, denied that the maps accurately portrayed the locations of customers, saying they displayed only “approximate device density”; it also instructed police not to share them publicly.

Source: Ring’s Neighbors Data Let Us Map Amazon’s Home Surveillance Network

Nikon Is Killing Its Authorized Repair Program

Nikon is ending its authorized repair program in early 2020, cutting the number of places you can get your camera fixed with official parts and tools from more than a dozen independent shops to just two Nikon-owned facilities on opposite ends of the U.S.

That means that Nikon’s roughly 15 remaining Authorized Repair Station members are about to become non-authorized repair shops. Since Nikon decided to stop selling genuine parts to non-authorized shops back in 2012, it’s unlikely those stores will continue to have access to the specialty components, tools, software, manuals, and model training Nikon previously provided. But Nikon hasn’t clarified this, so repair shops have been left in the dark.

“This is very big, and we have no idea what’s coming next,” said Cliff Hanks, parts manager for Kurt’s Camera Repair in San Diego, Calif. “We need more information before March 31. We can make contingency plans, start stocking up on stuff, but when will we know for sure?”

In a letter obtained by iFixit, Nikon USA told its roughly 15 remaining Authorized Repair Station members in early November that it would not renew their agreements after March 31, 2020. The letter notes that “The climate in which we do business has evolved, and Nikon Inc. must do the same.” And so, Nikon writes, it must “change the manner in which we make product service available to our end user customers.”

In other words: Nikon’s camera business, slowly bled by smartphones, is going to adopt a repair model that’s even more restrictive than that of Apple or other smartphone makers. If your camera breaks, and you want it fixed with official parts or under warranty, you’ll now have to mail it to one of two facilities on opposite ends of the country. This is more than a little inconvenient, especially for professional photographers.

Source: Nikon Is Killing Its Authorized Repair Program – iFixit

NVidia AI auto-generates 3D objects from 2D snaps

Boring 2D images can be transformed into corresponding 3D models and back into 2D again automatically by machine-learning-based software, boffins have demonstrated.

The code is known as a differentiable interpolation-based renderer (DIB-R), and was built by a group of eggheads led by Nvidia. It uses a trained neural network to take a flat image of an object as input, work out how it is shaped, colored, and lit in 3D, and output a 2D rendering of that model.

This research could be useful in future for teaching robots and other computer systems how to work out how stuff is shaped and lit in real life from 2D still pictures or video frames, and how things appear to change depending on your view and lighting. That means future AI could perform better, particularly in terms of depth perception, in scenarios in which the lighting and positioning of things is wildly different from what’s expected.

Jun Gao, a graduate student at the University of Toronto in Canada and a part-time researcher at Nvidia, said: “This is essentially the first time ever that you can take just about any 2D image and predict relevant 3D properties.”

During inference, the pixels in each studied photograph are separated into two groups: foreground and background. The rough shape of the object is discerned from the foreground pixels to create a mesh of vertices.

Next, a trained convolutional neural network (CNN) predicts the 3D position and lighting of each vertex in the mesh to form a 3D object model. This model is then rendered as a full-color 2D image using a suitable shader. This allows the boffins to compare the original 2D object to the rendered 2D object to see how well the neural network understood the lighting and shape of the thing.
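The render-and-compare loop described above can be caricatured in a few lines. This is a toy stand-in with a placeholder "shader" and a flat mesh, not Nvidia's differentiable renderer:

```python
import numpy as np

# Toy sketch of DIB-R's analysis-by-synthesis idea: separate foreground
# pixels, lift them to a crude vertex mesh, "render" them back onto the
# image plane, and measure the reconstruction error.

def foreground_mask(img, thresh=0.1):
    """Pixels brighter than the (assumed dark) background."""
    return img > thresh

def mesh_from_mask(mask):
    """One vertex per foreground pixel, z=0 as a flat initial guess."""
    ys, xs = np.nonzero(mask)
    return np.stack([xs, ys, np.zeros_like(xs)], axis=1).astype(float)

def render(vertices, shape, albedo=1.0):
    """Placeholder shader: splat each vertex back onto the image plane."""
    img = np.zeros(shape)
    for x, y, _ in vertices:
        img[int(y), int(x)] = albedo
    return img

rng = np.random.default_rng(0)
target = (rng.random((8, 8)) > 0.7).astype(float)  # toy "photograph"
verts = mesh_from_mask(foreground_mask(target))
recon = render(verts, target.shape)
loss = float(np.abs(target - recon).sum())  # compare original vs rendered
print(loss)  # 0.0 — the placeholder pipeline reconstructs the toy image exactly
```

In the real system this comparison drives gradient updates through the differentiable renderer; here it only illustrates the loop's structure.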

During the training process, the CNN was shown stuff in 13 categories in the ShapeNet dataset. Each 3D model was rendered as 2D images viewed from 24 different angles to create a set of training images: these images were used to show the network how 2D images relate to 3D models.

Crucially, the CNN was schooled using an adversarial framework, in which the DIB-R outputs were passed through a discriminator network for analysis.

If a rendered object was similar enough to an input object, then DIB-R’s output passed the discriminator. If not, the output was rejected and the CNN had to generate ever more similar versions until it was accepted by the discriminator. Over time, the CNN learned to output realistic renderings. Further training is required to generate shapes outside of the training data, we note.

As we mentioned above, DIB-R could help robots better detect their environments, Nvidia’s Lauren Finkle said: “For an autonomous robot to interact safely and efficiently with its environment, it must be able to sense and understand its surroundings. DIB-R could potentially improve those depth perception capabilities.”

The research will be presented at the Conference on Neural Information Processing Systems in Vancouver, Canada, this week.

Source: I’ll take your frame to another dimension, pay close attention: This AI auto-generates 3D objects from 2D snaps • The Register

New Plundervolt attack impacts Intel CPUs’ SGX enclaves

Academics from three universities across Europe have disclosed today a new attack that impacts the integrity of data stored inside Intel SGX, a highly-secured area of Intel CPUs.

The attack, which researchers have named Plundervolt, exploits the interface through which an operating system can control an Intel processor’s voltage and frequency — the same interface that allows gamers to overclock their CPUs.

Academics say they discovered that by tinkering with the amount of voltage and frequency a CPU receives, they can alter bits inside SGX to cause errors that can be exploited at a later point after the data has left the security of the SGX enclave.

They say Plundervolt can be used to recover encryption keys or introduce bugs in previously secure software.
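One way a single faulted multiplication leaks a key is the classic Boneh-DeMillo-Lipton attack on RSA-CRT signatures, which the Plundervolt researchers build on. It can be reproduced on a toy key, with the fault simulated in software rather than induced by undervolting:

```python
from math import gcd

# Tiny RSA key (illustration only; real keys use 2048+ bit primes).
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
m = 42

# Correct CRT signature: combine s_p = m^d mod p and s_q = m^d mod q.
s_p = pow(m, d % (p - 1), p)
s_q = pow(m, d % (q - 1), q)
q_inv = pow(q, -1, p)
sig = (s_q + q * (q_inv * (s_p - s_q) % p)) % n
assert pow(sig, e, n) == m  # signature verifies

# A voltage fault corrupts one half-exponentiation (flip one bit of s_p)...
s_p_faulty = s_p ^ 1
faulty_sig = (s_q + q * (q_inv * (s_p_faulty - s_q) % p)) % n

# ...and gcd(faulty_sig^e - m, n) then reveals the secret factor q,
# because the faulty signature is still correct mod q but wrong mod p.
recovered = gcd(pow(faulty_sig, e, n) - m, n)
print(recovered == q)  # True
```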

Source: New Plundervolt attack impacts Intel CPUs | ZDNet

Researchers report breakthrough in ‘distributed deep learning’

Online shoppers typically string together a few words to search for the product they want, but in a world with millions of products and shoppers, the task of matching those unspecific words to the right product is one of the biggest challenges in information retrieval.

Using a divide-and-conquer approach that leverages the power of compressed sensing, computer scientists from Rice University and Amazon have shown they can slash the amount of time and computational resources it takes to train computers for product search and similar “extreme classification problems” like speech translation and answering general questions.

The research will be presented this week at the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019) in Vancouver. The results include tests performed in 2018 when lead researcher Anshumali Shrivastava and lead author Tharun Medini, both of Rice, were visiting Amazon Search in Palo Alto, California.

In tests on an Amazon search dataset that included some 70 million queries and more than 49 million products, Shrivastava, Medini and colleagues showed their approach of using “merged-average classifiers via hashing” (MACH) required a fraction of the resources of some state-of-the-art commercial systems.

“Our training times are about 7-10 times faster, and our memory footprints are 2-4 times smaller than the best baseline performances of previously reported large-scale, distributed deep-learning systems,” said Shrivastava, an assistant professor of computer science at Rice.

[…]

“Extreme classification problems” are ones with many possible outcomes, and thus, many parameters. Deep learning models for extreme classification are so large that they typically must be trained on what is effectively a supercomputer, a linked set of graphics processing units (GPU) where parameters are distributed and run in parallel, often for several days.

“A neural network that takes search input and predicts from 100 million outputs, or products, will typically end up with about 2,000 parameters per product,” Medini said. “So you multiply those, and the final layer of the neural network is now 200 billion parameters. And I have not done anything sophisticated. I’m talking about a very, very dead simple neural network model.”

“It would take about 500 gigabytes of memory to store those 200 billion parameters,” Medini said. “But if you look at current training algorithms, there’s a famous one called Adam that takes two more parameters for every parameter in the model, because it needs statistics from those parameters to monitor the training process. So, now we are at 200 billion times three, and I will need 1.5 terabytes of working memory just to store the model. I haven’t even gotten to the training data. The best GPUs out there have only 32 gigabytes of memory, so training such a model is prohibitive due to massive inter-GPU communication.”
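Medini's arithmetic, spelled out with the quote's round numbers (the ~2.5 bytes per parameter is implied by his "500 gigabytes for 200 billion parameters" figure, not stated directly):

```python
# The final layer alone: 100 million outputs x ~2,000 parameters each.
products = 100_000_000
params_per_product = 2_000
final_layer_params = products * params_per_product
print(final_layer_params)  # 200 billion parameters

# Adam keeps two extra statistics per parameter, tripling the footprint.
adam_factor = 3
total_values = final_layer_params * adam_factor
print(total_values)  # 600 billion values to keep in working memory

# At the ~2.5 bytes/parameter implied by "500 GB for 200 billion params":
bytes_total = total_values * 2.5
print(bytes_total / 1e12)  # ~1.5 terabytes of working memory
```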

MACH takes a very different approach. Shrivastava describes it with a thought experiment: randomly dividing the 100 million products into three classes, which take the form of buckets. “I’m mixing, let’s say, iPhones with chargers and T-shirts all in the same bucket,” he said. “It’s a drastic reduction from 100 million to three.”

In the thought experiment, the 100 million products are randomly sorted into three buckets in two different worlds, which means that products can wind up in different buckets in each world. A classifier is trained to assign searches to the buckets rather than the products inside them, meaning the classifier only needs to map a search to one of three classes of product.

“Now I feed a search to the classifier in world one, and it says bucket three, and I feed it to the classifier in world two, and it says bucket one,” he said. “What is this person thinking about? The most probable class is something that is common between these two buckets. If you look at the possible intersection of the buckets there are three in world one times three in world two, or nine possibilities,” he said. “So I have reduced my search space to one over nine, and I have only paid the cost of creating six classes.”

Adding a third world, and three more buckets, increases the number of possible intersections by a factor of three. “There are now 27 possibilities for what this person is thinking,” he said. “So I have reduced my space by one over 27, but I’ve only paid the cost for nine classes. I am paying a cost linearly, and I am getting an exponential improvement.”
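The decoding idea can be sketched in a few lines (our illustration, not the Rice/Amazon code): each "world" independently hashes products into a handful of buckets, a per-world classifier only predicts a bucket, and the product is decoded by intersecting the predicted buckets:

```python
import random

random.seed(1)
products = list(range(1000))
R, B = 3, 10  # R worlds (repetitions), B buckets per world

# Independent random bucket assignment for every world.
assign = [{p: random.randrange(B) for p in products} for _ in range(R)]

def decode(bucket_votes):
    """Intersect the predicted buckets across all worlds."""
    candidates = set(products)
    for world, bucket in enumerate(bucket_votes):
        candidates &= {p for p in products if assign[world][p] == bucket}
    return candidates

# Pretend each world's classifier predicts perfectly for product 42:
votes = [assign[world][42] for world in range(R)]
print(decode(votes))  # a tiny candidate set containing 42
```

With 10 buckets and 3 worlds, the search space shrinks by roughly a factor of 10^3 while only 30 classes were trained; this is the linear-cost, exponential-gain trade-off Shrivastava describes.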

In their experiments with Amazon’s training database, Shrivastava, Medini and colleagues randomly divided the 49 million products into 10,000 classes, or buckets, and repeated the process 32 times. That reduced the number of parameters in the model from around 100 billion to 6.4 billion. And training the model took less time and less memory than some of the best reported training times on models with comparable parameters, including Google’s Sparsely-Gated Mixture-of-Experts (MoE) model, Medini said.

He said MACH’s most significant feature is that it requires no communication between parallel processors. In the thought experiment, that is what’s represented by the separate, independent worlds.

“They don’t even have to talk to each other,” Medini said. “In principle, you could train each of the 32 on one GPU, which is something you could never do with a nonindependent approach.”

Source: Researchers report breakthrough in ‘distributed deep learning’

ICANN demands transparency from others over .org deal. As for itself… well, not so much

Three weeks after the Internet Society announced the controversial sale of the .org internet registry to an unknown private equity firm, the organization that has to sign off on the deal has finally spoken publicly.

In a letter [PDF] titled “Transparency” from the general counsel of domain name system overseer ICANN to the CEOs of the Internet Society (ISOC) and .org registry operator PIR, the organization takes issue with how the proposed sale has been handled and notes that it is “uncomfortable” at the lack of transparency.

The letter, dated Monday and posted today with an accompanying blog post, notes that ICANN will be sending a “detailed request for additional information” and encourages the organizations “to answer these questions fully and as transparently as possible.”

As ICANN’s chairman previously told The Register, the organization received an official request in mid-November to change ownership of PIR from ISOC to Ethos Capital, but PIR denied ICANN’s request to make that document public.

The letter presses ISOC/PIR to make that request public. “While PIR has previously declined our request to publish the Request, we urge you to reconsider,” the letter states. “We also think there would be great value for us to publish the questions that you are asked and your answers to those questions.”

Somewhat unusually it repeats the same point a second time: “In light of the level of interest in the recently announced acquisition of PIR, both within the ICANN community and more generally, we continue to believe that it is critical that your Request, and the questions and answers in follow up to the Request, and any other related materials, be made Public.”

Third time lucky

And then, stressing the same point a third time, the letter notes that on a recent webinar about the sale organized by concerned non-profits that use .org domains, ISOC CEO Andrew Sullivan said he wasn’t happy about the level of secrecy surrounding the deal.

From the ICANN letter: “As you, Andrew, ISOC’s CEO stated publicly during a webcast meeting… you are uncomfortable with the lack of transparency. Many of us watching the communications on this transaction are also uncomfortable.

“In sum, we again reiterate our belief that it is imperative that you commit to completing this process in an open and transparent manner, starting with publishing the Request and related material, and allowing us to publish our questions to you, and your full Responses.”

Here is what Sullivan said on the call [PDF]: “I do appreciate, however, that this creates a level of uncertainty, because people are uncomfortable with things that are done in secret like that. I get it. I can have the same reaction when I’m not included in a decision, but that is the reason we have trustees. That’s the reason that we have our trustees selected by our community. And I believe that we made the right decision.”

As ICANN noted, there remain numerous questions over the proposed sale despite both ISOC and Ethos Capital holding meetings with concerned stakeholders, and ISOC’s CEO agreeing to an interview with El Reg.

One concerned .org owner is open-source organization Mozilla, which sent ICANN a letter noting that it “remains concerned that the nature of the modified contractual agreement between ICANN and the registry does not contain sufficient safeguards to ensure that the promises we hear today will be kept.”

It put forward a series of unanswered questions that it asked ICANN to request of PIR. They include [PDF] questions over the proposed “stewardship council” that Ethos Capital has said it will introduce to make sure the rights of .org domain holders are protected, including its degree of independence; what assurances there are that Ethos Capital will actually stick to its implied promise that it won’t increase .org prices by more than 10 per cent per year; and details around its claim that PIR will become a so-called B Corp – a designation that for-profit companies can apply for if they wish to indicate a wider public interest remit.

Connections

While those questions dig into the future running of the .org registry, they do not dig into the unusual connections between the CEOs of ISOC, PIR and Ethos Capital, as well as their advisors.

The CEO of ISOC, Andrew Sullivan, worked for a company called Afilias between 2002 and 2008. It was Afilias that persuaded ISOC to apply to run the .org registry in the first place and Sullivan is credited with writing significant parts of its final application. Afilias has run the .org back-end since 2003. Sullivan became ISOC CEO in June 2018.

The CEO of PIR, Jonathon Nevett, took over the job in December 2018. Immediately prior to that, he was Executive VP for a registry company called Donuts, which he also co-founded. Donuts was sold in September 2018 to a private equity company called Abry Partners.

At Abry Partners at the time was Eric Brooks, who left the company after 20 years at some point in 2019 to become the CEO of Ethos Capital – the company purchasing PIR. Also at Abry Partners at the time was Fadi Chehade, a former CEO of ICANN. Chehade is credited as being a “consultant” over the sale of PIR to Ethos Capital but records demonstrate that Chehade registered its domain name – ethoscapital.com – personally.

Chehade is also thought to have personally registered Ethos Capital as a Delaware corporation on May 14 this year: an important date because it was the day after his former organization, ICANN, indicated it was going to approve the lifting of price caps on .org domains, against the strong opposition of the internet community.

Now comes the ICA

As well as Mozilla’s questions, there is another series of questions [PDF] over the sale from the Internet Commerce Association (ICA) that are pointed at ICANN itself.

Those questions focus on the timeline of information: what ICANN knew about the proposed sale and when; and whether it was aware of the intention to sell PIR when it approved lifting price caps on the .org registry.

It also asked various governance questions about ICANN including why the renewed .org contract was not approved by the ICANN board, the involvement of former ICANN executives, including Chehade and former senior vice president Nora Abusitta-Ouri who is “chief purpose officer” of Ethos Capital, and what policies ICANN has in place over “cooling off periods” for former execs.

While going out of its way to criticize ISOC and PIR for their lack of transparency and while claiming in the letter to ISOC that “transparency is a cornerstone of ICANN and how ICANN acts to protect the public interest while performing its role,” ICANN has yet to answer questions over its own role.

Source: ICANN demands transparency from others over .org deal. As for itself… well, not so much • The Register

On-Device, Real-Time Hand Tracking with MediaPipe

The ability to perceive the shape and motion of hands can be a vital component in improving the user experience across a variety of technological domains and platforms. For example, it can form the basis for sign language understanding and hand gesture control, and can also enable the overlay of digital content and information on top of the physical world in augmented reality. While coming naturally to people, robust real-time hand perception is a decidedly challenging computer vision task, as hands often occlude themselves or each other (e.g. finger/palm occlusions and hand shakes) and lack high contrast patterns.

Today we are announcing the release of a new approach to hand perception, which we previewed at CVPR 2019 in June, implemented in MediaPipe—an open source cross-platform framework for building pipelines to process perceptual data of different modalities, such as video and audio. This approach provides high-fidelity hand and finger tracking by employing machine learning (ML) to infer 21 3D keypoints of a hand from just a single frame.

Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.

3D hand perception in real-time on a mobile phone via MediaPipe. Our solution uses machine learning to compute 21 3D keypoints of a hand from a video frame. Depth is indicated in grayscale.

An ML Pipeline for Hand Tracking and Gesture Recognition

Our hand tracking solution utilizes an ML pipeline consisting of several models working together:

  • A palm detector model (called BlazePalm) that operates on the full image and returns an oriented hand bounding box.
  • A hand landmark model that operates on the cropped image region defined by the palm detector and returns high fidelity 3D hand keypoints.
  • A gesture recognizer that classifies the previously computed keypoint configuration into a discrete set of gestures.

This architecture is similar to that employed by our recently published face mesh ML pipeline and that others have used for pose estimation. Providing the accurately cropped palm image to the hand landmark model drastically reduces the need for data augmentation (e.g. rotations, translation and scale) and instead allows the network to dedicate most of its capacity towards coordinate prediction accuracy.
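The three-stage structure can be sketched as plain Python. The functions below are placeholders standing in for the MediaPipe models, not the models themselves:

```python
from dataclasses import dataclass

@dataclass
class HandResult:
    bbox: tuple      # oriented hand bounding box from the palm detector
    keypoints: list  # 21 (x, y, z) keypoints from the landmark model
    gesture: str     # discrete label from the gesture recognizer

def detect_palm(frame):
    """Stand-in for BlazePalm: return one dummy bounding box."""
    return (0, 0, 64, 64)

def predict_landmarks(frame, bbox):
    """Stand-in for the landmark model: 21 dummy 3D keypoints for the crop."""
    return [(0.0, 0.0, 0.0)] * 21

def classify_gesture(keypoints):
    """Stand-in recognizer: real code would test finger joint states."""
    return "open_palm"

def hand_pipeline(frame):
    # Stage 1: palm detection on the full frame.
    bbox = detect_palm(frame)
    # Stage 2: landmarks on the cropped region the detector returned.
    keypoints = predict_landmarks(frame, bbox)
    # Stage 3: map the keypoint configuration to a discrete gesture.
    return HandResult(bbox, keypoints, classify_gesture(keypoints))

result = hand_pipeline(frame=None)
print(len(result.keypoints), result.gesture)  # 21 open_palm
```

The point of the staged design is that each model only solves the narrow problem the previous stage has set up for it, which is what lets the landmark network spend its capacity on coordinate accuracy rather than invariance.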

Hand perception pipeline overview.

BlazePalm: Realtime Hand/Palm Detection

To detect initial hand locations, we employ a single-shot detector model called BlazePalm, optimized for mobile real-time uses in a manner similar to BlazeFace, which is also available in MediaPipe. Detecting hands is a decidedly complex task: our model has to work across a variety of hand sizes with a large scale span (~20x) relative to the image frame and be able to detect occluded and self-occluded hands. Whereas faces have high contrast patterns, e.g., in the eye and mouth region, the lack of such features in hands makes it comparatively difficult to detect them reliably from their visual features alone. Instead, providing additional context, like arm, body, or person features, aids accurate hand localization.

Our solution addresses the above challenges using different strategies. First, we train a palm detector instead of a hand detector, since estimating bounding boxes of rigid objects like palms and fists is significantly simpler than detecting hands with articulated fingers. In addition, as palms are smaller objects, the non-maximum suppression algorithm works well even for two-hand self-occlusion cases, like handshakes. Moreover, palms can be modelled using square bounding boxes (anchors in ML terminology), ignoring other aspect ratios and therefore reducing the number of anchors by a factor of 3-5. Second, an encoder-decoder feature extractor is used for bigger scene context awareness even for small objects (similar to the RetinaNet approach). Lastly, we minimize the focal loss during training to support the large number of anchors resulting from the high scale variance.

With the above techniques, we achieve an average precision of 95.7% in palm detection. Using a regular cross entropy loss and no decoder gives a baseline of just 86.22%.
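The focal loss referenced here is the one introduced with RetinaNet (Lin et al.). For a single binary prediction it can be written in a few lines; the point is that it down-weights easy examples so the enormous number of background anchors does not swamp training:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for one binary prediction.

    p: predicted probability of the positive class; y: true label (1 or 0).
    gamma scales down well-classified examples; alpha balances the classes.
    """
    p_t = p if y == 1 else 1 - p
    a_t = alpha if y == 1 else 1 - alpha
    return -a_t * (1 - p_t) ** gamma * math.log(p_t)

# An easy, well-classified anchor contributes almost nothing...
easy = focal_loss(0.99, 1)
# ...while a hard, misclassified one dominates the loss.
hard = focal_loss(0.01, 1)
print(easy < 1e-4 < hard)  # True
```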

Source: Google AI Blog: On-Device, Real-Time Hand Tracking with MediaPipe

Uninstall AVAST and AVG free anti-virus: they are massively slurping your data! Mozilla and Opera have removed them from their stores

Two browser makers have yanked Avast and AVG online security extensions from their web stores after a report revealed that they were unnecessarily sucking up a ton of data about users’ browsing history.

Wladimir Palant, the creator behind Adblock Plus, initially surfaced the issue—which extends to Avast Online Security and Avast SafePrice as well as Avast-owned AVG Online Security and AVG SafePrice extensions—in a blog post back in October, but this week flagged the issue to the browser makers themselves. In response, both Mozilla and Opera yanked the extensions from their stores. However, as of Wednesday, the extensions curiously remained in Google’s Chrome Web Store.

Using dev tools to examine network traffic, Palant was able to determine that the extensions were collecting an alarming amount of data about users’ browsing history and activity, including URLs, where you navigated from, whether the page was visited in the past, the version of browser you’re using, country code, and, if the Avast Antivirus is installed, the OS version of your device, among other data. Palant argued the data collection far exceeded what was necessary for the extensions to perform their basic jobs.

Source: Avast and AVG Plugins Reportedly Doing Some Shady Data Collection

NASA to launch projectile (DART) to see if it can deflect asteroids

DART is a planetary defense-driven test of technologies for preventing a hazardous asteroid from impacting Earth. DART will be the first demonstration of the kinetic impactor technique to change the motion of an asteroid in space. The DART mission is in Phase C, led by APL and managed under NASA’s Solar System Exploration Program at Marshall Space Flight Center for NASA’s Planetary Defense Coordination Office and the Science Mission Directorate’s Planetary Science Division at NASA Headquarters in Washington, DC.

DART Spacecraft Bus
Two different views of the DART spacecraft. The DRACO (Didymos Reconnaissance & Asteroid Camera for OpNav) imaging instrument is based on the LORRI high-resolution imager from New Horizons. The left view also shows the Radial Line Slot Array (RLSA) antenna with the ROSAs (Roll-Out Solar Arrays) rolled up. The view on the right shows a clearer view of the NEXT-C ion engine.

The binary near-Earth asteroid (65803) Didymos is the target for the DART demonstration. While the Didymos primary body is approximately 780 meters across, its secondary body (or “moonlet”) is about 160 meters in size, which is more typical of the size of asteroids that could pose the most likely significant threat to Earth. The Didymos binary is being intensely observed using telescopes on Earth to precisely measure its properties before DART arrives.

Didymos and its moonlet
Fourteen sequential Arecibo radar images of the near-Earth asteroid (65803) Didymos and its moonlet, taken on 23, 24 and 26 November 2003. NASA’s planetary radar capabilities enable scientists to resolve shape, concavities, and possible large boulders on the surfaces of these small worlds. Photometric lightcurve data indicated that Didymos is a binary system, and radar imagery distinctly shows the secondary body.
Didymos system
Simulated image of the Didymos system, derived from photometric lightcurve and radar data. The primary body is about 780 meters in diameter and the moonlet is approximately 160 meters in size. They are separated by just over a kilometer. The primary body rotates once every 2.26 hours while the tidally locked moonlet revolves about the primary once every 11.9 hours. Almost one sixth of the known near-Earth asteroid (NEA) population are binary or multiple-body systems.
Credits: Naidu et al., AIDA Workshop, 2016
DART spacecraft with the Roll Out Solar Arrays (ROSA)
Illustration of the DART spacecraft with the Roll Out Solar Arrays (ROSA) extended. Each of the two ROSA arrays is 8.6 meters by 2.3 meters.

The DART spacecraft will achieve the kinetic impact deflection by deliberately crashing itself into the moonlet at a speed of approximately 6.6 km/s, with the aid of an onboard camera (named DRACO) and sophisticated autonomous navigation software. The collision will change the speed of the moonlet in its orbit around the main body by a fraction of one percent, but this will change the orbital period of the moonlet by several minutes – enough to be observed and measured using telescopes on Earth.
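
A quick sanity check of those numbers, assuming a circular orbit. The ~1.18 km orbital radius is an assumed figure consistent with the “just over a kilometer” separation above, and 0.1% stands in for “a fraction of one percent”:

```python
import math

a = 1180.0           # moonlet orbital radius in metres (assumed)
T = 11.9 * 3600.0    # orbital period in seconds (11.9 h, from the article)

v = 2.0 * math.pi * a / T   # circular orbital speed: roughly 0.17 m/s

# For a circular orbit, Kepler gives T proportional to a**1.5 and
# v proportional to a**-0.5, so a fractional speed change dv/v shifts
# the period by |dT/T| = 3 * dv/v.
dv_over_v = 0.001           # a 0.1% change in orbital speed
dT = 3.0 * dv_over_v * T    # about 129 seconds, i.e. roughly two minutes
```

So even a 0.1% change to the moonlet’s ~0.17 m/s orbital speed shifts its period by about two minutes, in line with the “several minutes” the article says ground-based telescopes can measure.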

Once launched, DART will deploy Roll Out Solar Arrays (ROSA) to provide the solar power needed for DART’s electric propulsion system.  The DART spacecraft will demonstrate the NASA Evolutionary Xenon Thruster – Commercial (NEXT-C) solar electric propulsion system as part of its in-space propulsion.  NEXT-C is a next-generation system based on the Dawn spacecraft propulsion system, and was developed at NASA’s Glenn Research Center in Cleveland, Ohio.  By utilizing electric propulsion, DART could benefit from significant flexibility to the mission timeline while demonstrating the next generation of ion engine technology, with applications to potential future NASA missions.

the ROSA array on the ISS
The ROSA array was tested on board the International Space Station (ISS) in June 2017.

The DART spacecraft launch window begins in late July 2021.  DART will launch aboard a SpaceX Falcon 9 rocket from Vandenberg Air Force Base, California. After separation from the launch vehicle and over a year of cruise it will intercept Didymos’ moonlet in late September 2022, when the Didymos system is within 11 million kilometers of Earth, enabling observations by ground-based telescopes and planetary radar to measure the change in momentum imparted to the moonlet.

Source: Double Asteroid Redirection Test (DART) Mission | NASA

Bol.com partner Toppie Speelgoed loses 10,000 Belgian and Dutch customer records, now for sale on hacker forum

The exposed records include personal information, what people bought, and where it was delivered.

The data of what is believed to be nearly 10,000 Belgian and Dutch customers who bought toys online a few years ago is being offered for sale on the internet by a hacker, an investigation by VRT NWS has found. The file contains personal details and certain purchases. The vast majority of the products were bought from a local Dutch business via, among other channels, the webshop Bol.com, which immediately opened an investigation into the vendor where the leak turned out to be.

The file with customer data is being offered on a specialized hacker forum, where the fraudster claims to have a “bol.com database.”

The file shows what people bought, their first and last names, and sometimes what the purchase cost. Delivery details are also included, as is the payment method people chose, such as a credit card or Bancontact.

Leak at Toppie Speelgoed, an external partner of Bol.com

The investigation confirms that the file does indeed contain purchase data from people who bought toys via Bol.com. After contact with Bol.com and an internal investigation at the webshop itself, the data leak turns out to be at a Bol.com partner that sells toys on bol.com and its own webshops, among other places: Toppie Speelgoed. Those who bought directly from Toppie Speelgoed also appear in the list with their email address and phone number, if those were provided at purchase; those who bought via Bol.com appear only with their name and delivery address, because Bol.com sends only limited data to external partners.

Source: Belgische en Nederlandse klantengegevens van speelgoedwinkel online te koop | VRT NWS

Budget Energie and NLE leak 29,000 customer records – names, addresses, possibly phone numbers and bank accounts

The personal data of possibly 29,000 customers of the energy companies Budget Energie and NLE is out in the open. Besides names and addresses, phone numbers and bank account numbers may also have been leaked. The data was not leaked by accident; according to the company, it was a deliberate theft.

Parent company Nuts Groep informed customers of Budget Energie and NLE of the data breach by email this morning. According to the company, it was not a software leak but “unauthorized access” to contract data.

Police investigation

Possibly 29,000 of the energy companies’ 700,000 customers in total are affected. “A police investigation has been started. As long as it is ongoing, we will not comment on the cause of the leak or the number of people involved,” Babette Huberts, head of legal at Nuts Groep, told RTL Z. Huberts also would not say how the leak was discovered.

Later in the day, Huberts confirmed that it was a deliberate act.

Source: Datadiefstal bij Budget Energie en NLE: mogelijk 29.000 klanten geraakt | RTLZ

Reddit Uncovers Russian Interference Campaign Ahead of Pivotal UK Election

Fears of Russian interference ahead of a heated U.K. election were all but confirmed this week with a Reddit post.

In a post Friday, Reddit announced that its internal investigation found evidence that an account purportedly linked to a Russian disinformation campaign was behind last month’s leak of contentious US-UK trade documents on the platform.

“We were recently made aware of a post on Reddit that included leaked documents from the UK. We investigated this account and the accounts connected to it, and today we believe this was part of a campaign that has been reported as originating from Russia,” Reddit wrote.

The online message board went on to say it banned 61 accounts and suspended one subreddit, r/ukwhistleblower, behind the campaign for violating the platform’s policies against vote manipulation and misuse. Reddit also purportedly found evidence linking this operation to another group behind similar foreign interference on Facebook earlier this year. The Atlantic Council dubbed that group “Secondary Infektion” in reference to a misinformation campaign from the Soviet era.

“Suspect accounts on Reddit were recently reported to us, along with indicators from law enforcement, and we were able to confirm that they did indeed show a pattern of coordination,” Reddit said. “We were then able to use these accounts to identify additional suspect accounts that were part of the campaign on Reddit. This group provides us with important attribution for the recent posting of the leaked UK documents, as well as insights into how adversaries are adapting their tactics.”

The account behind the original Reddit leak, as well as a number of others that reposted the documents and manipulated their upvotes and karma (ways to earn a post more prominent placement in a subreddit), all used tactics identical to Secondary Infektion’s, according to Reddit, “causing us to believe that this was indeed tied to the original group.”

The papers in question detail trade talks between America and the UK and have launched a fiery debate among British officials leading up to the country’s general election. Labour Party leader Jeremy Corbyn claims these documents prove the country’s National Health Service (NHS) is at risk of being privatized in the event of a post-Brexit trade agreement with America. Prime Minister Boris Johnson has denied this, saying the NHS wouldn’t be on the table in any future trade negotiations.

This isn’t the first time Reddit’s struggled with sussing out foreign propaganda campaigns on its platform. Russian influence operations have become a particularly insidious and recurring problem, leading Reddit to ban 944 “suspicious” accounts in April 2018 after purportedly tracing them back to Russia’s Internet Research Agency (IRA), the infamous troll factory behind pro-Trump efforts during the 2016 presidential campaign.

Later that September, Reddit users began to speculate that the notoriously awful (and now, thankfully, quarantined) subreddit r/The_Donald had become infiltrated by Russian trolls as well. Suspicions began circulating among its three-quarters of a million subscribers after a viral post documented clear signs of a pattern: The same few articles from websites affiliated with the IRA were being upvoted and shared in the forum thousands of times, and it’d been going on for years, according to a Buzzfeed News report. Reddit later issued a platform-wide ban for three of the trolls’ most commonly linked websites, USA Really, GEOTUS.band and GEOTUS.army.

A separate investigation Reddit launched around that same time uncovered 143 accounts linked to another influence operation reportedly targeting polarized subreddits on both sides of the aisle with pro-Iranian political narratives. Reddit began its inquiry after cybersecurity group FireEye released a report detailing just how far the campaign’s influence spanned, as bad actors were purportedly “leveraging a network of inauthentic news sites and clusters of associated accounts across multiple social media platforms.” Based on these findings, Facebook, Twitter, and Google also subsequently removed a bevy of accounts affiliated with Iran and Russia on their respective platforms.

Source: Reddit Uncovers Russian Interference Campaign Ahead of Pivotal UK Election

Why Are Cops Around the World Using This Outlandish Mind-Reading Tool That Doesn’t Work?

ProPublica has determined that dozens of state and local agencies have purchased “SCAN” training from a company called LSI for reviewing a suspect’s written statements — even though there’s no scientific evidence that it works. Local, state and federal agencies from the Louisville Metro Police Department to the Michigan State Police to the U.S. State Department have paid for SCAN training. The LSI website lists 417 agencies nationwide, from small-town police departments to the military, that have been trained in SCAN — and that list isn’t comprehensive, because additional ones show up in procurement databases and in public records obtained by ProPublica. Other training recipients include law enforcement agencies in Australia, Belgium, Canada, Israel, Mexico, the Netherlands, Singapore, South Africa and the United Kingdom, among others…

For Avinoam Sapir, the creator of SCAN, sifting truth from deception is as simple as one, two, three.

1. Give the subject a pen and paper.
2. Ask the subject to write down his/her version of what happened.
3. Analyze the statement and solve the case.

Those steps appear on the website for Sapir’s company, based in Phoenix. “SCAN Unlocks the Mystery!” the homepage says, alongside a logo of a question mark stamped on someone’s brain. The site includes dozens of testimonials with no names attached. “Since January when I first attended your course, everybody I meet just walks up to me and confesses!” one says. [Another testimonial says “The Army finally got its money’s worth…”] SCAN saves time, the site says. It saves money. Police can fax a questionnaire to a hundred people at once, the site says. Those hundred people can fax it back “and then, in less than an hour, the investigator will be able to review the questionnaires and solve the case.”
In 2009 the U.S. government created a special interagency task force, drawing on the FBI, CIA and the U.S. Department of Defense, to review scientific studies and independently investigate which interrogation techniques worked. “When all 12 SCAN criteria were used in a laboratory study, SCAN did not distinguish truth-tellers from liars above the level of chance,” the review said, also challenging two of the method’s 12 criteria. “Both gaps in memory and spontaneous corrections have been shown to be indicators of truth, contrary to what is claimed by SCAN.”
In a footnote, the review identified three specific agencies that use SCAN: the FBI, CIA and U.S. Army military intelligence, which falls under the Department of Defense…

In 2016, the same year the federal task force released its review of interrogation techniques, four scholars published a study on SCAN in the journal Frontiers in Psychology. The authors — three from the Netherlands, one from England — noted that there had been only four prior studies in peer-reviewed journals on SCAN’s effectiveness. Each of those studies (in 1996, 2012, 2014 and 2015) concluded that SCAN failed to help discriminate between truthful and fabricated statements. The 2016 study found the same. Raters trained in SCAN evaluated 234 statements — 117 true, 117 false. Their results in trying to separate fact from fiction were about the same as chance….

Steven Drizin, a Northwestern University law professor who specializes in wrongful convictions, said SCAN and assorted other lie-detection tools suffer from “over-claim syndrome” — big claims made without scientific grounding. Asked why police would trust such tools, Drizin said: “A lot has to do with hubris — a belief on the part of police officers that they can tell when someone is lying to them with a high degree of accuracy. These tools play in to that belief and confirm that belief.”
SCAN’s creator “declined to be interviewed for this story,” but ProPublica spoke to some users of the technique. Travis Marsh, the head of an Indiana sheriff’s department, has been using the tool for nearly two decades, while acknowledging that he can’t explain how it works. “It really is, for lack of a better term, a faith-based system because you can’t see behind the curtain.”

ProPublica also reports that “Years ago his wife left a note saying she and the kids were off doing one thing, whereas Marsh, analyzing her writing, could tell they had actually gone shopping. His wife has not left him another note in at least 15 years…”

Source: ‘Why Are Cops Around the World Using This Outlandish Mind-Reading Tool?’ – Slashdot

Most People Experiencing Homelessness Have Had a Traumatic Brain Injury, Study Finds

The study, published in Lancet Public Health on Monday, is a review of existing research into how common traumatic brain injuries are, specifically including studies that also took people’s housing situation into account. These studies involved more than 11,000 people who were fully or partially homeless at the time and living in the U.S., UK, Japan, or Canada. And 26 of the 38 originally reviewed studies were included in a deeper meta-analysis.

Taken as a whole, the review found that around 53 percent of homeless people had experienced a traumatic brain injury (TBI) at some time in their lives. Among people who reported how seriously they had been hurt, about a quarter had experienced a moderate to severe head injury. Compared to the average person, the authors noted, homeless people are over twice as likely to have experienced any sort of head injuries and nearly 10 times as likely to have had a moderate to severe one.

“TBI is prevalent among homeless and marginally housed individuals and might be a common factor that contributes to poorer health and functioning than in the general population,” the researchers wrote.

Source: Most People Experiencing Homelessness Have Had a Traumatic Brain Injury, Study Finds

Sundar Pichai Becomes Alphabet CEO, Larry Page to Step Back

Google CEO Sundar Pichai is adding another responsibility to his job: Pichai will also be the CEO of parent holding company Alphabet going forward, taking the helm from co-founder and longtime CEO Larry Page.

Additionally, co-founder Sergey Brin will be resigning from his post as the president of Alphabet. Brin and Page jointly announced the leadership change in a blog post Tuesday afternoon, writing:

“Alphabet and Google no longer need two CEOs and a President. Going forward, Sundar will be the CEO of both Google and Alphabet. He will be the executive responsible and accountable for leading Google, and managing Alphabet’s investment in our portfolio of Other Bets.”

“We are deeply committed to Google and Alphabet for the long term, and will remain actively involved as Board members, shareholders and co-founders. In addition, we plan to continue talking with Sundar regularly, especially on topics we’re passionate about,” the duo wrote.

Pichai has been with Google since 2004, and oversaw several of the company’s key products before becoming CEO of Google in 2015 when the search giant reorganized its corporate structure.

Source: Sundar Pichai Becomes Alphabet CEO, Larry Page to Step Back – Variety

Using a LimeSDR to spoof a fake location to any GPS device inside the room

This page details experiences using LimeSDR to simulate GPS.
Note, update (Aug 15, 2017) – The center frequency below should be corrected to 1575.42MHz. It would marginally work with the original 1545.42, but 1575.42 gives rock-solid GPS sim performance.

These experiments were inspired by the excellent procedure written up here [1]. We want to use a similar process to target real devices, and have had luck with a QStarz 818XT Bluetooth GPS device, and a Galaxy S4 after using airplane mode, a restart and patience. The coverage area is at least a room, even with -42dB PAD attenuation. Here I am visiting Trinity College Cambridge with the QStarz and its app.

2 Setup

Software to git clone – https://github.com/osqzss/gps-sdr-sim
Follow the instructions on the GitHub page for how to compile; it is a very easy procedure on Ubuntu with the build-essential package installed.

$ gcc gpssim.c -lm -O3 -o gps-sdr-sim

Note there is a setting in gpssim.h, USER_MOTION_SIZE (default 3000), which caps the maximum duration at 10MHz to 300 seconds. You can increase it to 6000 or more to get longer default running times.
The default sample rate for gps-sdr-sim is 2.6e6, in 16-bit I/Q data format. LimeSDR is known to work with 10e6, and 8-bit interleaved I/Q data format converted to complex float in the graph. That is too slow to generate in real time, depending on your CPU, so one strategy is to create an rf data file non-realtime and then transmit that with a simple gnuradio python script created in gnuradio-companion. The gps-fake-out project [2] links to a grc file, or it’s easy to create your own. That example project simultaneously transmits the rf data file and also collects rf data for later analysis with Matlab and SoftGNSS. I found it useful to replace the file sink with an FFT display, slightly offset, with a 20e6 input rate.

The last puzzle piece needed is ephemeris data to feed gps-sdr-sim (required), in RINEX v2 format (read all about it here [3], especially the file name format). There is a global network of International GNSS Service installations [4] providing up-to-date data, which may be accessed with anonymous FTP from the Goddard Space Flight Center:

ftp -p cddis.gsfc.nasa.gov

Log in as ‘anonymous’ with your email address as the password. Use the merged GPS broadcast ephemeris file found in /pub/gps/data/daily/2017/brdc/. The filename convention is

'brdc' + <3 digit day of year> + '0.' +  <2 digit year> + 'n.Z' 

‘n’ is for GPS (don’t get the ‘g’ files; that is GLONASS), and ‘Z’ for compressed. The day of year can be found with

$ date +%j

Get yesterday’s file – for example, today, Feb 28, 2017, I would get ‘brdc0580.17n.Z’, then uncompress it:

$ uncompress brdc0580.17n.Z
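
The filename convention can be sketched as a small Python helper (my own, hypothetical, not part of gps-sdr-sim):

```python
from datetime import date

def brdc_filename(d):
    # 'brdc' + <3 digit day of year> + '0.' + <2 digit year> + 'n.Z'
    return "brdc{:03d}0.{:02d}n.Z".format(d.timetuple().tm_yday, d.year % 100)

# "Get yesterday's": on Feb 28, 2017 you would fetch the file for Feb 27,
# which is day of year 058.
name = brdc_filename(date(2017, 2, 27))   # 'brdc0580.17n.Z'
```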

Pick a place – All you need now is a location to go. Google Maps is good for entering latitude,longitude and seeing where it lands; or pick a spot, right-click, choose “Directions to here”, and a little URL hacking gets you the coordinates, like 1.8605853,73.5213033 for a spot in the Maldives.

To do: use gps-sdr-sim with a user motion file instead of a static location; there is even support for Google Earth and SatGen software.

3 Execution

Get ready to host some large files, ranging from 5 to 20GB in size, if going with a larger USER_MOTION_SIZE full duration and/or trying 16-bit samples. Create the rf data file, using 10e6 samples per second in interleaved 8-bit I/Q sample format, with the day-of-year 059 merged broadcast ephemeris file:

$ ./gps-sdr-sim -e brdc0590.17n -l 1.8605853,73.5213033,5 -t 2017/02/28,22:00:00 -o gpssim_10M.s8 -s 10e6 -b 8 -v
Using static location mode.
     9.313e-09    0.000e+00   -5.960e-08    0.000e+00
     9.011e+04    0.000e+00   -1.966e+05    0.000e+00
     1.86264514923e-09   1.77635683940e-15     319488      1938
    18
Start time = 2017/02/28,22:00:00 (1938:252000)
Duration = 600.0 [sec]
02   78.1   5.0  25142702.4   4.5
04  305.9  10.6  24630434.2   4.0
10  244.0  20.9  23656748.6   3.2
12  174.6  31.9  22801339.9   2.6
13   59.8  27.2  23001942.1   2.8
15   80.1  60.3  20615340.0   1.7
18  273.8  42.7  21969027.9   2.1
20    3.4  36.7  22141445.5   2.3
21  322.3  14.4  24860118.2   3.7
24  152.1  21.2  23574508.7   3.2
25  227.1  49.6  21537006.8   1.9
26  310.2   0.2  25799081.3   5.1
29    2.7  52.0  21259731.6   1.8
32  211.7   0.4  25733242.7   5.0
Time into run =  1.6

then get some coffee – it’s a slow single-threaded process, which is why we have to create a data file and then transmit it instead of broadcasting in real time. When done, make sure your gnuradio-companion graph is set up with the right source filename, data types, sink driver, antenna, etc. Anything mismatched can cause it to frustratingly run but not work.

 # Excerpt from the generated top_block.py: stream the 8-bit rf data file,
 # convert interleaved I/Q bytes to complex floats, and feed the LimeSDR
 # through the osmocom sink on the BAND1 TX antenna port.
 self.blocks_file_source_0 = blocks.file_source(gr.sizeof_char*1, "/home/chuck/src/gps-sdr-sim/gpssim_10M.s8", False)
 self.blocks_interleaved_char_to_complex_0 = blocks.interleaved_char_to_complex(False)
 self.osmosdr_sink_0 = osmosdr.sink(args="numchan=" + str(1) + " " + "device=soapy,lime=0")
 self.osmosdr_sink_0.set_antenna("BAND1", 0)

Then click the run button, or create top_block.py and run it on the command line, and your simulated GPS broadcast should be visible to devices a few inches away from the antenna. You can play with various gain settings in the sink block – it looks like a setting of ‘0’ sets the power amp driver to -52 dB attenuation, and with a setting of 10 you get -42 dB:

 [INFO] SoapyLMS7::setGain(Tx, 0, PAD, -42 dB)

4 Results

Now, with emissions in progress, try various devices and experience the wonders of RF: distance, position, orientation, how you hold your hand, etc. can all affect the SNR. It may take some trickery, as many receivers have built-in processes to speed up signal lock, such as obtaining their own ephemeris. For the Galaxy S4 smartphone I put it in airplane mode, restarted, opened the GpsTEST app, and although it found many satellites very fast, it took a long time to actually get a fix. I also found the QStarz SNR jumped considerably when a hand is placed slightly behind it.
Anyway, here are the screenshots of simulating the Maldives location created above, using the QStarz app.

Source: GPS Simulation – Myriad-RF Wiki

All new cell phone users in China must now have their face scanned, as do all US citizens entering or leaving the US (as well as all non-US citizens)

Customers in China who buy SIM cards or register new mobile-phone services must have their faces scanned under a new law that came into effect yesterday. China’s government says the new rule, which was passed into law back in September, will “protect the legitimate rights and interest of citizens in cyberspace.”

A controversial step: It can be seen as part of an ongoing push by China’s government to make sure that people use services on the internet under their real names, thus helping to reduce fraud and boost cybersecurity. On the other hand, it also looks like part of a drive to make sure every member of the population can be surveilled.

How do Chinese people feel about it? It’s hard to say for sure, given how strictly the press and social media are regulated, but there are hints of growing unease over the use of facial recognition technology within the country. From the outside, there has been a lot of concern over the role the technology will play in the controversial social credit system, and how it’s been used to suppress Uighur Muslims in the western region of Xinjiang.

Source: All new cell phone users in China must now have their face scanned – MIT Technology Review

Homeland Security wants to expand facial recognition checks for travelers arriving to and departing from the U.S. to also include citizens, who had previously been exempt from the mandatory checks.

In a filing, the department has proposed that all travelers, not just foreign nationals or visitors, will have to complete a facial recognition check not only before they are allowed to enter the U.S., but also before they leave the country.

Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the largest 20 airports in the United States, despite facing a rash of technical challenges.

But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.

Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.

“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.

“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.

“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.

Citing a data breach of close to 100,000 license plate and traveler images in June, as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.

Source: DHS wants to expand airport face recognition scans to include US citizens

Vulnerability in fully patched Android phones under active attack by bank thieves – watch out for permissions being asked from apps you have installed

A vulnerability in millions of fully patched Android phones is being actively exploited by malware that’s designed to drain the bank accounts of infected users, researchers said on Monday.

The vulnerability allows malicious apps to masquerade as legitimate apps that targets have already installed and come to trust, researchers from security firm Promon reported in a post. Running under the guise of trusted apps already installed, the malicious apps can then request permissions to carry out sensitive tasks, such as recording audio or video, taking photos, reading text messages or phishing login credentials. Targets who click yes to the request are then compromised.

Researchers with Lookout, a mobile security provider and a Promon partner, reported last week that they found 36 apps exploiting the spoofing vulnerability. The malicious apps included variants of the BankBot banking trojan. BankBot has been active since 2017, and apps from the malware family have been caught repeatedly infiltrating the Google Play Market.

The vulnerability is most serious in versions 6 through 10, which (according to Statista) account for about 80% of Android phones worldwide. Attacks against those versions allow malicious apps to ask for permissions while posing as legitimate apps. There’s no limit to the permissions these malicious apps can seek. Access to text messages, photos, the microphone, camera, and GPS are some of the permissions that are possible. A user’s only defense is to click “no” to the requests.

An affinity for multitasking

The vulnerability is found in a function known as TaskAffinity, a multitasking feature that allows apps to assume the identity of other apps or tasks running in the multitasking environment. Malicious apps can exploit this functionality by setting the TaskAffinity for one or more of their activities to match the package name of a trusted third-party app. By either combining the spoofed activity with the allowTaskReparenting attribute or launching the malicious activity with an Intent.FLAG_ACTIVITY_NEW_TASK, the malicious apps will be placed inside and on top of the targeted task.

“Thus the malicious activity hijacks the target’s task,” Promon researchers wrote. “The next time the target app is launched from Launcher, the hijacked task will be brought to the front and the malicious activity will be visible. The malicious app then only needs to appear like the target app to successfully launch sophisticated attacks against the user. It is possible to hijack such a task before the target app has even been installed.”
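As a sketch of the manifest mechanics Promon describes (the package name "com.example.bank" and the activity name below are illustrative stand-ins, not taken from any real malware), a malicious app can declare an activity whose taskAffinity matches a targeted app and allow it to be reparented into that app’s task:

```xml
<!-- Hypothetical malicious manifest entry; "com.example.bank" stands in
     for the package name of a trusted app the attacker impersonates. -->
<activity
    android:name=".SpoofedLoginActivity"
    android:taskAffinity="com.example.bank"
    android:allowTaskReparenting="true" />
```

The same effect can alternatively be achieved at launch time by starting the spoofed activity with Intent.FLAG_ACTIVITY_NEW_TASK.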

Promon said Google has removed malicious apps from its Play Market, but, so far, the vulnerability appears to be unfixed in all versions of Android. Promon is calling the vulnerability “StrandHogg,” an old Norse term for the Viking tactic of raiding coastal areas to plunder and hold people for ransom. Neither Promon nor Lookout identified the names of the malicious apps. That omission makes it hard for people to know if they are or were infected.

[…]

Suspicious signs include:

  • An app or service that you’re already logged into is asking for a login.
  • Permission popups that don’t contain an app name.
  • An app asking for permissions it should not need; for example, a calculator app asking for GPS permission.
  • Typos and mistakes in the user interface.
  • Buttons and links in the user interface that do nothing when clicked on.
  • Back button does not work as expected.

Source: Vulnerability in fully patched Android phones under active attack by bank thieves | Ars Technica

123Autoit – NonRoot trial – Apps on Google Play

No root is required, but a backend daemon service must be started after every boot. The install package can be found at http://123autoit.blogspot.tw/2016/08/123autoit-non-root-daemon-service.html (update the backend service to version 1.3 to use Speed-up mode). Daemon script install video: https://www.youtube.com/watch?v=awCz9A_FLk0. Both ARM and Intel Android devices are now supported. For installation, settings, or usage problems, contact kevinyiu82@gmail.com or send a Hangout via https://plus.google.com/+kevinyiu82. Video tutorials: https://www.youtube.com/playlist?list=PLp0O8ko3Htr4YcZYXe2pyqG2lARTDqwoD

123AutoIt automates repetitive tasks based on predefined logic (beta stage). Requirements: Android 5+ and 1 GB+ of RAM; it is best run in safe mode. Features: matching conditions trigger taps, swipes, and pauses (drag is still in beta; if you experience any problem, restart and try again using another mode); a repeat number to repeat actions accordingly; on-the-spot validation to quickly examine checkpoint placement; adding, selecting, editing, and removing actions from the logic; multiple profiles; extra controls to change the logic flow; and basic start and stop functions (if there is more than one action within a page, stopping the process may take a few extra presses).
Recent updates: an option to disable auto-rotate in screen capture (to work around landscape capture problems on some devices); an in-app video tutorial; a FloatLayout control panel; an Accumulated Count Click action; Counter Click renamed to Consecutive Counter Click; a fix for duplicate images not displaying; configurable storage locations for settings and validation; magnifying glasses; an ads cache; a WiFi on/off action; a soft-keyboard input bug fix; minor UI adjustments and Edit Mode updates (press back twice in Edit Mode to return to full screen); a fixed Recharge button; a startup version check; an Edit Mode z-index fix; a fix for repeat numbers not saving on Android 5.0+; a locale fix; a Same Page ? Times trigger click action; action and error notifications; and OCR checks.

Quick tips: make sure the device is fully charged and connected to a charger; fan the device, as it produces a lot of heat; lower the backlight; turn on developer mode to show the current click/swipe points; and close other background apps besides 123AutoIt and the target app to keep things stable. On some devices (such as Xiaomi) more steps are needed for the application to work, such as allowing “pop up window”. Bug reports: http://123autoit.blogspot.tw/2016/06/bug-report.html. Tutorial: http://123autoit.blogspot.tw/. OCR uses the open-source Tesseract library and OpenCV.

Source: 123Autoit – NonRoot trial – Apps on Google Play

For automating gaming clicks and anti-afk on Android

This ‘fix’ for economic theory changes everything from gambles to Ponzi schemes, because people adapt their risk-taking to their wealth over time

Whether we decide to take out that insurance policy, buy Bitcoin, or switch jobs, many economic decisions boil down to a fundamental gamble about how to maximize our wealth over time. How we understand these decisions is the subject of a new perspective piece in Nature Physics that aims to correct a foundational mistake in economic theory.

According to author Ole Peters (London Mathematical Laboratory, Santa Fe Institute), people’s real-world behavior often “deviates starkly” from what standard economic theory would recommend.

Take the example of a simple coin toss: Most people would not gamble on a repeated coin toss where a heads would increase their wealth by 50%, but a tails would decrease it by 40%.

“Would you accept the gamble and risk losing at the toss of a coin 40% of your house, car and life savings?” Peters asks, echoing a similar objection raised by Nicolas Bernoulli in 1713.

But early economists would have taken that gamble, at least in theory. In classical economics, the way to approach a decision is to consider all possible outcomes, then average across them. So the coin toss game seems worth playing, because an equal chance of a 50% gain and a 40% loss averages out to a 5% gain.
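The arithmetic behind that classical verdict is easy to check: the ensemble-average multiplier per round is 0.5 × 1.5 + 0.5 × 0.6 = 1.05, a 5% expected gain.

```python
# Ensemble (expected-value) view of one round of the coin-toss gamble:
# heads multiplies wealth by 1.5, tails by 0.6, each with probability 0.5.
p_heads = 0.5
expected_multiplier = p_heads * 1.5 + (1 - p_heads) * 0.6
print(expected_multiplier)  # 1.05, i.e. a 5% average gain per round
```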

Why people don’t choose to play the game, seemingly ignoring the opportunity to gain a steady 5%, has been explained psychologically: people, in the parlance of the field, are “risk averse”. But according to Peters, these explanations don’t really get to the root of the problem, which is that the classical “solution” lacks a fundamental understanding of the individual’s unique trajectory over time.

Instead of averaging across parallel possibilities, Peters advocates an approach that models how an individual’s wealth evolves along a single path through time. In a disarmingly simple example, he randomly multiplies the player’s total wealth by either 150% or 60% depending on the coin toss. That player lives with the gain or loss of each round, carrying it with them to the next turn. As the play time increases, Peters’ model reveals an array of individual trajectories. They all follow unique paths. And in contrast to the classical conception, all paths eventually plummet downward. In other words, the approach reveals a fray of exponential losses where the classical conception would show a single exponential gain.
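Peters’ single-trajectory view is simple to reproduce in simulation: the per-round time-average log-growth is (ln 1.5 + ln 0.6)/2 = ln(0.9)/2 ≈ −0.053, which is negative, so a typical individual path decays even though the ensemble average grows 5% per round. A minimal sketch (the round count, trajectory count, and seed are arbitrary choices, not from the paper):

```python
import math
import random

random.seed(1)  # arbitrary seed, for reproducibility

def play(rounds, wealth=1.0):
    """Follow one player's wealth through repeated multiplicative coin tosses."""
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Time-average growth rate per round is negative, so typical paths decay.
growth_rate = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)  # = ln(0.9)/2 ≈ -0.053

final_wealth = [play(1000) for _ in range(100)]
below_start = sum(w < 1.0 for w in final_wealth)
print(growth_rate, below_start)  # nearly every trajectory ends below its start
```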

Encouragingly, people seem to intuitively grasp the difference between these two dynamics in empirical tests. The perspective piece describes an experiment conducted by a group of neuroscientists led by Oliver Hulme, at the Danish Research Center for Magnetic Resonance. Participants played a gambling game with real money. On one day, the game was set up to maximize their wealth under classical, additive dynamics. On a separate day, the game was set up under multiplicative dynamics.

“The crucial measure was whether participants would change their willingness to take risks between the two days,” explains the study’s lead author David Meder. “Such a change would be incompatible with classical theories, while Peters’ approach predicts exactly that.”

The results were striking: When the game’s dynamics changed, all of the subjects changed their willingness to take risks, and in doing so were able to approximate the optimal strategy for growing their individual wealth over time.

“The big news here is that we are much more adaptable than we thought we were,” Peters says. “These aspects of our behavior we thought were neurologically imprinted are actually quite flexible.”

“This theory is exciting because it offers an explanation for why particular risk-taking behaviors emerge, and how these behaviors should adapt to different circumstances. Based on this, we can derive novel predictions for what types of reward signals the brain should compute to optimize wealth over time,” says Hulme.

Peters’ distinction between averaging possibilities and tracing individual trajectories can also inform a long list of economic puzzles, from the equity premium puzzle to measuring inequality to detecting Bernie Madoff’s Ponzi scheme.

“It may sound obvious to say that what matters to one’s wealth is how it evolves over time, not how it averages over many parallel states of the same individual,” writes Andrea Taroni in a companion Editorial in Nature Physics. “Yet that is the conceptual mistake we continue to make in our economic models.”

Source: This ‘fix’ for economic theory changes everything from gambles to Ponzi schemes

TrueDialog leaks tens of millions of US SMS messages and user data

Led by Noam Rotem and Ran Locar, vpnMentor’s research team discovered a breached database belonging to the American communications company, TrueDialog.

TrueDialog provides SMS texting solutions to companies in the USA and the database in question was linked to many aspects of their business. This was a huge discovery, with a massive amount of private data exposed, including tens of millions of SMS text messages.

Aside from private text messages, our team discovered millions of account usernames and passwords, PII data of TrueDialog users and their customers, and much more.

By not securing their database properly, TrueDialog compromised the security and privacy of millions of people across the USA.

[…]

Millions of email addresses, usernames, cleartext passwords, and base64-encoded passwords (which are trivially decoded) were easily accessible within the database.
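Base64 is an encoding, not encryption, so recovering a password stored this way requires no key at all. For example (the encoded string below is an invented illustration, not a value from the leak):

```python
import base64

# Decoding base64 needs no secret; anyone holding the string can reverse it.
encoded = "aHVudGVyMg=="  # hypothetical example value, not from the database
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # hunter2
```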

[…]

We were able to find tens of millions of entries from messages sent via TrueDialog and conversations hosted on the platform. The sensitive data contained in these SMS messages included, but was not limited to:

  • Full Names of recipients, TrueDialog account holders, & TrueDialog users
  • Content of messages
  • Email addresses
  • Phone numbers of recipients and users
  • Dates and times messages were sent
  • Status indicators on messages sent, like Read receipts, replies, etc.
  • TrueDialog account details

The data exposed was a mix of TrueDialog account holders, users, and tens of millions of American citizens.

[…]

There were hundreds of thousands of entries with details about users, including full names, phone numbers, addresses, emails and more.

Source: Report: Millions of Americans at Risk After Huge Data and SMS Leak

SMS Replacement is Exposing Users to Text, Call Interception Thanks to Sloppy Telecos

A standard used by phone carriers around the world can leave users open to all sorts of attacks, like text message and call interception, spoofed phone numbers, and leaking their coarse location, new research reveals.

The Rich Communication Services (RCS) standard is essentially the replacement for SMS. The news shows how even as carriers move onto more modern protocols for communication, phone network security continues to be an exposed area with multiple avenues for attack in some implementations of RCS.

“I’m surprised that large companies, like Vodafone, introduce a technology that exposes literally hundreds of millions of people, without asking them, without telling them,” Karsten Nohl from cybersecurity firm Security Research Labs (SRLabs) told Motherboard in a phone call.

SRLabs researchers Luca Melette and Sina Yazdanmehr will present their RCS findings at the upcoming Black Hat Europe conference in December, and discussed some of their work at security conference DeepSec on Friday.

RCS is a relatively new standard for carrier messaging and includes more features than SMS, such as photos, group chats, and file transfers. Back in 2015, Google announced it would be adopting RCS to move users away from SMS, and that it had acquired a company called Jibe Mobile to help with the transition. RCS essentially runs as an app on your phone that logs into a service with a username and password, Nohl explained.

SRLabs estimated RCS is already implemented by at least 100 mobile operators, with many of the deployments being in Europe. SRLabs said that all the major U.S. carriers—AT&T, T-Mobile, Sprint, and Verizon—were using RCS.

SRLabs didn’t find an issue in the RCS standard itself, but rather in how it is being implemented by different telcos. Because some of the standard is undefined, there’s a good chance companies may deploy it in their own way and make mistakes.

“Everybody seems to get it wrong right now, but in different ways,” Nohl said. SRLabs took a sample of SIM cards from a variety of carriers, checked for RCS-related domains, and then looked into particular security issues with each. SRLabs didn’t say which issues impacted which particular telcos.

Some of those issues include how devices receive RCS configuration files. In one instance, a server provides the configuration file to the right device by identifying it by its IP address. But because every app on the phone shares that IP address, “Any app that you install on your phone, even if you give it no permissions whatsoever, it can request this file. So now every app can get your username and password to all your text messages and all your voice calls. That’s unexpected,” Nohl said.
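The design flaw is identification by source IP alone. A toy model (all names, addresses, and credentials here are hypothetical; no real RCS endpoints are involved) shows why that fails on a phone, where every app shares the device’s IP:

```python
# Toy model of a provisioning server that hands out RCS credentials
# based only on the requester's source IP address.
CONFIG_BY_IP = {
    # Hypothetical subscriber credentials provisioned for this IP.
    "10.0.0.7": {"username": "subscriber-1234", "password": "s3cret"},
}

def fetch_config(source_ip):
    """Return the RCS configuration for whoever connects from source_ip."""
    return CONFIG_BY_IP.get(source_ip)

# The legitimate messaging client and a zero-permission malicious app
# run on the same phone, so their requests arrive from the same IP:
legit = fetch_config("10.0.0.7")
malicious = fetch_config("10.0.0.7")
assert malicious == legit  # the malicious app obtains the same credentials
```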

Source: SMS Replacement is Exposing Users to Text, Call Interception Thanks to Sloppy Telecos – VICE