Google Play Publisher account gets terminated – but Google won’t tell you why

Developer Patrick Godeau has claimed his business is under threat after his Google Play Publisher account was terminated without a specific reason given.

Godeau, from France, provides apps for iOS and Android via his company Tokata.

It is a small business but Godeau said in his complaint that he has achieved “millions of downloads”, most via the Play Store.

On 31 July, Godeau received an email stating that “your Google Play Publisher account has been terminated”. He appealed and was told that “we’re unable to reinstate your developer account”. The reason given was not specific, just that it was “due to multiple violations of the Developer Program Policies”.

[…]

In July 2018, Google removed another of his applications, citing “device and network abuse”. He never discovered what the issue was. Perhaps he was using the YouTube API incorrectly? “Having read through the API terms of service, I couldn’t deduce how my app infringed them,” he said. However, he was able to publish a new version.

The new issue is not so easily resolved. First, one of his apps was suspended for what the Play team called “malicious behaviour”. Shortly afterwards, his entire account was terminated, complete with the advice “please do not attempt to register a new developer account”.

Patrick Godeau informs customers that his apps have been removed from the Play Store


The apps remain available on the Apple and Amazon app stores.

Godeau said he has no objection to Google’s efforts to remove malicious apps from the Play Store. His frustration is that he has not been told any specifics about what is wrong with his apps, and that there is no meaningful dialogue with the Play team or appeal against a decision that directly impacts his ability to make a living from software development.

“It seems that I’m not the only one in this situation,” he wrote. “Many Android developers have seen their apps removed and their accounts abruptly terminated by the Google Play bots, often for minor and unintentional reasons, or even for no known reason at all, and almost always without any opportunity to prove their good faith, receiving no other response than automatic messages.”

This kind of incident is apparently not uncommon. Another company, Guidebook, which develops apps for events, has also had its apps removed, leaving users taking to Twitter to ask where they are. Guidebook’s Twitter support says “we’re actively working with Google to rectify this.”

Bemused customers take to Twitter in search of Guidebook apps removed from the Play Store


Another common complaint is that Google does too little to remove pirated or copycat applications from the Play Store, causing potential reputational problems for developers whose customers may get an ad-laden copy instead of the real thing, or simply loss of business to the pirates.

Source: So your Google Play Publisher account has been terminated – of course you would want to know why exactly • The Register

And this is one of the problems of working with an unregulated, massive monopoly that can dictate whatever arbitrary terms it likes while people’s incomes depend on it.

They need to be broken up!

<iframe width="560" height="315" src="https://www.youtube.com/embed/RFA92mXjXLI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

States to launch antitrust investigation into big tech companies, reports say

Attorneys general in more than a dozen states are preparing to begin an antitrust investigation of the tech giants, The Wall Street Journal and The New York Times reported Monday, putting the spotlight on an industry that is already facing federal scrutiny.

The bipartisan group of attorneys from as many as 20 states is expected to formally launch a probe as soon as next month to assess whether tech companies are using their dominant market position to hurt competition, the WSJ reported.

If confirmed, the move would follow the Department of Justice, which last month announced its own antitrust review of how online platforms scaled to their gigantic sizes and whether they are using their power to curb competition and stifle innovation. Earlier this year, the Federal Trade Commission formed a task force to monitor competition among tech platforms.

[…]

Because the tentacles of Google, Facebook, Amazon and Apple reach so many industries, any investigation into them could last for years.

Apple and Google pointed the Times to their previous official statements on the matter, in which they have argued that they have been vastly innovative and have created an environment that benefits consumers. Amazon and Facebook did not comment.

Also on Monday, Joseph Simons, the chairman of the FTC, warned that Facebook’s planned effort to integrate Instagram and WhatsApp could stymie any attempt by the agency to break up the social media giant.

Source: States to launch antitrust investigation into big tech companies, reports say | TechCrunch

And if you like, here is my talk about how exactly the tech giants are becoming monopolies and killing innovation, among many other things.

<iframe width="560" height="315" src="https://www.youtube.com/embed/RFA92mXjXLI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

If for some reason you want an Apple Card, here’s how to easily opt out of binding arbitration

You’ll spot binding arbitration clauses in a lot of financial agreements because they help keep banks and their business partners out of court. If you agree to binding arbitration, you can’t go to trial against a company or join a class-action lawsuit; you can only have your issue settled by a third-party arbitrator. If you don’t like what the arbitrator decides, you still have to live with it.

Not all credit cards allow you to opt out of binding arbitration, but Apple Card does. And it makes it easy for you to opt out by allowing you to do so by text message. In fact, if you have any question about using Apple Card, you can get help via text message (instead of having to use your phone like an actual phone and wait on hold).

Nick Guy shared a screenshot on Twitter to illustrate just how easy it was to opt out of arbitration for his new Apple Card:

Take a minute now to send your opt-out request, then rest easy knowing that if you end up with major beef with Apple Card, you have access to all your options for dealing with it.

Source: How to Easily Opt Out of Apple Card Binding Arbitration

Man sued for using bogus YouTube takedowns to get address for swatting – so copyright is not only inane, it’s also physically dangerous

YouTube is suing a Nebraska man the company says has blatantly abused its copyright takedown process. The Digital Millennium Copyright Act offers online platforms like YouTube legal protections if they promptly take down content flagged by copyright holders. However, this process can be abused—and boy did defendant Christopher L. Brady abuse it, according to YouTube’s legal complaint (pdf).

Brady allegedly made fraudulent takedown notices against YouTube videos from at least three well-known Minecraft streamers. In one case, Brady made two false claims against a YouTuber and then sent the user an anonymous message demanding a payment of $150 by PayPal—or $75 in bitcoin.

“If you decide not to pay us, we will file a 3rd strike,” the message said. When a YouTube user receives a third copyright strike, the YouTuber’s account gets terminated.
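The three-strike policy the extortion scheme leveraged can be sketched as a toy model (illustrative only; the real system also has strike expiry, appeals, and counter-notices):

```python
class Channel:
    """Toy model of YouTube's three-strike termination policy."""
    MAX_STRIKES = 3

    def __init__(self, name: str):
        self.name = name
        self.strikes = 0
        self.terminated = False

    def receive_strike(self) -> None:
        if self.terminated:
            return
        self.strikes += 1
        if self.strikes >= self.MAX_STRIKES:
            self.terminated = True

ch = Channel("minecraft_streamer")
ch.receive_strike()
ch.receive_strike()
print(ch.terminated)  # False: two strikes, so the threatened third one has teeth
ch.receive_strike()
print(ch.terminated)  # True
```

Two fraudulent strikes put the account one automated step from termination, which is exactly the leverage the alleged extortion messages exploited.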

A second target was ordered to pay $300 by PayPal or $200 in Bitcoin to avoid a third fraudulent copyright strike.

A third incident was arguably even more egregious. According to YouTube, Brady filed several fraudulent copyright notices against another YouTuber with whom he was “engaged in some sort of online dispute.” The YouTuber responded with a formal counter-notice stating that the content wasn’t infringing—a move that allows the content to be reinstated. However, the law requires the person filing the counter-notice to provide his or her real-world name and address—information that’s passed along to the person who filed the takedown request.

This contact information is supposed to enable a legitimate copyright holder to file an infringement lawsuit in court. But YouTube says Brady had another idea. A few days after filing a counter-notice, the targeted YouTuber “announced via Twitter that he had been the victim of a swatting scheme.” Swatting, YouTube notes, “is the act of making a bogus call to emergency services in an attempt to bring about the dispatch of a large number of armed police officers to a particular address.”

YouTube doesn’t provide hard proof that Brady was responsible for the swatting call, stating only that it “appears” he was responsible based on the sequence of events. But YouTube says it does have compelling evidence that Brady was responsible for the fraudulent takedown notices. And fraudulent takedown notices are themselves against the law.

Section 512(f) of the DMCA says that anyone who “knowingly materially misrepresents” that content is infringing in a takedown notice is liable for costs they impose on both accused infringers and platform owners. While this law has been on the books for more than 20 years, it has rarely been used because most misrepresentations have not been blatant enough to trigger legal liability.

For example, Ars covered the decade-long fight over a “dancing baby” video that happened to have a few seconds of Prince music playing in the background. The Electronic Frontier Foundation argued that the music was clearly allowed under copyright’s fair use doctrine—and that Universal Music should be held liable for submitting a takedown request anyway. A 2016 appeals court ruling made it clear that music labels had some obligation to consider fair use before issuing takedown requests, but the court set the bar so low that the targets of bogus takedowns have little hope of actually collecting damages.

Source: Man sued for using bogus YouTube takedowns to get address for swatting | Ars Technica

Data Breach in Adult Site Luscious Compromises Privacy of All Users

Luscious is a niche pornographic image site focused primarily on animated, user-uploaded content. Based on the research carried out by our team, the site has over 1 million registered users. Each user has a profile, the details of which could be accessed through our research.

Private profiles allow users to upload, share, comment on, and discuss content on Luscious. All of this is understandably done while keeping their identity hidden behind usernames.

The data breach our team discovered compromises this anonymity by potentially allowing hackers to access the personal details of users, including their personal email address. The highly sensitive and private nature of Luscious’ content makes users incredibly vulnerable to a range of attacks and exploitation by malicious hackers.

[…]

The private personal user details we viewed included:

  • Usernames
  • Personal email addresses
  • User activity logs (date joined, most recent log in)
  • Country of residence/location
  • Gender

Some users’ email addresses indicated their full names, increasing their vulnerability to exploitation and cybercrime.

It’s worth mentioning that we estimate that 20% of Luscious accounts were signed up with fake email addresses. This suggests that some Luscious users are actively taking extra steps to remain anonymous.
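A crude heuristic for flagging fake sign-up addresses might look like the sketch below; the regex and the hard-coded disposable-domain list are assumptions for illustration, not vpnMentor’s actual method:

```python
import re

# Hypothetical sample of throwaway-mail domains; a real check would use
# a maintained dataset, not this hard-coded set.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def looks_fake(address: str) -> bool:
    """Flag addresses that are malformed or use a known disposable domain."""
    m = EMAIL_RE.match(address)
    if not m:
        return True  # not even a plausible shape, e.g. "asdf"
    return m.group(1).lower() in DISPOSABLE_DOMAINS

print(looks_fake("user@mailinator.com"))   # True
print(looks_fake("jane.doe@example.org"))  # False
```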

User Behaviours & Activities

The data breach also gave a complete overview of user activities. This allowed us to view things like:

  • The number of image albums they had created
  • Video uploads
  • Comments
  • Blog posts
  • Favorites
  • Followers and accounts followed
  • Their User ID number – so we can know if they’re active or have been banned

Source: Report: Data Breach in Adult Site Compromises Privacy of All Users

Ouch – if you were on there, good luck and change your details immediately!

Facial recognition ‘epidemic’ across UK private sites in conjunction with the police

Facial recognition is being extensively deployed on privately owned sites across the UK, according to an investigation by civil liberties group Big Brother Watch.

It found an “epidemic” of the controversial technology across major property developers, shopping centres, museums, conference centres and casinos in the UK.

The investigation uncovered live facial recognition in Sheffield’s major shopping centre Meadowhall.

Site owner British Land said: “We do not operate facial recognition at any of our assets. However, over a year ago we conducted a short trial at Meadowhall, in conjunction with the police, and all data was deleted immediately after the trial.”

The investigation also revealed that Liverpool’s World Museum scanned visitors with facial recognition surveillance during its exhibition, “China’s First Emperor and the Terracotta Warriors” in 2018.

The museum’s operator, National Museums Liverpool, said this had been done because there had been a “heightened security risk” at the time. It said it had sought “advice from Merseyside Police and local counter-terrorism advisors” and that use of the technology “was clearly communicated in signage around the venue”.

A spokesperson added: “World Museum did not receive any complaints and it is no longer in use. Any use of similar technology in the future would be in accordance with National Museums Liverpool’s standard operating procedures and with good practice guidance issued by the Information Commissioner’s Office.”

Big Brother Watch said it also found the Millennium Point conference centre in Birmingham was using facial-recognition surveillance “at the request of law enforcement”. In the privacy policy on Millennium Point’s website, it confirms it does “sometimes use facial recognition software at the request of law enforcement authorities”. It has not responded to a request for further comment.

Earlier this week it emerged the privately owned Kings Cross estate in London was using facial recognition, and Canary Wharf is considering following suit.

Information Commissioner Elizabeth Denham has since launched an investigation, saying she remains “deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector”.

The Metropolitan Police’s use of the tech was recently slammed as highly inaccurate and “unlawful”, according to an independent report by researchers from the University of Essex.

Silkie Carlo, director of Big Brother Watch, said: “There is an epidemic of facial recognition in the UK.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.

“We now know that many millions of innocent people will have had their faces scanned with this surveillance without knowing about it, whether by police or by private companies.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling. There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.”

Carlo urged Parliament to follow in the footsteps of legislators in the US and “ban this authoritarian surveillance from public spaces”.

Source: And you thought the cops were bad… Civil rights group warns of facial recog ‘epidemic’ across UK private sites • The Register

YouTube shuts down music companies’ use of manual copyright claims to steal creator revenue, troll block videos

Going forward, copyright owners will no longer be able to monetize creator videos with very short or unintentional uses of music via YouTube’s “Manual Claiming” tool. Instead, they can choose to prevent the other party from monetizing the video or they can block the content. However, YouTube expects that by removing the option to monetize these sorts of videos themselves, some copyright holders will instead just leave them alone.

“One concerning trend we’ve seen is aggressive manual claiming of very short music clips used in monetized videos. These claims can feel particularly unfair, as they transfer all revenue from the creator to the claimant, regardless of the amount of music claimed,” explained YouTube in a blog post.

To be clear, the changes only involve YouTube’s Manual Claiming tool, which is not how the majority of copyright violations are handled today. Instead, the majority of claims are created through YouTube’s Content ID match system. This system scans videos uploaded to YouTube against a database of files submitted to the site by copyright owners. Then, when a match is found, the copyright holder can choose to block the video or monetize it themselves, and track the video’s viewership stats.

The Manual Claiming tool, on the other hand, is only offered to partners who understand how Content ID works. It allows them to search through publicly available YouTube videos to look for those containing their content and apply a claim when a match is found.
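The matching idea behind a system like Content ID can be illustrated with a deliberately crude sketch: hash overlapping windows of a registered reference and look for the same hashes in uploads. Real acoustic fingerprinting is far more robust to noise and re-encoding; the `fingerprints` function and the toy integer “samples” here are assumptions, not YouTube’s algorithm:

```python
import hashlib

def fingerprints(samples: list[int], window: int = 4) -> set[str]:
    """Hash every overlapping window of samples -- a crude stand-in for
    the acoustic fingerprints a Content ID-style system would use."""
    return {
        hashlib.sha256(str(samples[i:i + window]).encode()).hexdigest()
        for i in range(len(samples) - window + 1)
    }

# The rights holder registers a reference track.
reference = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
registry = fingerprints(reference)

# An upload containing a snippet of the reference yields matching hashes.
upload = [7, 7, 3, 1, 4, 1, 5, 9, 8]
matches = fingerprints(upload) & registry
print(len(matches) > 0)  # True: windows inside the copied snippet match
```

Note that any matching window triggers a hit regardless of how short the borrowed snippet is relative to the whole video, which is precisely why second-long incidental matches were being claimed.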

The problem with the Manual Claiming policy is that it was impacting creator content even when the use of the claimed music in videos was very short — even a second long — or unintentional. For example, a creator who was vlogging may have walked past a store that was playing the copyrighted song, but then could lose the revenue from their video as a result.

In April, YouTube said it was looking to address this problem. And just ahead of this year’s VidCon, YouTube announced several well-received changes to the Manual Claiming Policy. It began to require that copyright owners specify the timestamp in the video where the claim occurs — a change that YouTube hoped would create additional friction and cut down on abuse.

Creators were also given tools of their own that let them easily remove the clip or replace the infringing content with free-to-use tracks.

These newly announced changes go even further as they remove the ability for the copyright owner to monetize the infringing video at all. Copyright holders can now only prevent the creators themselves from monetizing the video, or they can block the content. However, given the new creator tools for handling infringing content, it’s likely that creators in those situations would just address the problem content in order to keep their video online.

Source: YouTube shuts down music companies’ use of manual copyright claims to steal creator revenue | TechCrunch

This piece shows you how insane the copyright system is (walk past a shop playing some music and that can be claimed as an infringement!) and how the large music mafia can muscle out small players – merely calling something an infringement triggers a Kafkaesque system in which appeals are difficult. It’s a good thing that this muscling is no longer so easy or so automated.

UPS Has Been Delivering Cargo in Self-Driving Trucks for Months (with 2 people on board)

The self-driving freight truck startup TuSimple has been carrying mail across the state of Arizona for several weeks.

UPS announced on Thursday that its venture capital arm has made a minority investment in TuSimple. The announcement also revealed that since May TuSimple autonomous trucks have been hauling UPS loads on a 115-mile route between Phoenix and Tucson.

UPS confirmed to Gizmodo this is the first time UPS has announced it has been using TuSimple autonomous trucks to deliver packages in the state.

Around the same time the UPS and TuSimple program began, the United States Postal Service and TuSimple publicized a two-week pilot program to deliver mail between Phoenix and Dallas, a 1,000-mile trip.

TuSimple claims it can cut the average cost of shipping in a tractor-trailer by 30 percent. In an announcement about the new partnership, UPS Ventures managing partner, Todd Lewis, said the venture arm “collaborates with startups to explore new technologies and tailor them to help meet our specific needs.”

UPS would not share the terms of the deal with Gizmodo. TuSimple did not immediately respond to a request for comment.

As the Verge reports, TuSimple puts its own autonomous tech—which relies on nine cameras and two LIDAR sensors—in Navistar vehicles.

The partnership announcement states that TuSimple has been helping UPS understand how to get to Level 4 autonomous driving where a vehicle is fully autonomous and able to reach a particular location. At this point, the TuSimple trucks carrying packages for UPS still have an engineer and a safety driver riding along. When UPS reaches Level 4, it won’t need anyone behind the wheel.

Source: UPS Has Been Delivering Cargo in Self-Driving Trucks for Months And No One Knew

Researchers build a heat shield just 10 atoms thick to protect electronic devices

Excess heat given off by smartphones, laptops and other electronic devices can be annoying, but beyond that it contributes to malfunctions and, in extreme cases, can even cause lithium batteries to explode.

To guard against such ills, engineers often insert glass, plastic or even layers of air as insulation to prevent heat-generating components like microprocessors from causing damage or discomforting users.

Now, Stanford researchers have shown that a few layers of atomically thin materials, stacked like sheets of paper atop hot spots, can provide the same insulation as a sheet of glass 100 times thicker. In the near term, thinner heat shields will enable engineers to make electronic devices even more compact than those we have today, said Eric Pop, professor of electrical engineering and senior author of a paper published Aug. 16 in Science Advances.
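A back-of-envelope way to see how a few atomic layers can rival much thicker glass is to treat heat flow like series resistors: in a stacked heterostructure, most of the impedance comes from the boundary resistance at each interface between dissimilar layers, not from the layers’ bulk thickness. The conductivity and boundary-resistance numbers below are assumptions for illustration, not values from the paper:

```python
# Series-thermal-resistance sketch. All numbers are illustrative assumptions.
GLASS_K = 1.0          # W/(m*K), typical thermal conductivity of glass
R_INTERFACE = 2.5e-8   # m^2*K/W, assumed boundary resistance per interface

def stack_resistance(n_interfaces: int) -> float:
    # The bulk resistance of atom-thin layers is negligible next to the
    # interfaces, so this sketch counts interfaces only.
    return n_interfaces * R_INTERFACE

r_stack = stack_resistance(4)            # e.g. a 4-interface heterostructure
t_equivalent_glass = r_stack * GLASS_K   # glass thickness with equal resistance
t_stack = 2e-9                           # ~10 atoms, roughly 2 nm
print(t_equivalent_glass / t_stack)      # 50.0 with these assumed numbers
```

With these made-up but plausible values, a ~2 nm stack insulates like glass tens of times thicker, which is the same order of effect the researchers report.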

[…]

To make nanoscale heat shields practical, the researchers will have to find some mass production technique to spray or otherwise deposit atom-thin layers of materials onto electronic components during manufacturing. But behind the immediate goal of developing thinner insulators looms a larger ambition: Scientists hope to one day control the vibrational energy inside materials the way they now control electricity and light. As they come to understand the heat in solid objects as a form of sound, a new field of phononics is emerging, a name taken from the Greek root word behind telephone, phonograph and phonetics.

“As engineers, we know quite a lot about how to control electricity, and we’re getting better with light, but we’re just starting to understand how to manipulate the high-frequency sound that manifests itself as heat at the atomic scale,” Pop said.

Source: Researchers build a heat shield just 10 atoms thick to protect electronic devices

Google’s AI can be manipulated into “accidentally” deactivating targeted user accounts

Jordan B. Peterson had his gmail account deactivated and I had the opportunity to inspect the bug report as a full-time employee. What I found was that Google had a technical vulnerability that, when exploited, would take any gmail account down. Certain unknown 3rd party actors are aware of this secret vulnerability and exploit it. This is how it worked: Take a target email address, change exactly one letter in that email address, and then create a new account with that changed email address. Malicious actors repeated this process over and over again until a network of spoof accounts for Jordan B. Peterson existed. Then these spoof accounts started generating spam emails. These email-spam blasts caught the attention of an AI system which fixed the problem by deactivating the spam accounts… and then ALSO the original account belonging to Jordan B. Peterson!
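The spoofing step described above is trivial to automate, which is what makes the alleged exploit so cheap to run. A minimal sketch of generating the one-letter variants (the address shown is a placeholder, and the real attack would also need to actually register each variant):

```python
import string

def one_letter_variants(address: str) -> list[str]:
    """Generate addresses differing from `address` by exactly one
    character in the local part (the text before the @)."""
    local, _, domain = address.partition("@")
    variants = []
    for i, original in enumerate(local):
        for c in string.ascii_lowercase:
            if c != original:
                variants.append(f"{local[:i]}{c}{local[i+1:]}@{domain}")
    return variants

# A 7-letter local part yields 7 * 25 = 175 near-identical addresses,
# each a candidate spoof account for the spam blasts described above.
spoofs = one_letter_variants("example@gmail.com")
print(len(spoofs))  # 175
```

Once such a network of look-alike accounts starts spamming, an automated abuse system keyed on address similarity could plausibly sweep up the genuine account along with the spoofs, as the letter alleges.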

Source: Open Letter: Dear Attorney Representing Tulsi Gabbard, this is how Google is “accidentally” deactivating user accounts | Minds

Google “open sources” Live Transcribe – except not really: it only gives away Android coding examples for connecting to Google’s cloud speech products

Live Transcribe is an Android application that provides real-time captioning for people who are deaf or hard of hearing. This repository contains the Android client libraries for communicating with Google’s Cloud Speech API that are used in Live Transcribe.

[…]
The libraries provided are nearly identical to those running in the production application Live Transcribe. They have been extensively field tested and unit tested. However, the tests themselves are not open sourced at this time.

Github: live-transcribe-speech-engine

This is part of the problem with big companies playing at open source – Google isn’t giving away anything useful or of any value, it’s just showing you how to connect to a product you will have to pay for. Yet Google is playing this one up and pretending that it’s releasing something worthwhile. It’s a scam.

OMG Cable | Hackaday

The O.MG cable (or Offensive MG kit) from [MG] hides a backdoor inside the shell of a USB connector. Plug this cable into your computer and you’ll be the victim of remote attacks over WiFi.

You might be asking what’s inside this tiny USB cable to make it susceptible to such attacks. That’s the trick: inside the shell of the USB ‘A’ connector is a PCB loaded up with a WiFi microcontroller — the documentation doesn’t say which one — that will send payloads over the USB device. Think of it as a BadUSB device, like the USB Rubber Ducky from Hak5, but one that you can remote control. It is the ultimate way into a system, and all anyone has to do is plug a random USB cable into their computer.

In the years since BadUSB — an exploit hidden in a device’s USB controller itself — was released upon the world, [MG] has been tirelessly working on making his own malicious USB device, and now it’s finally ready. The O.MG cable hides a backdoor inside the shell of a standard, off-the-shelf USB cable.

The construction of this device is quite impressive, in that it fits entirely inside a USB plug. But this isn’t just a PCB from a random Chinese board house: [MG] spent 300 hours and $4,000 over the last month putting this project together with a Bantam mill, creating his own PCBs, silkscreen and all. That’s impressive no matter how you cut it.

Source: OMG Cable | Hackaday

The maker’s write-up: http://mg.lol/blog/omg-cable/

The cable has had a soft launch at USD 200.

Google neural net can spot breast, prostate tumors through microscope

Google Health’s so-called augmented-reality microscope has proven surprisingly accurate at detecting and diagnosing cancerous tumors in real time.

The device is essentially a standard microscope decked out with two extra components: a camera, and a computer running AI software with an Nvidia Titan Xp GPU to accelerate the number crunching. The camera continuously snaps images of body tissue placed under the microscope, and passes these images to a convolutional neural network on the computer to analyze. In return, the neural net spits out, allegedly in real time, a heatmap of the cells in the image, labeling benign and abnormal areas on the screen for doctors to inspect.

Google’s eggheads tried using the device to detect the presence of cancer in samples of breast and prostate cells. The algorithms had a performance score of 0.92 when detecting cancerous lymph nodes in breast cancer and 0.93 for prostate cancer, with one being a perfect score, so it’s not too bad for what they describe as a proof of concept.

Details of the microscope system have been described in a paper published in Nature this week. The training data for breast cancer was taken from here, and here for prostate cancer. Some of the training data was reserved for inference testing.

The device is a pretty challenging system to build: it requires a processing pipeline that can handle, on the fly, microscope snaps that are high resolution enough to capture details at the cellular level. The images used in this experiment measure 5,120 × 5,120 pixels. That’s much larger than what’s typically used for today’s deep learning algorithms, which have millions of parameters and require billions of floating-point operations just to process images as big as 300 pixels by 300 pixels.
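One common way to square slide images this large with patch-sized networks is to tile the image, score each tile, and reassemble the scores into a heatmap. A minimal sketch of that idea, with a stand-in scoring function in place of a real CNN (the tiling scheme and patch size are assumptions, not Google’s published pipeline):

```python
import numpy as np

def heatmap(slide: np.ndarray, patch: int, score_fn) -> np.ndarray:
    """Tile a large slide image into patches, score each one, and
    reassemble the scores into a low-resolution heatmap."""
    rows, cols = slide.shape[0] // patch, slide.shape[1] // patch
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = slide[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
            out[r, c] = score_fn(tile)
    return out

# Stand-in "classifier": mean intensity instead of a real CNN's tumor score.
fake_cnn = lambda tile: float(tile.mean())

slide = np.zeros((5120, 5120), dtype=np.float32)
slide[:320, :320] = 1.0                 # a bright "abnormal" region
hm = heatmap(slide, patch=320, score_fn=fake_cnn)
print(hm.shape)  # (16, 16): a 5,120-pixel slide reduced to a 16x16 score grid
```

Each tile is small enough for a per-patch classifier, and the assembled grid is what gets overlaid on the eyepiece view as the benign/abnormal heatmap.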

Source: It’s official – Google AI gives you cancer …diagnosis in real time: Neural net can spot breast, prostate tumors • The Register

Scientists Say They’ve Found a New Organ in Skin That Processes Pain

Typically, it’s thought that we perceive harmful sensations on our skin entirely through the very sensitive endings of certain nerve cells. These nerve cells aren’t coated by a protective layer of myelin, as other types are. Nerve cells are kept alive by and connected to other cells called glia; outside of the central nervous system, one of the two major types of glia are called Schwann cells.

An illustration of nociceptive Schwann cells
Illustration: Abdo, et al (Science)

The authors of the new study, published Thursday in Science, say they were studying these helper cells near the skin’s surface in the lab when they came across something strange—some of the Schwann cells seemed to form an extensive “mesh-like network” with their nerve cells, differently than how they interact with nerve cells elsewhere. When they ran further experiments with mice, they found evidence that these Schwann cells play a direct, added role in pain perception, or nociception.

One experiment, for instance, involved breeding mice with these cells in their paws that could be activated when the mice were exposed to light. Once the light came on, the mice seemed to behave like they were in pain, such as by licking themselves or guarding their paws. Later experiments found that these cells—since dubbed nociceptive Schwann cells by the team—respond to mechanical pain, like being pricked or hit by something, but not to cold or heat.

Because these cells are spread throughout the skin as an intricately connected system, the authors argue that the system should be considered an organ.

“Our study shows that sensitivity to pain does not occur only in the skin’s nerve [fibers], but also in this recently discovered pain-sensitive organ,” said senior study author Patrik Ernfors, a pain researcher at Sweden’s Karolinska Institute, in a release from the university.

Source: Scientists Say They’ve Found a New Organ in Skin That Processes Pain

Cut off your fingers: Data Breach in Biometric Security Platform Affects Millions of Users Across Dozens of Countries – yes unencrypted and yes, editable

Led by internet privacy researchers Noam Rotem and Ran Locar, vpnMentor’s team recently discovered a huge data breach in security platform BioStar 2.  

BioStar 2 is a web-based biometric security smart lock platform. A centralized application, it allows admins to control access to secure areas of facilities, manage user permissions, integrate with 3rd party security apps, and record activity logs.

As part of the biometric software, BioStar 2 uses facial recognition and fingerprinting technology to identify users.

The app is built by Suprema, one of the world’s top 50 security manufacturers, with the highest market share in biometric access control in the EMEA region. Suprema recently partnered with Nedap to integrate BioStar 2 into their AEOS access control system.

AEOS is used by over 5,700 organizations in 83 countries, including some of the biggest multinational businesses, many small local businesses, governments, banks, and even the UK Metropolitan Police.

The data leaked in the breach is of a highly sensitive nature. It includes detailed personal information of employees and unencrypted usernames and passwords, giving hackers access to user accounts and permissions at facilities using BioStar 2. Malicious agents could use this to hack into secure facilities and manipulate their security protocols for criminal activities. 

This is a huge leak that endangers both the businesses and organizations involved, as well as their employees. Our team was able to access over 1 million fingerprint records, as well as facial recognition information. Combined with the personal details, usernames, and passwords, the potential for criminal activity and fraud is massive. 

Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their life.

[…]

Our team was able to access over 27.8 million records, a total of 23 gigabytes of data, which included the following information:

  • Access to client admin panels, dashboards, back end controls, and permissions
  • Fingerprint data
  • Facial recognition information and images of users
  • Unencrypted usernames, passwords, and user IDs
  • Records of entry and exit to secure areas
  • Employee records including start dates
  • Employee security levels and clearances
  • Personal details, including employee home address and emails
  • Businesses’ employee structures and hierarchies
  • Mobile device and OS information
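The most avoidable item on that list is the unencrypted passwords. For contrast, a minimal sketch of how credentials should be stored: salt each password and run it through a slow key-derivation function, so a leaked database doesn't hand out working logins. This is a generic illustration using Python's standard library, not anything from BioStar 2:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only (salt, digest) is stored, never the password."""
    salt = os.urandom(16)  # unique per user, so identical passwords differ on disk
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

With storage like this, the vpnMentor researchers would have found 23 GB of useless digests instead of ready-to-use logins.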

[…]

With this leak, criminal hackers have complete access to admin accounts on BioStar 2. They can use this to take over a high-level account with complete user permissions and security clearances, and make changes to the security settings in an entire network. 

Not only can they change user permissions and lock people out of certain areas, but they can also create new user accounts – complete with facial recognition and fingerprints – to give themselves access to secure areas within a building or facility.

Furthermore, hackers can change the fingerprints of existing accounts to their own and hijack a user account to access restricted areas undetected. Hackers and other criminals could potentially create libraries of fingerprints to be used any time they want to enter somewhere without being detected.

This provides a hacker and their team open access to all restricted areas protected with BioStar 2. They also have access to activity logs, so they can delete or alter the data to hide their activities.

As a result, a hacked building’s entire security infrastructure becomes useless. Anybody with this data will have free movement to go anywhere they choose, undetected.

Source: Report: Data Breach in Biometric Security Platform Affecting Millions of Users

And that’s why biometrics are a poor choice for identification – you can’t change your fingerprints, but you can edit the records. Using this data it should be fairly easy to print out fingerprints, if you can’t be bothered to edit the database either.

Also Facebook Admits Yes, It Was Listening To Your Private Conversations via Messenger

“Much like Apple and Google, we paused human review of audio more than a week ago,” Facebook told Bloomberg on Tuesday.

The social media giant said that users could choose the option to have their voice chats on Facebook’s Messenger app transcribed. The contractors were testing artificial intelligence technology to make sure the messages were properly transcribed from voice to text.

Facebook has previously said that it reads your messages on its Messenger app. Last year, Facebook CEO Mark Zuckerberg said that when “sensational messages” are found, “We stop those messages from going through.”

Zuckerberg also told Bloomberg last year that while conversations in the Messenger app are considered private, Facebook “scans them and uses the same tools to prevent abuse there that it does on the social network more generally.”

Source: Facebook Admits It Was Also Listening To Your Private Conversations | Digital Trends


Amazon, Google, Apple, Facebook – the five riders of the apocalypse are almost complete!

Ring Promised Swag to Users Who Narc on Their Neighbors

On top of turning their doorbell video feeds into a police surveillance network, Amazon’s home security subsidiary, Ring, also once tried to entice people with swag bags to snitch on their neighbors, Motherboard reported Friday.

The instructions are purportedly all laid out in a 2017 company presentation the publication obtained. Entitled “Digital Neighborhood Watch,” the slideshow apparently promised promo codes for Ring merch and other unspecified “swag” for those who formed watch groups, reported suspicious activity to the police, and raved about the device on social media. What qualifies as suspicious activity, you ask? According to the presentation, “strange vans and cars,” “people posing as utility workers,” and other dastardly deeds such as strolling down the street or peeping in car windows.

The slideshow goes on to outline monthly milestones for the group such as “Convert 10 new users” or “Solve a crime.” Meeting these goals would net the informant tiered Ring perks, as if directing police scrutiny was a rewards program and not an act that can threaten people’s lives, particularly people of color.

These teams would have a “Neighborhood Manager,” a.k.a. a Ring employee, to help talk them through how to share their Ring footage with local officers. The presentation stated that if one of these groups of amateur sleuths succeeded in helping police solve a crime, each member would receive $50 off their next Ring purchase.

When asked about the presentation, a Ring spokesperson told Motherboard the program debuted before Amazon bought the company for a cool $1 billion last year. According to Motherboard, they also said it didn’t run for long:

“This particular idea was not rolled out widely and was discontinued in 2017. We will continue to invent, iterate, and innovate on behalf of our neighbors while aligning with our three pillars of customer privacy, security, and user control. Some of these ideas become official programs, and many others never make it past the testing phase.”

While Ring did eventually launch a neighborhood watch app, it doesn’t offer the same incentives this 2017 program promised, so choosing to narc on your neighbor won’t win you any $50 off coupons.

Ring has been the subject of mounting privacy concerns after reports from earlier this year revealed the company may have accidentally let its employees snoop on customers, among other complaints. Earlier this week, the company also stated that it has partnerships with “over 225 law enforcement agencies,” in part to help cops figure out how to get their hands on users’ surveillance footage.

Source: Ring Promised Swag to Users Who Narc on Their Neighbors

This is just evil

Researchers accurately measure blood pressure using phone camera

A study led by University of Toronto researchers, published today in the American Heart Association journal Circulation: Cardiovascular Imaging, found that blood pressure can be measured accurately by taking a quick video selfie.

Kang Lee, a professor of applied psychology and human development at the Ontario Institute for Studies in Education and Canada Research Chair in developmental neuroscience, was the lead author of the study, working alongside researchers from the Faculty of Medicine’s department of physiology, and from Hangzhou Normal University and Zhejiang Normal University in China.

Using a technology co-discovered by Lee and his postdoctoral researcher Paul Zheng called transdermal optical imaging, researchers measured the blood pressure of 1,328 Canadian and Chinese adults by capturing two-minute videos of their faces on an iPhone. Results were compared to standard devices used to measure blood pressure.

The researchers found they were able to measure three types of blood pressure with 95 to 96 per cent accuracy.

[…]

Transdermal optical imaging works by capitalizing on the translucent nature of facial skin. When the light reaches the face, it penetrates the skin and reaches hemoglobin underneath it, which is red. This technology uses the optical sensor on a smartphone to capture the reflected red light from hemoglobin, which allows the technology to visualize and measure blood flow changes under the skin.
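The paper’s actual pipeline isn’t detailed here, but the underlying idea – remote photoplethysmography – can be sketched in a few lines: average the red channel of each video frame and look for the dominant frequency in the resulting signal. A toy illustration; the function name and the synthetic test data are mine, not the researchers’:

```python
import numpy as np

def estimate_heart_rate(frames: np.ndarray, fps: float) -> float:
    """Toy remote-photoplethysmography sketch.

    frames: (n_frames, height, width, 3) RGB video of a skin region.
    Returns the dominant pulse frequency in beats per minute.
    """
    # Mean red-channel intensity per frame: blood volume changes under the
    # skin modulate the reflected red light by a small amount.
    signal = frames[..., 0].mean(axis=(1, 2))
    signal = signal - signal.mean()               # drop the constant skin tone
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic check: a 1.2 Hz (72 bpm) pulse riding on a constant skin tone.
fps, seconds = 30.0, 10
t = np.arange(int(fps * seconds)) / fps
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[..., 0] += (1.2 * np.sin(2 * np.pi * 1.2 * t))[:, None, None]
print(round(estimate_heart_rate(frames, fps)))    # 72
```

The real system obviously does far more (face tracking, per-region signals, trained models for pressure rather than rate), but the “ebb and flow of blood in the face” Lee describes is this signal.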

“From the video captured by the technology, you can see how the blood flows in different parts of the face and through this ebb and flow of blood in the face, you can get a lot of information,” says Lee.

He understood that the transdermal optical imaging technology had significant practical implications, so, with the help of U of T and MaRS, he formed a startup company called Nuralogix alongside entrepreneur Marzio Pozzuoli, who is now the CEO.

[…]

Nuralogix has developed a smartphone app called Anura that allows people to try out the transdermal optical imaging software for themselves. In the publicly available version of the app, people can record a 30-second video of their face and will receive measurements for stress levels and resting heart rate. In the fall, the company will release a version of the app in China that includes blood pressure measurements.

Lee says there is more research to be done to ensure that health measurements using transdermal optical imaging are as accurate as possible. In the recent study, for example, only people with regular or slightly higher blood pressure were measured. The study sample also did not have people with very dark or very fair skin. More diverse research subjects will make measurements more accurate, says Lee, but there are challenges when looking for people with very high and low blood pressure.

“In order to improve our app to make it usable, particularly for people with hypertension, we need to collect a lot of data from them, which is very, very hard because a lot of them are already taking medicine,” says Lee. “Ethically, we cannot tell them not to take medicine, but from time to time, we get participants who do not take medicine so we can get hypertensive and hypotensive people this way.”

While there are a wide range of applications for transdermal optical imaging technology, Lee says data privacy is of utmost concern. He says when a person uses the software by recording a video of their face, only the results are uploaded to the cloud but the video is not.

“We only extract blood flow information from your face and send that to the cloud. So from the cloud, if I look at your blood flow, I couldn’t tell it is you,” he says.

[…]

The research team also hopes to expand the capabilities of the technology to measure other health markers, including blood-glucose levels, hemoglobin and cholesterol.

Nuralogix plans on monetizing the technology by making an app that allows consumers to pay a low monthly fee to access more detailed health data. The company is also licensing the technology through a product called DeepAffex, a cloud-based AI engine for businesses interested in transdermal optical imaging, across a range of industries from health care to security.

Source: Preventative health at your fingertips: U of T researchers accurately measure blood pressure using phone camera

Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data

In a presentation at the Black Hat security conference in Las Vegas James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.

[…]

For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.

In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.

Over the space of two months Pavur sent out 150 GDPR requests in his fiancée’s name, asking for any and all data on her. In all, 72 per cent of companies replied, and 83 companies said that they had information on her.

Interestingly, five per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They may be in for a rude shock if they have a meaningful presence in the EU and come before the courts.

Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.

A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.

The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.

A threat intelligence company – not Have I Been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid a repeat of this.

“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”

Fixing this issue is going to take action from both legislators and companies, Pavur said.

First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.

Source: Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data • The Register

Researchers Bypass Apple FaceID Using glasses to fool liveness detection

Researchers at Black Hat USA 2019 on Wednesday demonstrated an attack that allowed them to bypass a victim’s FaceID and log into their phone simply by putting a pair of modified glasses on the victim’s face. By carefully placing tape over the lenses of a pair of glasses and putting them on the victim’s face, the researchers showed how they could bypass Apple’s FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

To launch the attack, researchers with Tencent tapped into a feature behind biometrics called “liveness” detection, which is part of the biometric authentication process that sifts through “real” versus “fake” features on people.

[…]

Researchers specifically homed in on how liveness detection scans a user’s eyes. They discovered that the abstraction of the eye for liveness detection renders a black area (the eye) with a white point on it (the iris). And, they discovered that if a user is wearing glasses, the way that liveness detection scans the eyes changes.

“After our research we found weak points in FaceID… it allows users to unlock while wearing glasses… if you are wearing glasses, it won’t extract 3D information from the eye area when it recognizes the glasses.”

Putting these two factors together, the researchers created a prototype pair of glasses – dubbed “X-glasses” – with black tape on the lenses and white tape inside the black tape. Using this trick, they were able to unlock a victim’s phone and transfer his money through a mobile payment app, by placing the taped glasses on the sleeping victim’s face to bypass the attention detection mechanism of FaceID and similar technologies.

The attack comes with obvious drawbacks – the victim must be unconscious, for one, and can’t wake up when the glasses are placed on their face.

Source: Researchers Bypass Apple FaceID Using Biometrics ‘Achilles Heel’ | Threatpost

Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:

  • Target “may share your personal information with other companies which are not part of Target.”
  • Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
  • Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”

This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!

Source: Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Mysterious, Ancient Radio Signals Keep Pelting Earth. Astronomers Designed an AI to Hunt Them Down.

Sudden shrieks of radio waves from deep space keep slamming into radio telescopes on Earth, spattering those instruments’ detectors with confusing data. And now, astronomers are using artificial intelligence to pinpoint the source of the shrieks, in the hope of explaining what’s sending them to Earth from — researchers suspect — billions of light-years across space.

Usually, these weird, unexplained signals are detected only after the fact, when astronomers notice out-of-place spikes in their data — sometimes years after the incident. The signals have complex, mysterious structures, patterns of peaks and valleys in radio waves that play out in just milliseconds. That’s not the sort of signal astronomers expect to come from a simple explosion, or any other one of the standard events known to scatter spikes of electromagnetic energy across space. Astronomers call these strange signals fast radio bursts (FRBs). Ever since the first one was uncovered in 2007, using data recorded in 2001, there’s been an ongoing effort to pin down their source. But FRBs arrive at random times and places, and existing human technology and observation methods aren’t well-primed to spot these signals.

Now, in a paper published July 4 in the journal Monthly Notices of the Royal Astronomical Society, a team of astronomers wrote that they managed to detect five FRBs in real time using a single radio telescope. [The 12 Strangest Objects in the Universe]

Wael Farah, a doctoral student at Swinburne University of Technology in Melbourne, Australia, developed a machine-learning system that recognized the signatures of FRBs as they arrived at the University of Sydney’s Molonglo Radio Observatory, near Canberra. As Live Science has previously reported, many scientific instruments, including radio telescopes, produce more data per second than they can reasonably store. So they don’t record anything in the finest detail except their most interesting observations.

Farah’s system trained the Molonglo telescope to spot FRBs and switch over to its most detailed recording mode, producing the finest records of FRBs yet.
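Farah’s system uses a trained machine-learning classifier, but the trigger pattern itself – keep only a rolling buffer of recent coarse data, and persist the detailed window the moment something anomalous arrives – is simple enough to sketch. A toy illustration, with a crude z-score threshold standing in for the real classifier and deterministic pseudo-noise standing in for telescope data:

```python
from collections import deque

class TriggeredRecorder:
    """Roll a fixed window of recent samples; dump it when a burst is detected."""

    def __init__(self, window: int = 32, threshold: float = 5.0):
        self.buffer = deque(maxlen=window)   # rolling high-resolution history
        self.threshold = threshold
        self.captures = []                   # detailed windows saved at detections

    def ingest(self, sample: float) -> bool:
        self.buffer.append(sample)
        if len(self.buffer) < self.buffer.maxlen:
            return False                     # still warming up
        mean = sum(self.buffer) / len(self.buffer)
        var = sum((x - mean) ** 2 for x in self.buffer) / len(self.buffer)
        std = var ** 0.5 or 1.0
        # Trigger when the newest sample stands far above the recent baseline.
        if (sample - mean) / std > self.threshold:
            self.captures.append(list(self.buffer))  # persist the detailed window
            return True
        return False

recorder = TriggeredRecorder(window=32, threshold=5.0)
stream = [0.1 * ((i * 37) % 19 - 9) for i in range(200)]  # deterministic noise
stream[150] += 50.0                                       # injected "burst"
hits = [i for i, s in enumerate(stream) if recorder.ingest(s)]
print(hits)  # [150]
```

Everything before the trigger fires can be discarded, which is exactly why such instruments “don’t record anything in the finest detail except their most interesting observations.”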

Based on their data, the researchers predicted that between 59 and 157 theoretically detectable FRBs splash across our skies every day. The scientists also used the immediate detections to hunt for related flares in data from X-ray, optical and other radio telescopes — in hopes of finding some visible event linked to the FRBs — but had no luck.

Their research showed, however, that one of the most peculiar (and frustrating, for research purposes) traits of FRBs appears to be real: The signals, once arriving, never repeat themselves. Each one appears to be a singular event in space that will never happen again.

Source: Mysterious, Ancient Radio Signals Keep Pelting Earth. Astronomers Designed an AI to Hunt Them Down. | Live Science

Apple Is Locking iPhone Batteries to Discourage Repair, showing ominous errors if you replace your battery

By activating a dormant software lock on their newest iPhones, Apple is effectively announcing a drastic new policy: only Apple batteries can go in iPhones, and only they can install them.

If you replace the battery in the newest iPhones, a message indicating you need to service your battery appears in Settings > Battery, next to Battery Health. The “Service” message is normally an indication that the battery is degraded and needs to be replaced. The message still shows up when you put in a brand new battery, however. Here’s the bigger problem: our lab tests confirmed that even when you swap in a genuine Apple battery, the phone will still display the “Service” message.

It’s not a bug; it’s a feature Apple wants. Unless an Apple Genius or an Apple Authorized Service Provider authenticates a battery to the phone, that phone will never show its battery health and always report a vague, ominous problem.

Source: Apple Is Locking iPhone Batteries to Discourage Repair – iFixit