YouTube shuts down music companies’ use of manual copyright claims to steal creator revenue and troll-block videos

Going forward, copyright owners will no longer be able to monetize creator videos with very short or unintentional uses of music via YouTube’s “Manual Claiming” tool. Instead, they can choose to prevent the other party from monetizing the video or they can block the content. However, YouTube expects that by removing the option to monetize these sorts of videos themselves, some copyright holders will instead just leave them alone.

“One concerning trend we’ve seen is aggressive manual claiming of very short music clips used in monetized videos. These claims can feel particularly unfair, as they transfer all revenue from the creator to the claimant, regardless of the amount of music claimed,” explained YouTube in a blog post.

To be clear, the changes only involve YouTube’s Manual Claiming tool, which is not how the majority of copyright violations are handled today. Instead, the majority of claims are created through YouTube’s Content ID match system. This system scans videos uploaded to YouTube against a database of files submitted to the site by copyright owners. Then, when a match is found, the copyright holder can choose to block the video or monetize it themselves, and track the video’s viewership stats.

The Manual Claiming tool, on the other hand, is only offered to partners who understand how Content ID works. It allows them to search through publicly available YouTube videos to look for those containing their content and apply a claim when a match is found.

The problem with the Manual Claiming policy is that it was impacting creator content even when the use of the claimed music was very short — even a second long — or unintentional. For example, a creator who was vlogging may have walked past a store playing a copyrighted song, and could lose the revenue from their video as a result.

In April, YouTube said it was looking to address this problem. And just ahead of this year’s VidCon, YouTube announced several well-received changes to the Manual Claiming Policy. It began to require that copyright owners specify the timestamp in the video where the claim occurs — a change that YouTube hoped would create additional friction and cut down on abuse.

Creators were also given tools of their own that let them easily remove the clip or replace the infringing content with free-to-use tracks.

These newly announced changes go even further as they remove the ability for the copyright owner to monetize the infringing video at all. Copyright holders can now only prevent the creators themselves from monetizing the video, or they can block the content. However, given the new creator tools for handling infringing content, it’s likely that creators in those situations would just address the problem content in order to keep their video online.

Source: YouTube shuts down music companies’ use of manual copyright claims to steal creator revenue | TechCrunch

This piece shows you how insane the copyright system is (if you walk past a shop playing some music you can consider it an infringement!) and how the large music mafia can muscle out small players – just calling something an infringement leads to a Kafkaesque system where you can’t appeal easily. It’s a good thing that this muscling is no longer so easy to do, or so automated.

UPS Has Been Delivering Cargo in Self-Driving Trucks for Months (with 2 people on board)

The self-driving freight truck startup TuSimple has been carrying mail across the state of Arizona for several weeks.

UPS announced on Thursday that its venture capital arm has made a minority investment in TuSimple. The announcement also revealed that since May TuSimple autonomous trucks have been hauling UPS loads on a 115-mile route between Phoenix and Tucson.

UPS confirmed to Gizmodo this is the first time UPS has announced it has been using TuSimple autonomous trucks to deliver packages in the state.

Around the same time as the UPS and TuSimple program began, the United States Postal Service and TuSimple publicized a two-week pilot program to deliver mail between Phoenix and Dallas, a 1,000-mile trip.

TuSimple claims it can cut the average cost of shipping in a tractor-trailer by 30 percent. In an announcement about the new partnership, UPS Ventures managing partner, Todd Lewis, said the venture arm “collaborates with startups to explore new technologies and tailor them to help meet our specific needs.”

UPS would not share the terms of the deal with Gizmodo. TuSimple did not immediately respond to a request for comment.

As The Verge reports, TuSimple puts its own autonomous tech—which relies on nine cameras and two LIDAR sensors—in Navistar vehicles.

The partnership announcement states that TuSimple has been helping UPS understand how to get to Level 4 autonomous driving where a vehicle is fully autonomous and able to reach a particular location. At this point, the TuSimple trucks carrying packages for UPS still have an engineer and a safety driver riding along. When UPS reaches Level 4, it won’t need anyone behind the wheel.

Source: UPS Has Been Delivering Cargo in Self-Driving Trucks for Months And No One Knew

Researchers build a heat shield just 10 atoms thick to protect electronic devices

Excess heat given off by smartphones, laptops and other electronic devices can be annoying, but beyond that it contributes to malfunctions and, in extreme cases, can even cause lithium batteries to explode.

To guard against such ills, engineers often insert glass, plastic or even layers of air as insulation to prevent heat-generating components like microprocessors from causing damage or discomforting users.

Now, Stanford researchers have shown that a few layers of atomically thin materials, stacked like sheets of paper atop hot spots, can provide the same insulation as a sheet of glass 100 times thicker. In the near term, thinner heat shields will enable engineers to make electronic devices even more compact than those we have today, said Eric Pop, professor of electrical engineering and senior author of a paper published Aug. 16 in Science Advances.

[…]

To make nanoscale heat shields practical, the researchers will have to find some mass production technique to spray or otherwise deposit atom-thin layers of materials onto electronic components during manufacturing. But behind the immediate goal of developing thinner insulators looms a larger ambition: Scientists hope to one day control the vibrational energy inside materials the way they now control electricity and light. As they come to understand the heat in solid objects as a form of sound, a new field of phononics is emerging, a name taken from the Greek root word behind telephone, phonograph and phonetics.

“As engineers, we know quite a lot about how to control electricity, and we’re getting better with light, but we’re just starting to understand how to manipulate the high-frequency sound that manifests itself as heat at the atomic scale,” Pop said.

Source: Researchers build a heat shield just 10 atoms thick to protect electronic devices

Google’s AI can be manipulated into “accidentally” deactivating targeted user accounts

Jordan B. Peterson had his Gmail account deactivated and I had the opportunity to inspect the bug report as a full-time employee. What I found was that Google had a technical vulnerability that, when exploited, would take any Gmail account down. Certain unknown 3rd party actors are aware of this secret vulnerability and exploit it. This is how it worked: Take a target email address, change exactly one letter in that email address, and then create a new account with that changed email address. Malicious actors repeated this process over and over again until a network of spoof accounts for Jordan B. Peterson existed. Then these spoof accounts started generating spam emails. These email-spam blasts caught the attention of an AI system which fixed the problem by deactivating the spam accounts… and then ALSO the original account belonging to Jordan B. Peterson!
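The attack described here is essentially a typosquatting generator. A minimal sketch of the spoofing step — enumerating every address that differs from the target by exactly one letter — might look like this (the function name and the letters-only alphabet are my own assumptions, not anything from the bug report):

```python
import string

def spoof_variants(email, alphabet=string.ascii_lowercase):
    """Generate all addresses that differ from `email` by exactly one
    letter in the local part (the part before the @)."""
    local, domain = email.split("@")
    variants = []
    for i, original_char in enumerate(local):
        for c in alphabet:
            if c != original_char:
                variants.append(f"{local[:i]}{c}{local[i+1:]}@{domain}")
    return variants

# Even a single substitution pass yields a large pool of near-miss accounts:
# a 15-character local part gives 15 × 25 = 375 candidate spoof addresses.
print(len(spoof_variants("jordanbpeterson@gmail.com")))
```

The point is the scale: one pass over a single address already produces hundreds of plausible spoofs, which is why a clustering-based abuse system can end up lumping the legitimate account in with the fakes.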

Source: Open Letter: Dear Attorney Representing Tulsi Gabbard, this is how Google is “accidentally” deactivating user accounts | Minds

Google “open sources” LiveTranscribe – except not really: only gives away Android coding examples to connect to Google’s cloud speech products

Live Transcribe is an Android application that provides real-time captioning for people who are deaf or hard of hearing. This repository contains the Android client libraries for communicating with Google’s Cloud Speech API that are used in Live Transcribe.

[…]
The libraries provided are nearly identical to those running in the production application Live Transcribe. They have been extensively field tested and unit tested. However, the tests themselves are not open sourced at this time.

Github: live-transcribe-speech-engine

This is part of the problem with big companies playing Open Source – it’s not giving away anything useful or of any value, it’s just showing you how to connect to a product you will have to pay for. But Google is playing this one up and pretending that it’s releasing something worthwhile. It’s a scam.

OMG Cable | Hackaday

The O.MG cable (or Offensive MG kit) from [MG] hides a backdoor inside the shell of a USB connector. Plug this cable into your computer and you’ll be the victim of remote attacks over WiFi.

You might be asking what’s inside this tiny USB cable to make it susceptible to such attacks. That’s the trick: inside the shell of the USB ‘A’ connector is a PCB loaded up with a WiFi microcontroller — the documentation doesn’t say which one — that will send payloads over the USB device. Think of it as a BadUSB device, like the USB Rubber Ducky from Hak5, but one that you can remote control. It is the ultimate way into a system, and all anyone has to do is plug a random USB cable into their computer.

In the years since BadUSB — an exploit hidden in a device’s USB controller itself — was released upon the world, [MG] has been tirelessly working on making his own malicious USB device, and now it’s finally ready. The O.MG cable hides a backdoor inside the shell of a standard, off-the-shelf USB cable.

The construction of this device is quite impressive, in that it fits entirely inside a USB plug. But this isn’t just a PCB from a random Chinese board house: [MG] spent 300 hours and $4,000 in the last month putting this project together with a Bantam mill and created his own PCBs, with silk screen. That’s impressive no matter how you cut it.

Source: OMG Cable | Hackaday

The maker’s page: http://mg.lol/blog/omg-cable/

Soft launch of the cable for USD 200

Google: Neural net can spot breast, prostate tumors through microscope

Google Health’s so-called augmented-reality microscope has proven surprisingly accurate at detecting and diagnosing cancerous tumors in real time.

The device is essentially a standard microscope decked out with two extra components: a camera, and a computer running AI software with an Nvidia Titan Xp GPU to accelerate the number crunching. The camera continuously snaps images of body tissue placed under the microscope, and passes these images to a convolutional neural network on the computer to analyze. In return, the neural net spits out, in real time allegedly, a heatmap of the cells in the image, labeling areas that are benign and abnormal on the screen for doctors to inspect.

Google’s eggheads tried using the device to detect the presence of cancer in samples of breast and prostate cells. The algorithms had a performance score of 0.92 when detecting cancerous lymph nodes in breast cancer and 0.93 for prostate cancer, with one being a perfect score, so it’s not too bad for what they describe as a proof of concept.

Details of the microscope system have been described in a paper published in Nature this week. The training data for breast cancer was taken from here, and here for prostate cancer. Some of the training data was reserved for inference testing.

The device is a pretty challenging system to build: it requires a processing pipeline that can handle, on the fly, microscope snaps that are high resolution enough to capture details at the cellular level. The images used in this experiment measure 5,120 × 5,120 pixels. That’s much larger than what’s typically used for today’s deep learning algorithms, which have millions of parameters and require billions of floating-point operations just to process images of 300 by 300 pixels.
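To give a feel for why that’s hard: a conventional CNN can’t ingest a 5,120 × 5,120 frame whole, so a pipeline like this would typically tile each snap into patch-sized crops, score each patch, and reassemble the scores into the heatmap. Here is a rough sketch of the tiling step — the 320-pixel patch size is an arbitrary assumption for illustration, not Google’s actual choice:

```python
import numpy as np

def tile_image(image, patch=320, stride=320):
    """Split a large microscope frame into patches for per-patch inference.
    Returns the patch stack plus each patch's (row, col) grid position,
    which is what you need to reassemble per-patch scores into a heatmap."""
    h, w = image.shape[:2]
    patches, positions = [], []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            patches.append(image[r:r + patch, c:c + patch])
            positions.append((r // stride, c // stride))
    return np.stack(patches), positions

frame = np.zeros((5120, 5120, 3), dtype=np.uint8)  # one microscope snap
patches, positions = tile_image(frame)
print(patches.shape)  # (256, 320, 320, 3): a 16 × 16 grid of patches
```

So every single snap means 256 forward passes through the network — and the camera is producing these continuously, which is why the GPU and a real-time pipeline matter.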

Source: It’s official – Google AI gives you cancer …diagnosis in real time: Neural net can spot breast, prostate tumors • The Register

Scientists Say They’ve Found a New Organ in Skin That Processes Pain

Typically, it’s thought that we perceive harmful sensations on our skin entirely through the very sensitive endings of certain nerve cells. These nerve cells aren’t coated by a protective layer of myelin, as other types are. Nerve cells are kept alive by and connected to other cells called glia; outside of the central nervous system, one of the two major types of glia is the Schwann cell.

An illustration of nociceptive Schwann cells
Illustration: Abdo, et al (Science)

The authors of the new study, published Thursday in Science, say they were studying these helper cells near the skin’s surface in the lab when they came across something strange—some of the Schwann cells seemed to form an extensive “mesh-like network” with their nerve cells, differently than how they interact with nerve cells elsewhere. When they ran further experiments with mice, they found evidence that these Schwann cells play a direct, added role in pain perception, or nociception.

One experiment, for instance, involved breeding mice with these cells in their paws that could be activated when the mice were exposed to light. Once the light came on, the mice seemed to behave like they were in pain, such as by licking themselves or guarding their paws. Later experiments found that these cells—since dubbed nociceptive Schwann cells by the team—respond to mechanical pain, like being pricked or hit by something, but not to cold or heat.

Because these cells are spread throughout the skin as an intricately connected system, the authors argue that the system should be considered an organ.

“Our study shows that sensitivity to pain does not occur only in the skin’s nerve [fibers], but also in this recently discovered pain-sensitive organ,” said senior study author Patrik Ernfors, a pain researcher at Sweden’s Karolinska Institute, in a release from the university.

Source: Scientists Say They’ve Found a New Organ in Skin That Processes Pain

Cut off your fingers: Data Breach in Biometric Security Platform Affecting Millions of Users across 83 countries – yes unencrypted and yes, editable

Led by internet privacy researchers Noam Rotem and Ran Locar, vpnMentor’s team recently discovered a huge data breach in security platform BioStar 2.  

BioStar 2 is a web-based biometric security smart lock platform. A centralized application, it allows admins to control access to secure areas of facilities, manage user permissions, integrate with 3rd party security apps, and record activity logs.

As part of the biometric software, BioStar 2 uses facial recognition and fingerprinting technology to identify users.

The app is built by Suprema, one of the world’s top 50 security manufacturers, with the highest market share in biometric access control in the EMEA region. Suprema recently partnered with Nedap to integrate BioStar 2 into their AEOS access control system.

AEOS is used by over 5,700 organizations in 83 countries, including some of the biggest multinational businesses, many small local businesses, governments, banks, and even the UK Metropolitan Police.

The data leaked in the breach is of a highly sensitive nature. It includes detailed personal information of employees and unencrypted usernames and passwords, giving hackers access to user accounts and permissions at facilities using BioStar 2. Malicious agents could use this to hack into secure facilities and manipulate their security protocols for criminal activities. 

This is a huge leak that endangers both the businesses and organizations involved, as well as their employees. Our team was able to access over 1 million fingerprint records, as well as facial recognition information. Combined with the personal details, usernames, and passwords, the potential for criminal activity and fraud is massive. 

Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.

[…]

Our team was able to access over 27.8 million records, a total of 23 gigabytes of data, which included the following information:

  • Access to client admin panels, dashboards, back end controls, and permissions
  • Fingerprint data
  • Facial recognition information and images of users
  • Unencrypted usernames, passwords, and user IDs
  • Records of entry and exit to secure areas
  • Employee records including start dates
  • Employee security levels and clearances
  • Personal details, including employee home address and emails
  • Businesses’ employee structures and hierarchies
  • Mobile device and OS information

[…]

With this leak, criminal hackers have complete access to admin accounts on BioStar 2. They can use this to take over a high-level account with complete user permissions and security clearances, and make changes to the security settings in an entire network. 

Not only can they change user permissions and lock people out of certain areas, but they can also create new user accounts – complete with facial recognition and fingerprints – to give themselves access to secure areas within a building or facility.

Furthermore, hackers can change the fingerprints of existing accounts to their own and hijack a user account to access restricted areas undetected. Hackers and other criminals could potentially create libraries of fingerprints to be used any time they want to enter somewhere without being detected.

This provides a hacker and their team open access to all restricted areas protected with BioStar 2. They also have access to activity logs, so they can delete or alter the data to hide their activities.

As a result, a hacked building’s entire security infrastructure becomes useless. Anybody with this data will have free movement to go anywhere they choose, undetected.

Source: Report: Data Breach in Biometric Security Platform Affecting Millions of Users

And that’s why biometrics are a poor choice for identification – you can’t change your fingertips, but you can edit the records. Using this data it should be fairly easy to print out fingerprints, if you can’t be bothered to edit the database instead.

Also Facebook Admits Yes, It Was Listening To Your Private Conversations via Messenger

“Much like Apple and Google, we paused human review of audio more than a week ago,” Facebook told Bloomberg on Tuesday.

The social media giant said that users could choose the option to have their voice chats on Facebook’s Messenger app transcribed. The contractors were testing artificial intelligence technology to make sure the messages were properly transcribed from voice to text.

Facebook has previously said that they are reading your messages on its Messenger App. Last year, Facebook CEO Mark Zuckerberg said that when “sensational messages” are found, “We stop those messages from going through.”

Zuckerberg also told Bloomberg last year that while conversations in the Messenger app are considered private, Facebook “scans them and uses the same tools to prevent abuse there that it does on the social network more generally.”

Source: Facebook Admits It Was Also Listening To Your Private Conversations | Digital Trends


Amazon, Google, Apple, Facebook – the five riders of the apocalypse are almost complete!

Ring Promised Swag to Users Who Narc on Their Neighbors

On top of turning their doorbell video feeds into a police surveillance network, Amazon’s home security subsidiary, Ring, also once tried to entice people with swag bags to snitch on their neighbors, Motherboard reported Friday.

The instructions are purportedly all laid out in a 2017 company presentation the publication obtained. Entitled “Digital Neighborhood Watch,” the slideshow apparently promised promo codes for Ring merch and other unspecified “swag” for those who formed watch groups, reported suspicious activity to the police, and raved about the device on social media. What qualifies as suspicious activity, you ask? According to the presentation, “strange vans and cars,” “people posing as utility workers,” and other dastardly deeds such as strolling down the street or peeping in car windows.

The slideshow goes on to outline monthly milestones for the group such as “Convert 10 new users” or “Solve a crime.” Meeting these goals would net the informant tiered Ring perks, as if directing police scrutiny was a rewards program and not an act that can threaten people’s lives, particularly people of color.

These teams would have a “Neighborhood Manager,” a.k.a. a Ring employee, to help talk them through how to share their Ring footage with local officers. The presentation stated that if one of these groups of amateur sleuths succeeded in helping police solve a crime, each member would receive $50 off their next Ring purchase.

When asked about the presentation, a Ring spokesperson told Motherboard the program debuted before Amazon bought the company for a cool $1 billion last year. According to Motherboard, they also said it didn’t run for long:

“This particular idea was not rolled out widely and was discontinued in 2017. We will continue to invent, iterate, and innovate on behalf of our neighbors while aligning with our three pillars of customer privacy, security, and user control. Some of these ideas become official programs, and many others never make it past the testing phase.”

While Ring did eventually launch a neighborhood watch app, it doesn’t offer the same incentives this 2017 program promised, so choosing to narc on your neighbor won’t win you any $50 off coupons.

Ring has been the subject of mounting privacy concerns after reports from earlier this year revealed the company may have accidentally let its employees snoop on customers among other customer complaints. Earlier this week, the company also stated that it has partnerships with “over 225 law enforcement agencies,” in part to help cops figure out how to get their hands on users’ surveillance footage.

Source: Ring Promised Swag to Users Who Narc on Their Neighbors

This is just evil

Researchers accurately measure blood pressure using phone camera

A study led by University of Toronto researchers, published today in the American Heart Association journal Circulation: Cardiovascular Imaging, found that blood pressure can be measured accurately by taking a quick video selfie.

Kang Lee, a professor of applied psychology and human development at the Ontario Institute for Studies in Education and Canada Research Chair in developmental neuroscience, was the lead author of the study, working alongside researchers from the Faculty of Medicine’s department of physiology, and from Hangzhou Normal University and Zhejiang Normal University in China.

Using a technology co-discovered by Lee and his postdoctoral researcher Paul Zheng called transdermal optical imaging, researchers measured the blood pressure of 1,328 Canadian and Chinese adults by capturing two-minute videos of their faces on an iPhone. Results were compared to standard devices used to measure blood pressure.

The researchers found they were able to measure three types of blood pressure with 95 to 96 per cent accuracy.

[…]

Transdermal optical imaging works by capitalizing on the translucent nature of facial skin. When light reaches the face, it penetrates the skin and reaches the hemoglobin underneath, which is red. This technology uses the optical sensor on a smartphone to capture the reflected red light from hemoglobin, which allows the technology to visualize and measure blood flow changes under the skin.
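For illustration only: the generic principle behind camera-based pulse measurement (known in the literature as remote photoplethysmography) can be sketched by averaging the red channel over a face crop in each video frame and reading the dominant frequency off an FFT. This is not Nuralogix’s proprietary transdermal optical imaging pipeline — the synthetic clip and the 1.5 Hz pulse below are my own assumptions — just the underlying idea:

```python
import numpy as np

def red_channel_signal(frames):
    """Given a stack of face-crop video frames (T, H, W, 3) in RGB,
    return the per-frame mean red-channel intensity — the raw signal
    that pulse-driven blood-volume changes modulate."""
    frames = np.asarray(frames, dtype=np.float64)
    return frames[..., 0].mean(axis=(1, 2))

# Synthetic 2-second clip at 30 fps with a 1.5 Hz (90 bpm) pulse
# riding on the red channel of every pixel.
t = np.arange(60) / 30.0
pulse = 2.0 * np.sin(2 * np.pi * 1.5 * t)
frames = np.full((60, 32, 32, 3), 128.0)
frames[..., 0] += pulse[:, None, None]
signal = red_channel_signal(frames)

# Recover the dominant frequency with an FFT as a heart-rate estimate.
freqs = np.fft.rfftfreq(len(signal), d=1 / 30.0)
peak = freqs[np.argmax(np.abs(np.fft.rfft(signal - signal.mean())))]
print(round(peak * 60))  # 90 (bpm)
```

A real system has to contend with motion, lighting changes, and skin-tone variation — which is exactly why the researchers flag the lack of very dark and very fair skin in their study sample further down.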

“From the video captured by the technology, you can see how the blood flows in different parts of the face and through this ebb and flow of blood in the face, you can get a lot of information,” says Lee.

He understood that the transdermal optical imaging technology had significant practical implications, so, with the help of U of T and MaRS, he formed a startup company called Nuralogix alongside entrepreneur Marzio Pozzuoli, who is now the CEO.

[…]

Nuralogix has developed a smartphone app called Anura that allows people to try out the transdermal optical imaging software for themselves. In the publicly available version of the app, people can record a 30-second video of their face and will receive measurements for stress levels and resting heart rate. In the fall, the company will release a version of the app in China that includes blood pressure measurements.

Lee says there is more research to be done to ensure that health measurements using transdermal optical imaging are as accurate as possible. In the recent study, for example, only people with regular or slightly higher blood pressure were measured. The study sample also did not have people with very dark or very fair skin. More diverse research subjects will make measurements more accurate, says Lee, but there are challenges when looking for people with very high and low blood pressure.

“In order to improve our app to make it usable, particularly for people with hypertension, we need to collect a lot of data from them, which is very, very hard because a lot of them are already taking medicine,” says Lee. “Ethically, we cannot tell them not to take medicine, but from time to time, we get participants who do not take medicine so we can get hypertensive and hypotensive people this way.”

While there are a wide range of applications for transdermal optical imaging technology, Lee says data privacy is of utmost concern. He says when a person uses the software by recording a video of their face, only the results are uploaded to the cloud but the video is not.

“We only extract blood flow information from your face and send that to the cloud. So from the cloud, if I look at your blood flow, I couldn’t tell it is you,” he says.

[…]

The research team also hopes to expand the capabilities of the technology to measure other health markers, including blood-glucose levels, hemoglobin and cholesterol.

Nuralogix plans on monetizing the technology by making an app that allows consumers to pay a low monthly fee to access more detailed health data. They are also licensing the technology through a product called DeepAffex, a cloud-based AI engine that can be used by businesses who are interested in the transdermal optical imaging technology in a range of industries from health care to security.

Source: Preventative health at your fingertips: U of T researchers accurately measure blood pressure using phone camera

Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data

In a presentation at the Black Hat security conference in Las Vegas James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.

[…]

For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.

In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.

Over the space of two months Pavur sent out 150 GDPR requests in his fiancée’s name, asking for all and any data on her. In all, 72 per cent of companies replied back, and 83 companies said that they had information on her.

Interestingly, five per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They may be in for a rude shock if they have a meaningful presence in the EU and come before the courts.

Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.

A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.

The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.

A threat intelligence company – not Have I been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid repetition of this.

“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”

Fixing this issue is going to take action from both legislators and companies, Pavur said.

First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.

Source: Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data • The Register

Researchers Bypass Apple FaceID Using glasses to fool liveness detection

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim’s FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim’s face, the researchers demonstrated how they could bypass Apple’s FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

To launch the attack, researchers with Tencent tapped into a feature behind biometrics called “liveness” detection, which is part of the biometric authentication process that sifts through “real” versus “fake” features on people.

[…]

Researchers specifically homed in on how liveness detection scans a user’s eyes. They discovered that the abstraction of the eye for liveness detection renders a black area (the eye) with a white point on it (the iris). And, they discovered that if a user is wearing glasses, the way that liveness detection scans the eyes changes.

“After our research we found weak points in FaceID… it allows users to unlock while wearing glasses… if you are wearing glasses, it won’t extract 3D information from the eye area when it recognizes the glasses.”

Putting these two factors together, researchers created a prototype pair of glasses – dubbed “X-glasses” – with black tape on the lenses and white tape inside the black tape. Using this trick they were then able to unlock a victim’s mobile phone and transfer their money through a mobile payment app, by placing the tape-covered glasses on the sleeping victim’s face to bypass the attention detection mechanism of both FaceID and other similar technologies.

The attack comes with obvious drawbacks – the victim must be unconscious, for one, and can’t wake up when the glasses are placed on their face.

Source: Researchers Bypass Apple FaceID Using Biometrics ‘Achilles Heel’ | Threatpost

Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:

  • Target “may share your personal information with other companies which are not part of Target.”
  • Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
  • Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”

This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!

Source: Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Mysterious, Ancient Radio Signals Keep Pelting Earth. Astronomers Designed an AI to Hunt Them Down.

Sudden shrieks of radio waves from deep space keep slamming into radio telescopes on Earth, spattering those instruments’ detectors with confusing data. And now, astronomers are using artificial intelligence to pinpoint the source of the shrieks, in the hope of explaining what’s sending them to Earth from — researchers suspect — billions of light-years across space.

Usually, these weird, unexplained signals are detected only after the fact, when astronomers notice out-of-place spikes in their data — sometimes years after the incident. The signals have complex, mysterious structures, patterns of peaks and valleys in radio waves that play out in just milliseconds. That’s not the sort of signal astronomers expect to come from a simple explosion, or any other one of the standard events known to scatter spikes of electromagnetic energy across space. Astronomers call these strange signals fast radio bursts (FRBs). Ever since the first one was uncovered in 2007, using data recorded in 2001, there’s been an ongoing effort to pin down their source. But FRBs arrive at random times and places, and existing human technology and observation methods aren’t well-primed to spot these signals.

Now, in a paper published July 4 in the journal Monthly Notices of the Royal Astronomical Society, a team of astronomers wrote that they managed to detect five FRBs in real time using a single radio telescope. [The 12 Strangest Objects in the Universe]

Wael Farah, a doctoral student at Swinburne University of Technology in Melbourne, Australia, developed a machine-learning system that recognized the signatures of FRBs as they arrived at the University of Sydney’s Molonglo Radio Observatory, near Canberra. As Live Science has previously reported, many scientific instruments, including radio telescopes, produce more data per second than they can reasonably store. So they don’t record anything in the finest detail except their most interesting observations.

Farah’s system trained the Molonglo telescope to spot FRBs and switch over to its most detailed recording mode, producing the finest records of FRBs yet.
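The pattern here – a cheap detector watching the live stream, with full-detail recording only on a trigger – can be sketched like this (a hypothetical toy, not Farah’s actual pipeline):

```python
# Toy sketch of the "detect, then record in full detail" pattern
# described above (hypothetical; not Wael Farah's actual system).
# A cheap detector runs over a sliding window of the incoming
# stream; a positive trigger is where a real telescope would
# switch into its high-resolution recording mode.

from collections import deque

def looks_like_frb(samples, threshold=5.0):
    """Stand-in detector: flag a burst-like spike over the local mean."""
    mean = sum(samples) / len(samples)
    return max(samples) > threshold * max(mean, 1e-9)

def monitor(stream, window=8):
    buf = deque(maxlen=window)
    triggers = []
    for t, sample in enumerate(stream):
        buf.append(sample)
        if len(buf) == window and looks_like_frb(buf):
            triggers.append(t)  # real system: start full-detail recording here
            buf.clear()
    return triggers

quiet = [1.0] * 20
burst = quiet[:10] + [50.0] + quiet[10:]
print(monitor(quiet))  # []
print(monitor(burst))  # [10]
```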

Based on their data, the researchers predicted that between 59 and 157 theoretically detectable FRBs splash across our skies every day. The scientists also used the immediate detections to hunt for related flares in data from X-ray, optical and other radio telescopes — in hopes of finding some visible event linked to the FRBs — but had no luck.

Their research showed, however, that one of the most peculiar (and frustrating, for research purposes) traits of FRBs appears to be real: The signals, once arriving, never repeat themselves. Each one appears to be a singular event in space that will never happen again.

Source: Mysterious, Ancient Radio Signals Keep Pelting Earth. Astronomers Designed an AI to Hunt Them Down. | Live Science

Apple Is Locking iPhone Batteries to Discourage Repair, showing ominous errors if you replace your battery

By activating a dormant software lock on its newest iPhones, Apple is effectively announcing a drastic new policy: only Apple batteries can go in iPhones, and only Apple can install them.

If you replace the battery in the newest iPhones, a message indicating you need to service your battery appears in Settings > Battery, next to Battery Health. The “Service” message is normally an indication that the battery is degraded and needs to be replaced. The message still shows up when you put in a brand new battery, however. Here’s the bigger problem: our lab tests confirmed that even when you swap in a genuine Apple battery, the phone will still display the “Service” message.

It’s not a bug; it’s a feature Apple wants. Unless an Apple Genius or an Apple Authorized Service Provider authenticates a battery to the phone, that phone will never show its battery health and always report a vague, ominous problem.

Source: Apple Is Locking iPhone Batteries to Discourage Repair – iFixit

Skype, Cortana also have humans listening to you. The fine print says Microsoft listens to your audio recordings to improve its AI, but that means humans are listening.

If you use Skype’s AI-powered real-time translator, brief recordings of your calls may be passed to human contractors, who are expected to listen in and correct the software’s translations to improve it.

That means 10-second or so snippets of your sweet nothings, mundane details of life, personal information, family arguments, and other stuff discussed on Skype sessions via the translation feature may be eavesdropped on by strangers, who check the translations for accuracy and feed back any changes into the machine-learning system to retrain it.

An acknowledgement that this happens is buried in an FAQ for the translation service, which states:

To help the translation and speech recognition technology learn and grow, sentences and automatic transcripts are analyzed and any corrections are entered into our system, to build more performant services.

Microsoft reckons it is being transparent in the way it processes recordings of people’s Skype conversations. Yet one thing is missing from that above passage: humans. The calls are analyzed by humans. The more technological among you will have assumed living, breathing people are involved at some point in fine-tuning the code and may therefore have to listen to some call samples. However, not everyone will realize strangers are, so to speak, sticking a cup against the wall of rooms to get an idea of what’s said inside, and so it bears reiterating.

Especially seeing as sample recordings of people’s private Skype calls were leaked to Vice, demonstrating that the Windows giant’s security isn’t all that. “The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” one of the translation service’s contractors told the digital media monolith.

[…]

The translation contractors use a secure and confidential website provided by Microsoft to access samples awaiting playback and analysis, which are, apparently, scrubbed of any information that could identify those recorded and the devices used. For each recording, the human translators are asked to pick from a list of AI-suggested translations that potentially apply to what was overheard, or they can override the list and type in their own.

Also, the same goes for Cortana, Microsoft’s voice-controlled assistant: the human contractors are expected to listen to people’s commands to appraise the code’s ability to understand what was said. The Cortana privacy policy states:

When you use your voice to say something to Cortana or invoke skills, Microsoft uses your voice data to improve Cortana’s understanding of how you speak.

Buried deeper in Microsoft’s all-encompassing fine print is this nugget (with our emphasis):

We also share data with Microsoft-controlled affiliates and subsidiaries; with vendors working on our behalf; when required by law or to respond to legal process; to protect our customers; to protect lives; to maintain the security of our products; and to protect the rights and property of Microsoft and its customers.

[…]

Separately, spokespeople for the US tech titan claimed in an email to El Reg that users’ audio data is only collected and used after they opt in, however, as we’ve said, it’s not clear folks realize they are opting into letting strangers snoop on multi-second stretches of their private calls and Cortana commands. You can also control what voice data Microsoft obtains, and how to delete it, via a privacy dashboard, we were reminded.

In short, Redmond could just say flat out it lets humans pore over your private and sensitive calls and chats, as well as machine-learning software, but it won’t because it knows folks, regulators, and politicians would freak out if they knew the full truth.

This comes as Apple stopped using human contractors to evaluate people’s conversations with Siri, and Google came under fire in Europe for letting workers snoop on its smart speakers and assistant. Basically, as we’ve said, if you’re talking to or via an AI, you’re probably also talking to a person – and perhaps even the police.

Source: Reminder: When a tech giant says it listens to your audio recordings to improve its AI, it means humans are listening. Right, Skype? Cortana? • The Register

Genealogists running into the AVG (the Dutch GDPR)

The index cards used to connect families in Benelux provinces, as well as the family trees published online, have been heavily anonymized, which makes it nearly impossible to connect the dots when you don’t know when someone was born. Pictures and documents are being removed willy-nilly from archives, in contravention of the archive laws (or openness laws, which guarantee publication of data after a certain amount of time). Uncertainty about how far the AVG goes is leading people to take a very heavy-handed view of it.

Source: Stamboomonderzoekers lopen tegen AVG aan – Emerce

Take-Two Sends Investigators To YouTuber’s House To Crack Down On Borderlands 3 Leaks – wait, you can send your own private police force to muscle in on people in the USA? Kafka-esque experience follows, with service shutdowns

After two weeks of no uploads, a notable Borderlands personality on YouTube returned to the platform yesterday with a video explaining his absence. He said that the game’s publisher Take-Two Interactive hit his channel with several copyright strikes and sent investigators to his home in response to months of Borderlands coverage on his channel, which included leaks about upcoming games in the series.

[…]

Take-Two subsidiary 2K Games, however, said the YouTuber’s actions were sometimes illegal and harmful to the Borderlands community. “The action we’ve taken is the result of a 10-month investigation and a history of this creator profiting from breaking our policies, leaking confidential information about our product, and infringing our copyrights,” a 2K Games rep said in a statement. “Not only were many of his actions illegal, but they were negatively impacting the experiences of other content creators and our fans in anticipation for the game.”

The company did not specify which of Somers’ actions it believes broke the law.

Somers’ videos include playthroughs of the Borderlands series as well as tips, tricks, and an in-depth history series that explores the lore of the Borderlands universe. For the last year, Somers’ channel has also been home to Borderlands 3 leaks and speculation, which he always attributed to either unnamed sources or the work of a community of fans digging through SteamDB, a third-party data repository that shows the work being done behind-the-scenes to get games ready for the PC platform.

Wherever he was getting his information from, Somers got a lot of things right

[…]

In his return video, Somers goes into great detail about what happened to him, his YouTube channel, and his Discord server. Somers claims that on July 25, investigators showed up at his home in New Jersey and questioned him on behalf of the New York-based Take-Two Interactive, the parent company of Borderlands publisher 2K Games. He describes being tense due to strangers trespassing on his private property and regrets having spoken with them. Somers allegedly answered questions about his channel and various information he had previously reported on

[…]

his YouTube channel, which was later hit by seven copyright strikes he says Take-Two handed down following his visit from the private investigators. Since then, all but one of these copyright strikes have been removed from his channel, allowing it to remain live, although he’s unsure if this means they were rescinded by Take-Two or removed by YouTube.

In addition to the strikes against his YouTube channel, Somers says that his Discord server and his Discord account were terminated 20 minutes after the private investigators left. The explanation he got from an automated Discord email was that his account was “involved in selling, promoting, or distributing cheats, hacks, or cracked accounts.” He says that no information was provided as to who was behind this shutdown and denies that anything of the sort took place in his Discord server.

[…]

A rep for 2K Games, however, called the video “incomplete and in some cases untrue.” They noted that “Take-Two and 2K take the security and confidentiality of trade secrets very seriously,” adding that the company “will take the necessary actions to defend against leaks and infringement of our intellectual property that not only potentially impact our business and partners, but more importantly may negatively impact the experiences of our fans and customers.”

The rep declined to provide further information on Take-Two and 2K’s investigation.

Source: Take-Two Sends Investigators To YouTuber’s House To Crack Down On Borderlands 3 Leaks

What really gets me here is the callous way in which he was booted from several services (YouTube and Discord) with no idea why, no way to fix the problem, and no explanation of how it was fixed in the end. It’s the same black hole Amazon sellers live in terror of. These services are now too big to be allowed to get away with “it’s a free service and you can choose not to use it” – there are no viable alternatives. The creation and enforcement of the rules cannot be left in the hands of entities that are solely interested in profit.

A reminder why Open Source is so important: Someone audited Kubernetes

The Cloud Native Computing Foundation (CNCF) today released a security audit of Kubernetes, the widely used container orchestration software, and the findings are about what you’d expect for a project with about two million lines of code: there are plenty of flaws that need to be addressed.

The CNCF engaged two security firms, Trail of Bits and Atredis Partners, to poke around Kubernetes code over the course of four months. The companies looked at Kubernetes components involved in networking, cryptography, authentication, authorization, secrets management, and multi-tenancy.

Having identified 34 vulnerabilities – 4 high severity, 15 medium severity, 8 low severity and 7 informational severity – the Trail of Bits report advises project developers to rely more on standard libraries, to avoid custom parsers and specialized configuration systems, to choose “sane defaults,” and to ensure correct filesystem and kernel interactions prior to performing operations.

“The assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly designed security controls,” the Trail of Bits report revealed. “Also, the state of the Kubernetes codebase has significant room for improvement.”

Underscoring these findings, Kubernetes 1.13.9, 1.14.5, and 1.15.2 were released on Monday to fix two security issues in the software, CVE-2019-11247 and CVE-2019-11249. The former could allow a user in one namespace to access a resource scoped to a cluster. The latter could allow a malicious container to create or replace a file on the client computer when the client employs the kubectl cp command.

As noted by the CNCF, the security auditors found: policy application inconsistencies, which prompt a false sense of security; insecure TLS used by default; environmental variables and command-line arguments that reveal credentials; secrets leaked in logs; no support for certificate revocation, and seccomp (a system-call filtering mechanism in the Linux kernel) not activated by default.

The findings include advice to cluster admins, such as not using both Role-Based Access Controls and Attribute-Based Access Controls because of the potential for inadvertent permission grants if one of these fails.

They also include various recommendations and best practices for developers to follow as they continue making contributions to Kubernetes.

For example, one recommendation is to avoid hardcoding file paths to dependencies. The report points to Kubernetes’ kubelet process, “where a dependency on hardcoded paths for PID files led to a race condition which could allow an attacker to escalate privileges.”
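The hazard being described is the classic check-then-use (TOCTOU) race on a predictable path. A generic illustration of the racy pattern and the atomic alternative (not Kubernetes code; the actual kubelet finding is detailed in the Trail of Bits report):

```python
# Generic illustration of the hazard the report describes: a
# hardcoded, predictable PID-file path invites a check-then-use
# (TOCTOU) race, because another local user can pre-create or swap
# the file between the existence check and the write. This is not
# Kubernetes code.

import os
import tempfile

UNSAFE_PID_PATH = "/tmp/myservice.pid"  # predictable, world-writable dir

def write_pid_unsafe():
    # Racy: between exists() and open(), an attacker can create
    # /tmp/myservice.pid (or a symlink to a sensitive file) and win.
    if not os.path.exists(UNSAFE_PID_PATH):
        with open(UNSAFE_PID_PATH, "w") as f:
            f.write(str(os.getpid()))

def write_pid_safer(directory):
    # Safer: O_CREAT|O_EXCL fails atomically if the file already
    # exists, closing the check-then-use window entirely.
    path = os.path.join(directory, "myservice.pid")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(str(os.getpid()))
    return path

with tempfile.TemporaryDirectory() as d:
    p = write_pid_safer(d)
    print(open(p).read() == str(os.getpid()))  # True
    try:
        write_pid_safer(d)  # second attempt fails atomically
    except FileExistsError:
        print("refused: pid file already exists")
```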

The report also advises enforcing minimum file permissions, monitoring processes on Linux, and taking various other steps to make Kubernetes more secure.

In an email to The Register, Chris Aniszczyk, CTO and COO of CNCF, expressed satisfaction with the audit process. “We view it positively that the whole process of doing a security audit was handled transparently by the members of the Kubernetes Security Audit WG, from selecting a vendor to working with the upstream project,” he said. “I don’t know of any other open source organization that has shared and open sourced the whole process around a security audit and the results. Transparency builds trust in open source communities, especially around security.”

Asked how he’d characterize the risks present in Kubernetes at the moment, Aniszczyk said, “The Kubernetes developers responded quickly and created appropriate CVEs for critical issues. In the end, we would rather have the report speak for itself in terms of the findings and recommendations.”

Source: Captain, we’ve detected a disturbance in space-time. It’s coming from Earth. Someone audited the Kubernetes source • The Register

Why is this good? Because these holes will be fixed instead of exploited.

Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

According to a new report, Ring is also instructing cops on how to persuade customers to hand over surveillance footage even when they aren’t responsive to police requests.

According to a police memo obtained by Gizmodo and reported last week, Ring has partnerships with “over 225 law enforcement agencies,” and Ring is actively involved in scripting and approving how police communicate those partnerships. As part of these relationships, Ring helps police obtain surveillance footage both by alerting customers in a given area that footage is needed and by asking them to “share videos” with police. In a disclaimer included with the alerts, Ring claims that sharing the footage “is absolutely your choice.”

But according to documents and emails obtained by Motherboard, Ring also instructed police from two departments in New Jersey on how best to coax the footage out of Ring customers through its “neighborhood watch” app Neighbors in situations where police requests for video were not being met, including by providing police with templates for requests and by encouraging them to post often on the Neighbors app as well as on social media.

In one such email obtained by Motherboard, a Bloomfield Police Department detective requested advice from a Ring associate on how best to obtain videos after his requests were not being answered and further asked whether there was “anything that we can blast out to encourage Ring owners to share the videos when requested.”

In this email correspondence, the Ring associate informed the detective that a significant part of customer “opt in for video requests is based on the interaction law enforcement has with the community,” adding that the detective had done a “great job interacting with [community members] and this will be critical in regard to increased opt in rate.”

“The more users you have the more useful information you can collect,” the associate wrote.

Ring did not immediately return our request for comment about the practice of instructing police how to better obtain surveillance footage from its own customers. However, a spokesperson told Motherboard in a statement that the company “offers Neighbors app trainings and best practices for posting and engaging with app users for all law enforcement agencies utilizing the portal tool,” including by providing “templates and educational materials for police departments to utilize at their discretion.”

In addition to Gizmodo’s recent report that Ring is carefully controlling the messaging and implementation of its products with its police departments, a report from GovTech on Friday claimed that Amazon is also helping police work around denied requests by customers to supply their Ring footage. In such instances, according to the report, police can approach Ring’s parent company Amazon, which can provide the footage that police deem vital to an investigation.

“If we ask within 60 days of the recording and as long as it’s been uploaded to the cloud, then Ring can take it out of the cloud and send it to us legally so that we can use it as part of our investigation,” Tony Botti, public information officer for the Fresno County Sheriff’s Office, told GovTech. When contacted by Gizmodo, however, a Ring spokesperson denied this.

Source: Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

Must. Surveill. The. People.

Democratic Senate campaign group exposed 6.2 million Americans’ emails

Data breach researchers at security firm UpGuard found the data in late July, and traced the storage bucket back to a former staffer at the Democratic Senatorial Campaign Committee, an organization that seeks grassroots donations and contributions to help elect Democratic candidates to the U.S. Senate.

Following the discovery, UpGuard researchers reached out to the DSCC and the storage bucket was secured within a few hours. The researchers shared their findings exclusively with TechCrunch before publishing them.

The spreadsheet was titled “EmailExcludeClinton.csv” and was found in a similarly named unprotected Amazon S3 bucket without a password. The file was uploaded in 2010 — a year after former Democratic senator and presidential candidate Hillary Clinton, whom the data is believed to be named after, became secretary of state.

UpGuard said the data may be people “who had opted out or should otherwise be excluded” from the committee’s marketing.

screenshot

A redacted portion of the email spreadsheet (Image: UpGuard/supplied)

Stewart Boss, a spokesperson for the DSCC, denied the data came from Sen. Hillary Clinton’s campaign and claimed the data had been created using the committee’s own information.

“A spreadsheet from nearly a decade ago that was created for fundraising purposes was removed in compliance with the stringent protocols we now have in place,” he told TechCrunch in an email.

Despite several follow-ups, the spokesperson declined to say how the email addresses were collected, where the information came from, what the email addresses were used for, how long the bucket was exposed, or if the committee knew if anyone else accessed or obtained the data.

We also contacted the former DSCC staffer who owned the storage bucket and allegedly created the database, but did not hear back.

Most of the email addresses were from consumer providers like AOL, Yahoo, Hotmail and Gmail, but the UpGuard researchers also found more than 7,700 U.S. government email addresses and 3,400 U.S. military email addresses.

The DSCC security lapse is the latest in a string of data exposures in recent years — some of which were also discovered by UpGuard. Two incidents in 2015 and 2017 exposed 191 million and 198 million Americans’ voter data, respectively, including voter profiles and political persuasions. Last year, 14 million voter records on Texas residents were also found on an exposed server.

Source: Democratic Senate campaign group exposed 6.2 million Americans’ emails | TechCrunch

And Amazon still isn’t making these buckets secure by default.
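For what it’s worth, AWS does offer an opt-in “Block Public Access” setting for S3. A sketch of applying it with boto3 (the bucket name is hypothetical, and the call needs AWS credentials, so it’s wrapped in a function rather than executed here):

```python
# Sketch of opting an S3 bucket into "Block Public Access" with
# boto3. Assumes boto3 is installed and AWS credentials are
# configured; the bucket name below is hypothetical.

BLOCK_ALL_PUBLIC = {
    "BlockPublicAcls": True,       # reject new public ACLs
    "IgnorePublicAcls": True,      # ignore any existing public ACLs
    "BlockPublicPolicy": True,     # reject public bucket policies
    "RestrictPublicBuckets": True, # limit access to AWS principals only
}

def lock_down_bucket(bucket_name):
    import boto3  # assumed available: pip install boto3
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=BLOCK_ALL_PUBLIC,
    )

# e.g. lock_down_bucket("my-campaign-data")  # hypothetical bucket name
```

Until something like this is the default, the next “EmailExcludeClinton.csv” is only ever one misconfigured upload away.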