You might be asking what’s inside this tiny USB cable to make it susceptible to such attacks. That’s the trick: inside the shell of the USB ‘A’ connector is a PCB loaded up with a WiFi microcontroller — the documentation doesn’t say which one — that will send payloads over the USB device. Think of it as a BadUSB device, like the USB Rubber Ducky from Hak5, but one that you can remote control. It is the ultimate way into a system, and all anyone has to do is plug a random USB cable into their computer.
In the years since BadUSB — an exploit hidden in a device’s USB controller itself — was released upon the world, [MG] has been tirelessly working on his own malicious USB device, and now it’s finally ready. The O.MG cable hides a backdoor inside the shell of a standard, off-the-shelf USB cable.
The construction of this device is quite impressive, in that it fits entirely inside a USB plug. But this isn’t just a PCB from a random Chinese board house: [MG] spent 300 hours and $4,000 over the last month putting this project together on a Bantam mill, creating his own PCBs, silk screen and all. That’s impressive no matter how you cut it.
Google Health’s so-called augmented-reality microscope has proven surprisingly accurate at detecting and diagnosing cancerous tumors in real time.
The device is essentially a standard microscope decked out with two extra components: a camera, and a computer running AI software with an Nvidia Titan Xp GPU to accelerate the number crunching. The camera continuously snaps images of body tissue placed under the microscope and passes them to a convolutional neural network on the computer for analysis. In return, the neural net spits out, allegedly in real time, a heatmap of the cells in the image, labeling benign and abnormal areas on screen for doctors to inspect.
Google’s eggheads tried using the device to detect the presence of cancer in samples of breast and prostate cells. The algorithms had a performance score of 0.92 when detecting cancerous lymph nodes in breast cancer and 0.93 for prostate cancer, with one being a perfect score, so it’s not too bad for what they describe as a proof of concept.
Details of the microscope system have been described in a paper published in Nature this week. The training data for breast cancer was taken from here, and here for prostate cancer. Some of the training data was reserved for inference testing.
The device is a pretty challenging system to build: it requires a processing pipeline that can handle, on the fly, microscope snaps that are high resolution enough to capture details at the cellular level. The images used in this experiment measure 5,120 × 5,120 pixels. That’s much larger than what’s typically fed to today’s deep learning algorithms, which have millions of parameters and require billions of floating-point operations just to process images as big as 300 × 300 pixels.
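As a rough illustration of why image size matters, the usual workaround is to tile a huge slide image into CNN-sized patches and assemble the per-patch scores into a heatmap. The sketch below is a hypothetical outline, not Google’s pipeline: the CNN is stubbed out with a placeholder scoring function, and the patch and stride sizes are assumptions for illustration.

```python
import numpy as np

def patch_heatmap(image, patch=299, stride=512, score_fn=None):
    """Slide a window across a large slide image and score each patch.

    score_fn stands in for the trained CNN; the default is a dummy
    scorer so the sketch runs without a model.
    """
    if score_fn is None:
        score_fn = lambda p: float(p.mean())  # placeholder "tumor probability"
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            heat[i, j] = score_fn(image[y:y + patch, x:x + patch])
    return heat

# A 5,120 x 5,120 input, the image size quoted in the article
slide = np.random.rand(5120, 5120)
heatmap = patch_heatmap(slide)
print(heatmap.shape)  # (10, 10)
```

Each heatmap cell then corresponds to one region of the slide, which is roughly how a per-region overlay for the pathologist can be produced without ever pushing the full-resolution image through the network at once.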
Typically, it’s thought that we perceive harmful sensations on our skin entirely through the very sensitive endings of certain nerve cells. These nerve cells aren’t coated in a protective layer of myelin, as other types are. Nerve cells are kept alive by, and connected to, other cells called glia; outside the central nervous system, one of the two major types of glia is the Schwann cell.
An illustration of nociceptive Schwann cells
Illustration: Abdo, et al (Science)
The authors of the new study, published Thursday in Science, say they were studying these helper cells near the skin’s surface in the lab when they came across something strange—some of the Schwann cells seemed to form an extensive “mesh-like network” with their nerve cells, differently than how they interact with nerve cells elsewhere. When they ran further experiments with mice, they found evidence that these Schwann cells play a direct, added role in pain perception, or nociception.
One experiment, for instance, involved breeding mice with these cells in their paws that could be activated when the mice were exposed to light. Once the light came on, the mice seemed to behave like they were in pain, such as by licking themselves or guarding their paws. Later experiments found that these cells—since dubbed nociceptive Schwann cells by the team—respond to mechanical pain, like being pricked or hit by something, but not to cold or heat.
Because these cells are spread throughout the skin as an intricately connected system, the authors argue that the system should be considered an organ.
“Our study shows that sensitivity to pain does not occur only in the skin’s nerve [fibers], but also in this recently discovered pain-sensitive organ,” said senior study author Patrik Ernfors, a pain researcher at Sweden’s Karolinska Institute, in a release from the university.
Led by internet privacy researchers Noam Rotem and Ran Locar, vpnMentor’s team recently discovered a huge data breach in security platform BioStar 2.
BioStar 2 is a web-based biometric security smart lock platform. A centralized application, it allows admins to control access to secure areas of facilities, manage user permissions, integrate with 3rd party security apps, and record activity logs.
As part of the biometric software, BioStar 2 uses facial recognition and fingerprinting technology to identify users.
The app is built by Suprema, one of the world’s top 50 security manufacturers, with the highest market share in biometric access control in the EMEA region. Suprema recently partnered with Nedap to integrate BioStar 2 into their AEOS access control system.
AEOS is used by over 5,700 organizations in 83 countries, including some of the biggest multinational businesses, many small local businesses, governments, banks, and even the UK Metropolitan Police.
The data leaked in the breach is of a highly sensitive nature. It includes detailed personal information of employees and unencrypted usernames and passwords, giving hackers access to user accounts and permissions at facilities using BioStar 2. Malicious agents could use this to hack into secure facilities and manipulate their security protocols for criminal activities.
This is a huge leak that endangers both the businesses and organizations involved, as well as their employees. Our team was able to access over 1 million fingerprint records, as well as facial recognition information. Combined with the personal details, usernames, and passwords, the potential for criminal activity and fraud is massive.
Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.
[…]
Our team was able to access over 27.8 million records, a total of 23 gigabytes of data, which included the following information:
Access to client admin panels, dashboards, back end controls, and permissions
Fingerprint data
Facial recognition information and images of users
Unencrypted usernames, passwords, and user IDs
Records of entry and exit to secure areas
Employee records including start dates
Employee security levels and clearances
Personal details, including employee home address and emails
Businesses’ employee structures and hierarchies
Mobile device and OS information
[…]
With this leak, criminal hackers have complete access to admin accounts on BioStar 2. They can use this to take over a high-level account with complete user permissions and security clearances, and make changes to the security settings in an entire network.
Not only can they change user permissions and lock people out of certain areas, but they can also create new user accounts – complete with facial recognition and fingerprints – to give themselves access to secure areas within a building or facility.
Furthermore, hackers can change the fingerprints of existing accounts to their own and hijack a user account to access restricted areas undetected. Hackers and other criminals could potentially create libraries of fingerprints to be used any time they want to enter somewhere without being detected.
This provides a hacker and their team open access to all restricted areas protected with BioStar 2. They also have access to activity logs, so they can delete or alter the data to hide their activities.
As a result, a hacked building’s entire security infrastructure becomes useless. Anybody with this data will have free movement to go anywhere they choose, undetected.
And that’s why biometrics are a poor choice for identification: you can’t change your fingerprints, but an attacker can edit the records. With this data it should be fairly easy to print out working fingerprints, if you can’t be bothered to edit the database instead.
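The “unencrypted usernames and passwords” part of the leak is the most easily avoided failure. A minimal sketch of how credentials should be stored, using a salted, slow key-derivation function (PBKDF2 from Python’s standard library here; scrypt, bcrypt, or Argon2 are stronger choices — the iteration count below is an illustrative assumption):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a slow, salted hash; store only (salt, digest), never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("hunter2", salt, digest))  # False
```

Note that this works for passwords because exact matches can be compared. Fingerprint templates are fuzzy and cannot be hashed this way, which is part of why leaking them is so much worse.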
Facebook outsourced contractors to listen in on your audio messenger chats and transcribe them, a new report reveals.
Bloomberg reports that the contractors were not told why they were listening in or why they were transcribing them. Facebook confirmed the reports but said they are no longer transcribing audio.
“Much like Apple and Google, we paused human review of audio more than a week ago,” Facebook told Bloomberg on Tuesday.
The social media giant said that users could choose the option to have their voice chats on Facebook’s Messenger app transcribed. The contractors were testing artificial intelligence technology to make sure the messages were properly transcribed from voice to text.
Facebook has previously said that they are reading your messages on its Messenger App. Last year, Facebook CEO Mark Zuckerberg said that when “sensational messages” are found, “We stop those messages from going through.”
Zuckerberg also told Bloomberg last year that while conversations in the Messenger app are considered private, Facebook “scans them and uses the same tools to prevent abuse there that it does on the social network more generally.”
On top of turning their doorbell video feeds into a police surveillance network, Amazon’s home security subsidiary, Ring, also once tried to entice people with swag bags to snitch on their neighbors, Motherboard reported Friday.
The instructions are purportedly all laid out in a 2017 company presentation the publication obtained. Entitled “Digital Neighborhood Watch,” the slideshow apparently promised promo codes for Ring merch and other unspecified “swag” for those who formed watch groups, reported suspicious activity to the police, and raved about the device on social media. What qualifies as suspicious activity, you ask? According to the presentation, “strange vans and cars,” “people posing as utility workers,” and other dastardly deeds such as strolling down the street or peeping in car windows.
The slideshow goes on to outline monthly milestones for the group such as “Convert 10 new users” or “Solve a crime.” Meeting these goals would net the informant tiered Ring perks, as if directing police scrutiny were a rewards program and not an act that can threaten people’s lives, particularly people of color.
These teams would have a “Neighborhood Manager,” a.k.a. a Ring employee, to help talk them through how to share their Ring footage with local officers. The presentation stated that if one of these groups of amateur sleuths succeeded in helping police solve a crime, each member would receive $50 off their next Ring purchase.
When asked about the presentation, a Ring spokesperson told Motherboard the program debuted before Amazon bought the company for a cool $1 billion last year. According to Motherboard, they also said it didn’t run for long:
“This particular idea was not rolled out widely and was discontinued in 2017. We will continue to invent, iterate, and innovate on behalf of our neighbors while aligning with our three pillars of customer privacy, security, and user control. Some of these ideas become official programs, and many others never make it past the testing phase.”
While Ring did eventually launch a neighborhood watch app, it doesn’t offer the same incentives this 2017 program promised, so choosing to narc on your neighbor won’t win you any $50 off coupons.
Ring has been the subject of mounting privacy concerns after reports from earlier this year revealed the company may have accidentally let its employees snoop on customers among other customer complaints. Earlier this week, the company also stated that it has partnerships with “over 225 law enforcement agencies,” in part to help cops figure out how to get their hands on users’ surveillance footage.
Kang Lee, a professor of applied psychology and human development at the Ontario Institute for Studies in Education and Canada Research Chair in developmental neuroscience, was the lead author of the study, working alongside researchers from the Faculty of Medicine’s department of physiology, and from Hangzhou Normal University and Zhejiang Normal University in China.
Using a technology called transdermal optical imaging, co-discovered by Lee and his postdoctoral researcher Paul Zheng, researchers measured the blood pressure of 1,328 Canadian and Chinese adults by capturing two-minute videos of their faces on an iPhone. Results were compared to those from standard blood pressure monitors.
The researchers found they were able to measure three types of blood pressure with 95 to 96 per cent accuracy.
[…]
Transdermal optical imaging works by capitalizing on the translucent nature of facial skin. When the light reaches the face, it penetrates the skin and reaches hemoglobin underneath it, which is red. This technology uses the optical sensor on a smartphone to capture the reflected red light from hemoglobin, which allows the technology to visualize and measure blood flow changes under the skin.
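The general idea resembles remote photoplethysmography, which can be sketched in a few lines of NumPy. This is a hypothetical toy, not Nuralogix’s algorithm: it averages the red channel of each video frame, removes the DC component, and picks the dominant frequency in the plausible heart-rate band.

```python
import numpy as np

def pulse_signal(frames, fps=30.0):
    """Extract a crude pulse signal from face video frames.

    frames: array of shape (n_frames, height, width, 3), RGB in [0, 1].
    Returns the per-frame red-channel signal and an estimated heart rate.
    """
    red = frames[..., 0].mean(axis=(1, 2))   # mean red intensity per frame
    red = red - red.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(red))
    freqs = np.fft.rfftfreq(len(red), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)     # plausible heart rates: 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return red, peak * 60.0                  # signal and estimated bpm

# Synthetic 10-second clip with a 1.2 Hz (72 bpm) brightness oscillation
t = np.arange(300) / 30.0
frames = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t)[:, None, None, None] * np.ones((1, 8, 8, 3))
signal, bpm = pulse_signal(frames, fps=30.0)
print(round(bpm))  # 72
```

Recovering heart rate from such a signal is comparatively easy; turning blood-flow waveforms into absolute blood pressure readings is the hard, model-driven part that the study addresses.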
“From the video captured by the technology, you can see how the blood flows in different parts of the face and through this ebb and flow of blood in the face, you can get a lot of information,” says Lee.
He understood that the transdermal optical imaging technology had significant practical implications, so, with the help of U of T and MaRS, he formed a startup company called Nuralogix alongside entrepreneur Marzio Pozzuoli, who is now the CEO.
[…]
Nuralogix has developed a smartphone app called Anura that allows people to try out the transdermal optical imaging software for themselves. In the publicly available version of the app, people can record a 30-second video of their face and will receive measurements for stress levels and resting heart rate. In the fall, the company will release a version of the app in China that includes blood pressure measurements.
Lee says there is more research to be done to ensure that health measurements using transdermal optical imaging are as accurate as possible. In the recent study, for example, only people with regular or slightly higher blood pressure were measured. The study sample also did not have people with very dark or very fair skin. More diverse research subjects will make measurements more accurate, says Lee, but there are challenges when looking for people with very high and low blood pressure.
“In order to improve our app to make it usable, particularly for people with hypertension, we need to collect a lot of data from them, which is very, very hard because a lot of them are already taking medicine,” says Lee. “Ethically, we cannot tell them not to take medicine, but from time to time, we get participants who do not take medicine so we can get hypertensive and hypotensive people this way.”
While there are a wide range of applications for transdermal optical imaging technology, Lee says data privacy is of utmost concern. He says when a person uses the software by recording a video of their face, only the results are uploaded to the cloud but the video is not.
“We only extract blood flow information from your face and send that to the cloud. So from the cloud, if I look at your blood flow, I couldn’t tell it is you,” he says.
[…]
The research team also hopes to expand the capabilities of the technology to measure other health markers, including blood-glucose levels, hemoglobin and cholesterol.
Nuralogix plans to monetize the technology with an app that lets consumers pay a low monthly fee to access more detailed health data. It is also licensing the technology through a product called DeepAffex, a cloud-based AI engine for businesses interested in transdermal optical imaging, across a range of industries from health care to security.
In a presentation at the Black Hat security conference in Las Vegas James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.
[…]
For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.
In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.
Over the space of two months Pavur sent out 150 GDPR requests in his fiancée’s name, asking for any and all data on her. In all, 72 per cent of companies replied, and 83 companies said that they had information on her.
Interestingly, five per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They may be in for a rude shock if they have a meaningful presence in the EU and come before the courts.
Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.
A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.
The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.
A threat intelligence company – not Have I been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid repetition of this.
“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”
Fixing this issue is going to take action from both legislators and companies, Pavur said.
First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.
Researchers on Wednesday at Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim’s FaceID and log into their phone simply by placing a pair of modified glasses on their face. By carefully placing tape over the lenses of a pair of glasses and putting them on the victim’s face, the researchers showed how they could bypass Apple’s FaceID in a specific scenario. The attack itself is difficult, given that the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.
To launch the attack, researchers with Tencent tapped into a feature behind biometrics called “liveness” detection, which is part of the biometric authentication process that sifts through “real” versus “fake” features on people.
[…]
Researchers specifically homed in on how liveness detection scans a user’s eyes. They discovered that the abstraction of the eye for liveness detection renders a black area (the eye) with a white point on it (the iris). They also discovered that if a user is wearing glasses, the way liveness detection scans the eyes changes.
“After our research we found weak points in FaceID… it allows users to unlock while wearing glasses… if you are wearing glasses, it won’t extract 3D information from the eye area when it recognizes the glasses.”
Putting these two factors together, the researchers created a prototype pair of glasses – dubbed “X-glasses” – with black tape on the lenses and white tape inside the black tape. Using this trick, they were able to unlock a victim’s phone and transfer his money through a mobile payment app, placing the taped glasses on the sleeping victim’s face to bypass the attention detection mechanism of FaceID and similar technologies.
The attack comes with obvious drawbacks – the victim must be unconscious, for one, and can’t wake up when the glasses are placed on their face.
Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:
Target “may share your personal information with other companies which are not part of Target.”
Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”
This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!
Sudden shrieks of radio waves from deep space keep slamming into radio telescopes on Earth, spattering those instruments’ detectors with confusing data. And now, astronomers are using artificial intelligence to pinpoint the source of the shrieks, in the hope of explaining what’s sending them to Earth from — researchers suspect — billions of light-years across space.
Usually, these weird, unexplained signals are detected only after the fact, when astronomers notice out-of-place spikes in their data — sometimes years after the incident. The signals have complex, mysterious structures, patterns of peaks and valleys in radio waves that play out in just milliseconds. That’s not the sort of signal astronomers expect to come from a simple explosion, or any other one of the standard events known to scatter spikes of electromagnetic energy across space. Astronomers call these strange signals fast radio bursts (FRBs). Ever since the first one was uncovered in 2007, using data recorded in 2001, there’s been an ongoing effort to pin down their source. But FRBs arrive at random times and places, and existing human technology and observation methods aren’t well-primed to spot these signals.
Wael Farah, a doctoral student at Swinburne University of Technology in Melbourne, Australia, developed a machine-learning system that recognized the signatures of FRBs as they arrived at the University of Sydney’s Molonglo Radio Observatory, near Canberra. As Live Science has previously reported, many scientific instruments, including radio telescopes, produce more data per second than they can reasonably store. So they don’t record anything in the finest detail except their most interesting observations.
Farah’s system trained the Molonglo telescope to spot FRBs and switch over to its most detailed recording mode, producing the finest records of FRBs yet.
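As a toy stand-in for that kind of trigger, the sketch below flags samples whose power spikes far above a rolling noise baseline: the sort of event that would flip the telescope into its detailed recording mode. The simple signal-to-noise threshold here is an assumption for illustration; the actual system uses a trained machine-learning classifier rather than a fixed cutoff.

```python
import numpy as np

def burst_trigger(samples, window=64, threshold=6.0):
    """Flag sample indices where power spikes above a rolling noise estimate.

    For each sample, the mean and standard deviation of the preceding
    `window` samples serve as the noise baseline; anything more than
    `threshold` standard deviations above it fires the trigger.
    """
    triggers = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-12
        if (samples[i] - mu) / sigma > threshold:
            triggers.append(i)
    return triggers

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1000)
noise[700] += 25.0                  # a bright, millisecond-scale burst in the noise
print(burst_trigger(noise))         # [700]
```

In a real pipeline this decision has to run faster than the data streams in, since the whole point is to start the high-resolution recording while the burst is still arriving.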
Based on their data, the researchers predicted that between 59 and 157 theoretically detectable FRBs splash across our skies every day. The scientists also used the immediate detections to hunt for related flares in data from X-ray, optical and other radio telescopes — in hopes of finding some visible event linked to the FRBs — but had no luck.
Their research showed, however, that one of the most peculiar (and frustrating, for research purposes) traits of FRBs appears to be real: the signals never repeat. Each one appears to be a singular event in space that will never happen again.
By activating a dormant software lock on their newest iPhones, Apple is effectively announcing a drastic new policy: only Apple batteries can go in iPhones, and only they can install them.
If you replace the battery in the newest iPhones, a message indicating you need to service your battery appears in Settings > Battery, next to Battery Health. The “Service” message is normally an indication that the battery is degraded and needs to be replaced. The message still shows up when you put in a brand new battery, however. Here’s the bigger problem: our lab tests confirmed that even when you swap in a genuine Apple battery, the phone will still display the “Service” message.
It’s not a bug; it’s a feature Apple wants. Unless an Apple Genius or an Apple Authorized Service Provider authenticates a battery to the phone, that phone will never show its battery health and always report a vague, ominous problem.
If you use Skype’s AI-powered real-time translator, brief recordings of your calls may be passed to human contractors, who are expected to listen in and correct the software’s translations to improve it.
That means 10-second or so snippets of your sweet nothings, mundane details of life, personal information, family arguments, and other stuff discussed on Skype sessions via the translation feature may be eavesdropped on by strangers, who check the translations for accuracy and feed back any changes into the machine-learning system to retrain it.
To help the translation and speech recognition technology learn and grow, sentences and automatic transcripts are analyzed and any corrections are entered into our system, to build more performant services.
Microsoft reckons it is being transparent in the way it processes recordings of people’s Skype conversations. Yet one thing is missing from that above passage: humans. The calls are analyzed by humans. The more technological among you will have assumed living, breathing people are involved at some point in fine-tuning the code and may therefore have to listen to some call samples. However, not everyone will realize strangers are, so to speak, sticking a cup against the wall of rooms to get an idea of what’s said inside, and so it bears reiterating.
Especially seeing as sample recordings of people’s private Skype calls were leaked to Vice, demonstrating that the Windows giant’s security isn’t all that. “The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” one of the translation service’s contractors told the digital media monolith.
[…]
The translation contractors use a secure and confidential website provided by Microsoft to access samples awaiting playback and analysis, which are, apparently, scrubbed of any information that could identify those recorded and the devices used. For each recording, the human translators are asked to pick from a list of AI-suggested translations that potentially apply to what was overheard, or they can override the list and type in their own.
Also, the same goes for Cortana, Microsoft’s voice-controlled assistant: the human contractors are expected to listen to people’s commands to appraise the code’s ability to understand what was said. The Cortana privacy policy states:
When you use your voice to say something to Cortana or invoke skills, Microsoft uses your voice data to improve Cortana’s understanding of how you speak.
Buried deeper in Microsoft’s all-encompassing fine print is this nugget (with our emphasis):
We also share data with Microsoft-controlled affiliates and subsidiaries; with vendors working on our behalf; when required by law or to respond to legal process; to protect our customers; to protect lives; to maintain the security of our products; and to protect the rights and property of Microsoft and its customers.
[…]
Separately, spokespeople for the US tech titan claimed in an email to El Reg that users’ audio data is only collected and used after they opt in, however, as we’ve said, it’s not clear folks realize they are opting into letting strangers snoop on multi-second stretches of their private calls and Cortana commands. You can also control what voice data Microsoft obtains, and how to delete it, via a privacy dashboard, we were reminded.
In short, Redmond could just say flat out it lets humans pore over your private and sensitive calls and chats, as well as machine-learning software, but it won’t because it knows folks, regulators, and politicians would freak out if they knew the full truth.
This comes as Apple stopped using human contractors to evaluate people’s conversations with Siri, and Google came under fire in Europe for letting workers snoop on its smart speakers and assistant. Basically, as we’ve said, if you’re talking to or via an AI, you’re probably also talking to a person – and perhaps even the police.
The cards used to connect families in Benelux provinces, as well as the family trees published online, have been heavily anonymized, which means it’s nearly impossible to connect the dots when you don’t know when someone was born. Pictures and documents are being removed willy-nilly from archives, in contravention of the archive laws (or openness laws, which guarantee publication of data after a certain amount of time). Uncertainty about how far the AVG (the Dutch implementation of the GDPR) goes is leading people to take a very heavy-handed view of it.
After two weeks of no uploads, a notable Borderlands personality on YouTube returned to the platform yesterday with a video explaining his absence. He said that the game’s publisher Take-Two Interactive hit his channel with several copyright strikes and sent investigators to his home in response to months of Borderlands coverage on his channel, which included leaks about upcoming games in the series.
[…]
Take-Two subsidiary 2K Games, however, said the YouTuber’s actions were sometimes illegal and harmful to the Borderlands community. “The action we’ve taken is the result of a 10-month investigation and a history of this creator profiting from breaking our policies, leaking confidential information about our product, and infringing our copyrights,” a 2K Games rep said in a statement. “Not only were many of his actions illegal, but they were negatively impacting the experiences of other content creators and our fans in anticipation for the game.”
The company did not specify which of Somers’ actions it believes broke the law.
Somers’ videos include playthroughs of the Borderlands series as well as tips, tricks, and an in-depth history series that explores the lore of the Borderlands universe. For the last year, Somers’ channel has also been home to Borderlands 3 leaks and speculation, which he always attributed to either unnamed sources or the work of a community of fans digging through SteamDB, a third-party data repository that shows the work being done behind-the-scenes to get games ready for the PC platform.
Wherever he was getting his information from, Somers got a lot of things right.
[…]
In his return video, Somers goes into great detail about what happened to him, his YouTube channel, and his Discord server. Somers claims that on July 25, investigators showed up at his home in New Jersey and questioned him on behalf of the New York-based Take-Two Interactive, the parent company of Borderlands publisher 2K Games. He describes being tense due to strangers trespassing on his private property and regrets having spoken with them. Somers allegedly answered questions about his channel and various information he had previously reported on
[…]
his YouTube channel, which was later hit by seven copyright strikes he says Take-Two handed down following his visit from the private investigators. Since then, all but one of these copyright strikes have been removed from his channel, allowing it to remain live, although he’s unsure if this means they were rescinded by Take-Two or removed by YouTube.
In addition to the strikes against his YouTube channel, Somers says that his Discord server and his Discord account were terminated 20 minutes after the private investigators left. The explanation he got from an automated Discord email was that his account was “involved in selling, promoting, or distributing cheats, hacks, or cracked accounts.” He says that no information was provided as to who was behind this shutdown and denies that anything of the sort took place in his Discord server.
[…]
A rep for 2K Games, however, called the video “incomplete and in some cases untrue.” They noted that “Take-Two and 2K take the security and confidentiality of trade secrets very seriously,” adding that the company “will take the necessary actions to defend against leaks and infringement of our intellectual property that not only potentially impact our business and partners, but more importantly may negatively impact the experiences of our fans and customers.”
The rep declined to provide further information on Take-Two and 2K’s investigation.
What really gets me here is the callous way in which he was booted from several services (YouTube and Discord) with no idea why, how to fix the problem, or how it was eventually fixed. It’s the same black hole Amazon sellers live in terror of. These services are now too big to be allowed to get away with “it’s a free service and you can choose not to use it” – there are no viable alternatives. The creation and enforcement of the rules cannot be left in the hands of entities that are solely interested in profit.
The Cloud Native Computing Foundation (CNCF) today released a security audit of Kubernetes, the widely used container orchestration software, and the findings are about what you’d expect for a project with about two million lines of code: there are plenty of flaws that need to be addressed.
The CNCF engaged two security firms, Trail of Bits and Atredis Partners, to poke around Kubernetes code over the course of four months. The companies looked at Kubernetes components involved in networking, cryptography, authentication, authorization, secrets management, and multi-tenancy.
Having identified 34 vulnerabilities – 4 high severity, 15 medium severity, 8 low severity and 7 informational severity – the Trail of Bits report advises project developers to rely more on standard libraries, to avoid custom parsers and specialized configuration systems, to choose “sane defaults,” and to ensure correct filesystem and kernel interactions prior to performing operations.
“The assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly designed security controls,” the Trail of Bits report revealed. “Also, the state of the Kubernetes codebase has significant room for improvement.”
Underscoring these findings, Kubernetes 1.13.9, 1.14.5, and 1.15.2 were released on Monday to fix two security issues in the software, CVE-2019-11247 and CVE-2019-11249. The former could allow a user in one namespace to access a resource scoped to a cluster. The latter could allow a malicious container to create or replace a file on the client computer when the client employs the kubectl cp command.
As noted by the CNCF, the security auditors found: policy application inconsistencies, which prompt a false sense of security; insecure TLS used by default; environmental variables and command-line arguments that reveal credentials; secrets leaked in logs; no support for certificate revocation, and seccomp (a system-call filtering mechanism in the Linux kernel) not activated by default.
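That last default is straightforward to override per workload. On the 1.13–1.15 releases covered by the audit this was done with the `seccomp.security.alpha.kubernetes.io/pod` annotation; in current Kubernetes releases the `securityContext` field serves the same purpose. A minimal sketch — the pod name and image are placeholders, not anything taken from the audit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example    # placeholder name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault  # opt in to the container runtime's default syscall filter
  containers:
    - name: app
      image: nginx:stable   # placeholder image
```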
The findings include advice to cluster admins, such as not using both Role-Based Access Controls and Attribute-Based Access Controls because of the potential for inadvertent permission grants if one of these fails.
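In practice, that advice means picking a single authorizer and sticking to it — for example, starting the API server with `--authorization-mode=RBAC` (a standard kube-apiserver flag) and granting narrow permissions through RBAC objects. A hedged sketch, with placeholder names:

```yaml
# kube-apiserver started with --authorization-mode=RBAC (no ABAC policy file)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging          # placeholder namespace
  name: pod-reader
rules:
  - apiGroups: [""]           # "" selects the core API group
    resources: ["pods"]
    verbs: ["get", "list"]    # read-only: no create/update/delete
```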
They also include various recommendations and best practices for developers to follow as they continue making contributions to Kubernetes.
For example, one recommendation is to avoid hardcoding file paths to dependencies. The report points to Kubernetes’ kubelet process, “where a dependency on hardcoded paths for PID files led to a race condition which could allow an attacker to escalate privileges.”
The report also advises enforcing minimum file permissions, monitoring processes on Linux, and various other steps to make Kubernetes more secure.
In an email to The Register, Chris Aniszczyk, CTO and COO of CNCF, expressed satisfaction with the audit process. “We view it positively that the whole process of doing a security audit was handled transparently by the members of the Kubernetes Security Audit WG, from selecting a vendor to working with the upstream project,” he said. “I don’t know of any other open source organization that has shared and open sourced the whole process around a security audit and the results. Transparency builds trust in open source communities, especially around security.”
Asked how he’d characterize the risks present in Kubernetes at the moment, Aniszczyk said, “The Kubernetes developers responded quickly and created appropriate CVEs for critical issues. In the end, we would rather have the report speak for itself in terms of the findings and recommendations.”
According to a new report, Ring is also instructing cops on how to persuade customers to hand over surveillance footage even when they aren’t responsive to police requests.
According to a police memo obtained by Gizmodo and reported last week, Ring has partnerships with “over 225 law enforcement agencies,” and Ring is actively involved in scripting and approving how police communicate about those partnerships. As part of these relationships, Ring helps police obtain surveillance footage both by alerting customers in a given area that footage is needed and by asking to “share videos” with police. In a disclaimer included with the alerts, Ring claims that sharing the footage “is absolutely your choice.”
But according to documents and emails obtained by Motherboard, Ring also instructed police from two departments in New Jersey on how best to coax the footage out of Ring customers through its “neighborhood watch” app Neighbors in situations where police requests for video were not being met, including by providing police with templates for requests and by encouraging them to post often on the Neighbors app as well as on social media.
In one such email obtained by Motherboard, a Bloomfield Police Department detective requested advice from a Ring associate on how best to obtain videos after his requests were not being answered and further asked whether there was “anything that we can blast out to encourage Ring owners to share the videos when requested.”
In this email correspondence, the Ring associate informed the detective that a significant part of customer “opt in for video requests is based on the interaction law enforcement has with the community,” adding that the detective had done a “great job interacting with [community members] and this will be critical in regard to increased opt in rate.”
“The more users you have the more useful information you can collect,” the associate wrote.
Ring did not immediately return our request for comment about the practice of instructing police how to better obtain surveillance footage from its own customers. However, a spokesperson told Motherboard in a statement that the company “offers Neighbors app trainings and best practices for posting and engaging with app users for all law enforcement agencies utilizing the portal tool,” including by providing “templates and educational materials for police departments to utilize at their discretion.”
In addition to Gizmodo’s recent report that Ring is carefully controlling the messaging and implementation of its products with partner police departments, a report from GovTech on Friday claimed that Amazon is also helping police work around denied requests by customers to supply their Ring footage. In such instances, according to the report, police can approach Ring’s parent company Amazon, which can provide the footage that police deem vital to an investigation.
“If we ask within 60 days of the recording and as long as it’s been uploaded to the cloud, then Ring can take it out of the cloud and send it to us legally so that we can use it as part of our investigation,” Tony Botti, public information officer for the Fresno County Sheriff’s Office, told GovTech. When contacted by Gizmodo, however, a Ring spokesperson denied this.
Data breach researchers at security firm UpGuard found the data in late July, and traced the storage bucket back to a former staffer at the Democratic Senatorial Campaign Committee, an organization that seeks grassroots donations and contributions to help elect Democratic candidates to the U.S. Senate.
Following the discovery, UpGuard researchers reached out to the DSCC and the storage bucket was secured within a few hours. The researchers shared their findings exclusively with TechCrunch before publishing them.
The spreadsheet was titled “EmailExcludeClinton.csv” and was found in a similarly named unprotected Amazon S3 bucket without a password. The file was uploaded in 2010 — a year after former Democratic senator and presidential candidate Hillary Clinton, whom the data is believed to be named after, became secretary of state.
UpGuard said the data may be people “who had opted out or should otherwise be excluded” from the committee’s marketing.
A redacted portion of the email spreadsheet (Image: UpGuard/supplied)
Stewart Boss, a spokesperson for the DSCC, denied the data came from Sen. Hillary Clinton’s campaign and claimed the data had been created using the committee’s own information.
“A spreadsheet from nearly a decade ago that was created for fundraising purposes was removed in compliance with the stringent protocols we now have in place,” he told TechCrunch in an email.
Despite several follow-ups, the spokesperson declined to say how the email addresses were collected, where the information came from, what the email addresses were used for, how long the bucket was exposed, or if the committee knew if anyone else accessed or obtained the data.
We also contacted the former DSCC staffer who owned the storage bucket and allegedly created the database, but did not hear back.
Most of the email addresses were from consumer providers like AOL, Yahoo, Hotmail and Gmail, but the UpGuard researchers also found more than 7,700 U.S. government email addresses and 3,400 U.S. military email addresses.
The DSCC security lapse is the latest in a string of data exposures in recent years — some of which were also discovered by UpGuard. Two incidents in 2015 and 2017 exposed 191 million and 198 million Americans’ voter data, respectively, including voter profiles and political persuasions. Last year, 14 million voter records on Texas residents were also found on an exposed server.
The developers of cutesy Animal Crossing–Pokemon mashup Ooblets just had a weekend from hell. After trying to preempt a tidal wave of rage over their newly announced Epic Games Store exclusivity, they got hit with a swirling tsunami of foaming-at-the-mouth anger, up to and including death threats and anti-Semitic hoaxes. This is the worst overreaction to an Epic deal that’s yet been publicized. It’s also part of a larger trend that the video game industry has let run rampant for far too long.
Today, Ooblets designer Ben Wasser published a lengthy Medium post about the harassment that he and his sole teammate at development studio Glumberland, programmer/artist Rebecca Cordingley, have been subjected to. In it, he discussed in detail what he’s only alluded to before, showing numerous screenshots of threatening, often racist and sexist abuse and pointing to coordinated efforts to storm the Ooblets Discord and propagate fabricated messages that made it look like Wasser said anti-Semitic things about gamers. In part, he blamed the tone of his tongue-in-cheek announcement post for this, saying that while it’s the tone the Ooblets team has been using to communicate with fans since day one, it was a “stupid miscalculation on my part.”
It is, in no uncertain terms, insane to expect that anyone might have to deal with a reaction like this because of some slight snark in a post about what is to them very good news. Actually, let’s just sit with that last point for a second: If you’re a fan of Ooblets, the Epic Store announcement is fantastic news; no, you don’t get to play it on Steam, and yes, the Epic Store is a weird, janky ghost town of a thing that’s improving at an alarmingly slow rate, but thanks to Epic’s funding, Ooblets and the studio making it are now guaranteed to survive. Thrive, even, thanks to additional staff and resources. You’ve got to download another (free) client to play it, but you get the best possible version of the game you were looking forward to, and its creators get to keep eating, which is something that I’ve heard keeps people alive.
And yet, in reaction to this, people went ballistic, just like they have so many times before. This is our default now. Every tiny pinprick slight is a powder keg. Developers may as well have lit matches taped to their fingers, because any perceived “wrong” move is enough to set off an explosive consumer revolt. And make no mistake, the people going after Ooblets were not fans, as evidenced by the fact that, according to Wasser, they didn’t even know how the game’s Patreon worked. Instead, they were self-described “consumers” and “potential customers” who felt like the game’s mere existence granted them some impossibly huge stake in its future. Wasser talked about this in his post:
“We’ve been told nonstop throughout this about how we must treat ‘consumers’ or ‘potential customers’ a certain way,” he said. “I understand the relationship people think they might be owed when they exchange money for goods or services, but the people using the terms consumers and potential customers here are doing so specifically because we’ve never actually sold them anything and don’t owe them anything at all… Whenever I’ve mentioned that we, as random people happening to be making a game, don’t owe these other random people anything, they become absolutely enraged. Some of the most apparently incendiary screenshots of things I’ve said are all along these lines.”
We need to face facts: This kind of mentality is a major force in video game culture. This is what a large number of people believe, and they use it as a justification to carry out sustained abuse and harassment. “When presented with the reality of the damage inflicted, we’ve seen countless people effectively say ‘you were asking for it,’” said Wasser. “According to that logic, anything anyone says that could rub someone the wrong way is cause for the internet to try to ruin their life. Either that, or our role as two people who had the nerve to make a video game made us valid targets in their minds.”
Things reached this deranged fever pitch, in part, because companies kowtowed to an increasingly caustic and abusive consumer culture, frequently chalking explosive overreactions up to “passion” and other ostensibly virtuous qualities. This culture, to be fair, is not always out of line (see: loot boxes, exploitative pricing from big publishers, and big companies generally behaving in questionable ways), but it frequently takes aim at individuals who have no actual power and contains people who are not opposed to using reprehensible mob tactics to achieve their goals—or just straight up deploying consumer-related concerns as an excuse to heap abuse on people and groups they hate. While the concerns, targets, and participants are not always the same, it’s hard to ignore that many of these mob tactics were pioneered and refined on places like 4chan and 8chan, and by movements like Gamergate—other pernicious elements that the gaming industry has widely failed to condemn (and has even engaged with, in some cases).
In the world of PC gaming, Valve is the biggest example of a company that utterly failed to keep its audience in check. Valve spent years lingering in the shadows, resolutely remaining hands-off until everything caught on fire and even the metaphorical “This is fine” dog could no longer ignore the writing on the wall. Or the company got sued. In this environment, PC gamers developed an oppositional relationship with game makers. Groups sprung up to police what they perceived as sketchy games—but, inevitably, they ended up going after perfectly legitimate developers, too. Users flooded forums when they were upset about changes to games or political stances or whatever else, with Valve leaving moderation to often-understaffed development teams instead of putting its foot down against abuse. Review bombs became a viable tactic to tank games’ sales, and for a time, any game that ran afoul of the larger PC gaming consumer culture saw its score reduced to oblivion, with users dropping bombs over everything from pricing decisions to women and trans people in games.
Smaller developers, utterly lacking in systemic or institutional support, were forced to respond to these attacks, granting them credibility. The tactics worked, so people kept using them, their cause justified by the overarching idea that many developers are “lazy” and disingenuous—when, in reality, game development is mind-bogglingly difficult and takes time. Recently, Valve has begun to take aim at some of these issues, but the damage is already done.
Whether unknowingly or out of malice, Valve went on to fire the starting gun for this same audience to start giving Epic Store developers trouble. When publisher Deep Silver announced that Metro Exodus would be an Epic Store exclusive, Valve published a note on the game’s Steam store page calling the move “unfair.” Inevitably, Steam review bombs of previous games in the series followed, as did harassment of individual developers and even the author of the books on which the Metro video game series is based. Soon, this became a pattern when any relatively high-profile game headed toward Epic’s (at least temporarily) greener pastures.
That brings us to Ooblets. The game’s developers are facing astounding abuse over what is—in the grand scheme of life, or even just media platforms—a minor change of scenery. But they’re not backing down.
“I recognize that none of this post equates to an apology in any way that a lot of the mob is trying to obtain, and that’s by design,” Wasser wrote in his Medium post. “While some of what I’ve said was definitely bad for PR, I stand behind it. A portion of the gaming community is indeed horrendously toxic, entitled, immature, irrationally-angry, and prone to joining hate mobs over any inconsequential issue they can cook up. That was proven again through this entire experience. It was never my intention to alienate or antagonize anyone in our community who does not fit that description, and I hope that you can see my tone and pointed comments were not directed at you.”
And while Epic is, at the end of the day, an industry titan deserving of some of the scrutiny that gets hurled its way, it’s at least taking a stand instead of washing its hands of the situation like Valve and other big companies have for so long.
“The announcement of Ooblets highlighted a disturbing trend which is growing and undermining healthy public discourse, and that’s the coordinated and deliberate creation and promotion of false information, including fake screenshots, videos, and technical analysis, accompanied by harassment of partners, promotion of hateful themes, and intimidation of those with opposing views,” Epic said in a statement yesterday, concluding that it plans to “steadfastly support our partners throughout these challenges.”
So far, it seems like the company has been true to its word. “A lot of companies would’ve left us to deal with all of this on our own, but Epic has been by our side as our world has gone sideways,” said Wasser. “The fact that they care so much about a team and game as small as us proves to us that we made the right call in working with them, and we couldn’t be more thankful.”
That’s a step in the right direction, and hopefully one that other companies will follow. But the gaming industry has allowed this problem to grow and grow and grow over the course of many years, and it’s hard to see a future in which blowups like this don’t remain a regular occurrence. In his post, Wasser faced this sad reality.
“I hope that laying all this out helps in some way to lessen what pain is brought against whoever the next targets are, because we sadly know there will be many,” he said. “You should have opinions, disagree with things, make arguments, but don’t try to ruin people’s lives or jump on the bandwagon when it’s happening. What happened to us is the result of people forgetting their humanity for the sake of participating in video game drama. Please have a little perspective before letting your mild annoyance lead to deeply hurting a fellow human being.”
Twee T-shirts ‘n’ merch purveyor CafePress had 23 million user records swiped – reportedly back in February – and this morning triggered a mass password reset, calling it a change in internal policy.
Details of the security breach emerged when infosec researcher Troy Hunt’s Have I Been Pwned service – which lists websites known to have been hacked, allowing people to check if their information has been stolen – began firing out emails to affected people in the small hours of this morning.
According to HIBP, a grand total of 23,205,290 CafePress customers’ data was swiped by miscreants, including email addresses, names, phone numbers, and physical addresses.
We have asked CafePress to explain itself and will update this article if the company responds. At the time of writing, there was nothing on its UK or US websites to indicate that the firm had acknowledged any breach.
[…]
Musing on the 77 per cent of email addresses from the breach that had been seen in previous HIBP reports, Woodward said that factoid “brings me to a problem that isn’t being discussed that much, and which this kind of breach does highlight: the use of email as the user name. It’s clearly meant to make life easier for users, but the trouble is once hackers know an email has been used as a username in one place it is instantly useful for mounting credential-stuffing attacks elsewhere.”
“I wonder,” he told The Register, “if we shouldn’t be using unique usernames and passwords for each site. However, it would mean that it becomes doubly difficult to keep track of your credentials, especially if you’re using different strong passwords for each site, which I hope they are. But all users need do is start using a password manager, which I really wish they would.”
Last week, online sneaker-trading platform StockX asked its users to reset their passwords due to “recently completed system updates on the StockX platform.” In actuality, the company suffered a large data breach back in May, and only finally came clean about it when pressed by reporters who had access to some of the leaked data.
In other words, StockX lied. And while it disclosed details on the breach in the end, there’s still no explanation for why it took StockX so long to figure out what happened, nor why the company felt the need to muddy the situation with its suspicious password-reset email last week.
While most companies are fairly responsible about security disclosures, there’s no question that plenty would prefer if information about massive security breaches affecting them never hit the public eye. And even when companies have to disclose the details of a breach, they can get cagey—as we saw with Capital One’s recent problems.
Sadly it’s partially understandable, considering the lawsuit shotguns brought to bear on companies following disclosure.
Having said that, many of the disclosures are the results of really really stupid mistakes, such as storing credentials in plain text and not securing AWS buckets.
Amazon constantly scans rivals’ prices to see if they’re lower. When it discovers a product is cheaper on, say, Walmart.com, Amazon alerts the company selling the item and then makes the product harder to find and buy on its own marketplace — effectively penalizing the merchant. In many cases, the merchant opts to raise the price on the rival site rather than risk losing sales on Amazon.
Pricing alerts reviewed by Bloomberg show Amazon doesn’t explicitly tell sellers to raise prices on other sites, and the goal may be to push them to lower their prices on Amazon. But in interviews, merchants say they’re so hemmed in by rising costs levied by Amazon and reliant on sales on its marketplace, that they’re more likely to raise their prices elsewhere.
Antitrust experts say the Amazon policy is likely to attract scrutiny from Congress and the Federal Trade Commission, which recently took over jurisdiction of the Seattle-based company. So far, criticism of Amazon’s market power has centered on whether it mines merchants’ sales data to launch competing products and then uses its dominance to make the original product harder to find on its marketplace. Harming consumers by prompting merchants to raise prices on other sites more neatly fits the traditional definition of antitrust behavior in the U.S.
“Monopolization charges are always about business conduct that causes harm in a market,” said Jennifer Rie, an analyst at Bloomberg Intelligence who specializes in antitrust litigation. “It could end up being considered illegal conduct because people who prefer to shop on Walmart end up having to pay a higher price.”
[…]
Online merchants typically sell their products on multiple websites, including Amazon, EBay Inc. and Walmart Inc., which also removes products with “highly uncompetitive” prices compared with those on other sites. But merchants often generate most of their revenue on Amazon, which now accounts for almost 40% of online sales in the U.S., according to EMarketer.
Merchants have long complained that Amazon wields outsize influence over their businesses. Besides paying higher fees, many now have to buy advertising to stand out on the increasingly cluttered site. Some report giving Amazon 40% or more of each transaction, up from 20% a few years ago.
[…]
Amazon began sending the price alerts in 2017, and merchants say they have increased in frequency amid an intensifying price war between Amazon and Walmart. Merchants receive the alerts via a web platform they use to manage their Amazon businesses. The alerts show the product, the price on Amazon and the price found elsewhere on the web. They don’t name the competing site with a lower price; the merchants must find that themselves.
A typical pricing alert reads: “One or more of your offers is currently ineligible for being a featured offer on the product detail page because those items are priced higher on Amazon than at other retailers.”
In plain English, that means merchants lose the prominent “buy now” button that simplifies shopping on Amazon. With that icon missing, shoppers can still buy the products, but it’s a more tedious and unfamiliar process, which can hurt sales.
[…]
“Amazon is in control of the price, not the merchant,” said Boyce, who runs Avenue 7 Media.
Molson Hart, who sells toys online through his company Viahart, typifies the challenge. Hart says more than 98% of his $4 million in 2018 sales came from Amazon even though he also sells his products on EBay, Walmart and his own website. He was trying to sell a toy stuffed tiger for $150 on Amazon. Hart designs, manufactures, imports, stores and ships the item to customers; Amazon would get $40 for listing some photographs on its website, handling the payment and charging Hart to advertise the product on the site.
Hart said he could sell the product for about $40 less on his own website, but won’t, since that would jeopardize his sales on Amazon due to its pricing enforcement. “If we sell our products for less on channels outside Amazon and Amazon detects this, our products will not appear as prominently in search,” he wrote in a recent article on Medium. Hart has since lowered the price of the tigers on Amazon and is now selling them at a loss.
Amazon used to require that merchants offer their best prices on Amazon as terms for selling on the site, but the agreement attracted the attention of regulators bent on ensuring competition. Amazon removed the requirement for sellers in Europe in 2013 following investigations and quietly removed the requirement without explanation for U.S. sellers in March shortly after Democratic presidential hopeful Senator Elizabeth Warren announced a goal of breaking up Amazon and other big tech companies.
[…]
Michael Kades, a former FTC attorney who now researches antitrust issues at the Washington Center for Equitable Growth, says the price alerts will almost certainly draw the government’s attention. “If regulators can prove that this conduct is causing merchants to raise prices on other platforms,” he said, “Amazon loses the argument that their policies are all about giving everyone lower prices.”
As I say in my talk “Break it Up!”, monopolistic behaviour is about a lot more than just pricing – precisely this sort of anti-competitive pressure on third parties is one of the more mafia-style varieties.
Trendy online-only Brit bank Monzo is telling hundreds of thousands of its customers to pick a new PIN – after it discovered it was storing their codes as plain-text in log files.
As a result, 480,000 folks, a fifth of the bank’s customers, now have to go to a cash machine and reset their PINs.
The bank said the numbers, normally tightly secured with extremely limited access, had accidentally been kept in an encrypted-at-rest log file. The contents of those logs were, however, accessible to roughly 100 Monzo engineers who would normally have neither the clearance nor any need to see customer PINs.
The PINs were logged for punters who had used the “card number reminder” and “cancel a standing order” features.
To hear Monzo tell it, the misconfigured logs, along with the PINs, were discovered on Friday evening. By Saturday morning, the UK bank updated its mobile app so that no new PINs were sent to the log collector. On Monday, the last of the logged data had been deleted.
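The underlying failure mode is generic: log lines that carry sensitive fields all the way to a log collector. A minimal sketch of the kind of redaction filter that guards against this, using Python’s standard `logging` module — this is illustrative only, not Monzo’s actual stack, and the field names are hypothetical:

```python
import logging
import re

# Hypothetical names of fields that must never reach a log sink.
SENSITIVE_KEYS = ["pin", "password", "card_number"]

class RedactingFilter(logging.Filter):
    """Scrub sensitive key=value pairs out of log records before emission."""

    PATTERN = re.compile(
        r"\b(" + "|".join(SENSITIVE_KEYS) + r")\s*=\s*\S+",
        re.IGNORECASE,
    )

    def filter(self, record):
        # Rewrite the message in place, masking the value but keeping the key.
        record.msg = self.PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record; only its sensitive values are masked

logger = logging.getLogger("payments")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RedactingFilter())

logger.warning("card reminder requested, pin=1234 user=alice")
# emits: card reminder requested, pin=[REDACTED] user=alice
```

Scrubbing at the logging layer is only a backstop; the stronger fix — the one Monzo applied by updating its app — is to keep secrets like PINs out of code paths that log at all.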