Consumer printer makers have long used the razor-blade business model, so named because companies sell razor handles cheaply but price the compatible replacement blades much higher.
[…]
The advent of smartphones and social media has made sharing photos digitally much easier, which means consumers are printing photos less and less. That has had an effect on the profitability of home printers.
[…]
Leacraft, the named plaintiff in a class-action complaint against Canon filed in a U.S. federal court in New York last week, found that their Canon Pixma MG6320 all-in-one printer would no longer scan or fax documents when it was out of ink, despite neither function requiring any printing at all. According to Bleeping Computer, the issue dates back to at least 2016, when other customers reported the same problem through Canon’s online forums and were told by the company’s support staff that all the ink cartridges must be installed and contain ink to use any of the printer’s features.
[…]
The complaint points out that Canon promotes its all-in-one printers as having multiple distinct features, including printing, copying, scanning, and sometimes even faxing, but without any warnings that those features are dependent on sufficient levels of ink being available.
There are two classes of merchant on Amazon.com: those who get special protection from counterfeiters and those who don’t. From a report: The first category includes sellers of some big-name brands, such as Adidas, Apple and even Amazon itself. They benefit from digital fortifications that prevent unauthorized sellers from listing certain products — an iPhone, say, or eero router — for sale. Many lesser-known brands belong to the second group and have no such shield. Fred Ruckel, inventor of a popular cat toy called the Ripple Rug, is one of those sellers. A few months ago, knockoff artists began selling versions of his product, siphoning off tens of thousands of dollars in sales and forcing him to spend weeks trying to have the interlopers booted off the site.
Amazon’s marketplace has long been plagued with fakes, a scourge that has made household names like Nike leery of putting their products there. While most items can be uploaded freely to the site, Amazon by 2016 had begun requiring would-be sellers of a select group of products to get permission to list them. The company doesn’t publicize the program, but in the merchant community it has become known as “brand gating.” Of the millions of products sold on Amazon, perhaps thousands are afforded this kind of protection, people who advise sellers say. Most merchants, many of them small businesses, rely on Amazon’s algorithms to ferret out fakes before they appear — an automated process that dedicated scammers have managed to evade.
The wait is over. It’s now possible to encrypt your WhatsApp chat history on both Android and iOS, Facebook CEO Mark Zuckerberg announced on Thursday. The company plans to roll out the feature slowly to ensure it can deliver a consistent and reliable experience to all users.
However, once you can access the feature, it will allow you to secure your backups before they hit iCloud or Google Drive. At that point, neither WhatsApp nor your cloud service provider will be able to access the files. It’s also worth mentioning you won’t be able to recover your backups if you ever lose the 64-digit encryption key that secures your chat logs. That said, it’s also possible to secure your backups behind a password, in which case you can recover that if you ever lose it.
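Password-based recovery schemes like the one described typically derive the encryption key from the password with a key-derivation function, so the same password always reproduces the same key. The sketch below is a generic illustration using PBKDF2; it is not WhatsApp's actual implementation, which uses additional server-side safeguards.

```python
import hashlib
import os

def derive_backup_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit key from a password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
key = derive_backup_key("correct horse battery staple", salt)
assert len(key) == 32  # 256-bit key

# The same password and salt always reproduce the same key, which is
# what makes recovery possible when the password route is chosen.
assert key == derive_backup_key("correct horse battery staple", salt)
```

By contrast, a raw 64-digit key has no such fallback: lose it, and nothing can regenerate it.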
While WhatsApp has allowed users to securely message each other since 2016, it only started testing encrypted backups earlier this year. With today’s announcement, the company said it has taken the final step toward providing a full end-to-end encrypted messaging experience.
It’s worth pointing out that end-to-end encryption doesn’t guarantee your privacy will be fully protected. According to a report The Information published in August, Facebook was looking into an AI that could analyze encrypted data without having to decrypt it, so that it could serve ads based on that information. The head of WhatsApp denied the report, but it’s a reminder that there’s more to privacy than merely the existence of end-to-end encryption.
More than 240 metro stations across Moscow now allow passengers to pay for a ride by looking at a camera. The Moscow metro has launched what authorities say is the first mass-scale deployment of a facial recognition payment system. According to The Guardian, passengers can access the payment option called FacePay by linking their photo, bank card and metro card to the system via the Mosmetro app. “Now all passengers will be able to pay for travel without taking out their phone, Troika or bank card,” Moscow mayor Sergey Sobyanin tweeted.
In the official Moscow website’s announcement, the city’s Department of Transport said all Face Pay information will be encrypted. The cameras at the designated turnstiles will read a passenger’s biometric key only, and authorities said information collected for the system will be stored in data centers that can only be accessed by interior ministry staff. Moscow’s Department of Information Technology has also assured users that photographs submitted to the system won’t be handed over to the cops.
Still, privacy advocates are concerned over the growing use of facial recognition in the city. Back in 2017, officials added facial recognition tech to the city’s 170,000 security cameras as part of its efforts to ID criminals on the street. Activists filed a case against Moscow’s Department of Technology a few years later in hopes of convincing the courts to ban the use of the technology. However, a court in Moscow sided with the city, deciding that its use of facial recognition does not violate the privacy of citizens. Reuters reported earlier this year, though, that those cameras were also used to identify protesters who attended rallies.
Stanislav Shakirov, the founder of Roskomsvoboda, a group that aims to protect Russians’ digital rights, said in a statement:
“We are moving closer to authoritarian countries like China that have mastered facial technology. The Moscow metro is a government institution and all the data can end up in the hands of the security services.”
Meanwhile, the European Parliament called on lawmakers in the EU earlier this month to ban automated facial recognition in public spaces. It cited evidence that facial recognition AI still misidentifies people of color, members of the LGBTI+ community, seniors, and women at higher rates. In the US, local governments are banning the use of the technology in public spaces, including statewide bans by Massachusetts and Maine. Four Democratic lawmakers have also proposed a bill to ban the federal government from using facial recognition.
Chinese astronauts began their six-month mission on China’s first permanent space station Saturday, after their spacecraft successfully docked with it.
The astronauts, two men and a woman, were seen floating around the module before speaking via a live-streamed video.
[…]
The space travelers’ Shenzhou-13 spacecraft was launched by a Long March-2F rocket at 12:23 a.m. Saturday and docked with the Tianhe core module of the space station at 6:56 a.m.
The three astronauts entered the station’s core module at about 10 a.m., the China Manned Space Agency said.
They are the second crew to move into China’s Tiangong space station, which was launched last April. The first crew stayed three months.
[…]
The crew will do three spacewalks to install equipment in preparation for expanding the station, assess living conditions in the Tianhe module, and conduct experiments in space medicine and other fields.
China’s military-run space program plans to send multiple crews to the station over the next two years to make it fully functional.
When completed with the addition of two more sections—named Mengtian and Wentian—the station will weigh about 66 tons, much smaller than the International Space Station, which launched its first module in 1998 and weighs around 450 tons.
A Missouri politician has been relentlessly mocked on Twitter after demanding the prosecution of a journalist who found and responsibly reported a vulnerability in a state website.
Mike Parson, governor of Missouri, described reporters for local newspaper the St. Louis Post-Dispatch (SLPD) as “hackers” after they discovered a web app for the state’s Department of Elementary and Secondary Education was leaking teachers’ private information.
Around 100,000 Social Security numbers were exposed when the web app was loaded in a user’s browser. The public-facing app was intended to let local schools check teachers’ professional registration status. So that users could distinguish between different teachers with the same name, it accepted the last four digits of a teacher’s Social Security number as a valid search string.
It appears that in the background, the app was retrieving the entire social security number and exposing it to the end user.
The SLPD discovered this by viewing a search results page’s source code. “View source” has been a common feature of web browsers for years, typically available by right-clicking anywhere on a webpage and selecting it from a menu.
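To make the failure mode concrete, here is a hypothetical reconstruction (not the actual Missouri app) of how a page can display only the last four digits while the full number sits in the markup that “View source” exposes:

```python
import re

# Hypothetical markup: the server embeds the full SSN in an attribute
# while the visible text shows only the last four digits.
html = """
<div class="teacher" data-ssn="123-45-6789">
  <span class="name">J. Doe</span>
  <span class="ssn-display">***-**-6789</span>
</div>
"""

# What the rendered page displays:
shown = re.search(r'ssn-display">([^<]+)<', html).group(1)
# What "View source" reveals:
leaked = re.search(r'data-ssn="([^"]+)"', html).group(1)

print(shown)   # ***-**-6789
print(leaked)  # 123-45-6789
```

The fix is equally simple: the server should never send the full number to the browser in the first place.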
SLPD reporters told the Missouri Department of Education about the flaw and held off publicising it so officials could fix it – but that wasn’t good enough for the governor.
“The state is committed to bring to justice anyone who hacked our system and anyone who aided and abetted them to do so,” Parson said, according to the Missouri Independent news website. He justified his bizarre outburst by saying the SLPD was “attempting to embarrass the state and sell headlines for their news outlet.”
After two years of offering car insurance to drivers across California, Tesla’s officially bringing a similar offering to clientele in its new home state of Texas. As Electrek first reported, the big difference between the two is how drivers’ premiums are calculated: in California, the prices were largely determined by statistical evaluations. In Texas, your insurance costs will be calculated in real-time, based on your driving behavior.
Tesla says it grades this behavior using the “Safety Score” feature—the in-house metric designed by the company to estimate a driver’s chance of a future collision. These scores were recently rolled out to screen drivers interested in testing Tesla’s “Full Self Driving” software, which, like the Safety Score itself, is currently in beta. And while the self-driving software release date is, um, kind of up in the air for now, Tesla drivers in the Lone Star State can use their safety score to apply for quotes on Tesla’s website as of today.
As Tesla points out in its own documents, relying on a single score makes the company a bit of an outlier in the car insurance market. Most traditional insurers round up a driver’s costs based on a number of factors that are wholly unrelated to their actual driving: depending on the state, this can include age, gender, occupation, and credit score, all playing a part in defining how much a person’s insurance might cost.
Tesla, on the other hand, relies on a single score, which the company says gets tallied up based on five factors: the number of forward-collision warnings you get every 1,000 miles, the number of times you “hard brake,” how often you take too-fast turns, how closely you drive behind other drivers, and how often you take your hands off the wheel when Autopilot is engaged.
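To illustrate how a multi-factor score like this might collapse into a single number, here is a sketch; the factor names come from the article, but the weights and formula are invented for illustration and are not Tesla's actual model.

```python
# Illustrative only: hypothetical weights, not Tesla's published formula.
def safety_score(fcw_per_1000mi, hard_brakes, aggressive_turns,
                 unsafe_following_pct, forced_autopilot_disengagements):
    """Combine five driving-behavior factors into a 0-100 score."""
    penalty = (1.5 * fcw_per_1000mi
               + 2.0 * hard_brakes
               + 2.0 * aggressive_turns
               + 0.5 * unsafe_following_pct
               + 5.0 * forced_autopilot_disengagements)
    return max(0, min(100, round(100 - penalty)))

print(safety_score(0, 0, 0, 0, 0))    # 100: no risky events observed
print(safety_score(4, 3, 2, 10, 1))   # 74: penalties accumulate
```

The practical upshot for pricing is that every risky event directly moves the premium, unlike demographic proxies that a driver cannot change.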
A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it’s possible to use Facebook’s targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook’s platform assigns them.
The paper — entitled “Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data” — describes a “data-driven model” that defines a metric showing the probability a Facebook user can be uniquely identified based on interests attached to them by the ad platform.
The researchers demonstrate that they were able to use Facebook’s Ads manager tool to target a number of ads in such a way that each ad only reached a single, intended Facebook user.
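A toy simulation conveys the paper's core idea: as you stack up interests in a targeting filter, the matching audience shrinks rapidly, often all the way down to one person. The population sizes and interest counts below are arbitrary, not the paper's figures.

```python
import random

# Arbitrary toy parameters, not the paper's dataset.
random.seed(7)
N_USERS, N_INTERESTS = 100_000, 200

# Assign each simulated user a random set of 12 interests.
users = [frozenset(random.sample(range(N_INTERESTS), 12)) for _ in range(N_USERS)]

def audience_size(target_interests):
    """Count users whose interest set contains every targeted interest."""
    return sum(1 for u in users if target_interests <= u)

victim = users[0]
for k in (2, 4, 6, 8):
    combo = frozenset(list(victim)[:k])
    print(f"{k} interests -> audience of {audience_size(combo)}")
```

In this simulation, as in the paper, a handful of combined interests is typically enough to single out one user from a large population.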
The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and an ability to run highly complex image processing tasks all on-board, freeing the host from any of the overhead involved.
The OAK-D Lite camera is actually several elements together in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and onboard AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities together into a unified whole.
The goal is to give embedded systems access to human-like visual perception in real-time, which at its core means detecting things, and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real-time.
So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.
Authorities in the United Arab Emirates have requested the US Department of Justice’s help in probing a case involving a bank manager who was swindled into transferring $35m to criminals by someone using a fake AI-generated voice.
The employee received a call to move the company-owned funds by someone purporting to be a director from the business. He also previously saw emails that showed the company was planning to use the money for an acquisition, and had hired a lawyer to coordinate the process. When the sham director instructed him to transfer the money, he did so thinking it was a legitimate request.
But it was all a scam, according to US court documents reported by Forbes. The criminals used “deep voice technology to simulate the voice of the director,” the filing said. Now officials from the UAE have asked the DoJ to hand over details of two US bank accounts where over $400,000 of the stolen money was deposited.
Investigators believe there are at least 17 people involved in the heist.
Amazon.com Inc has been repeatedly accused of knocking off products it sells on its website and of exploiting its vast trove of internal data to promote its own merchandise at the expense of other sellers. The company has denied the accusations.
But thousands of pages of internal Amazon documents examined by Reuters – including emails, strategy papers and business plans – show the company ran a systematic campaign of creating knockoffs and manipulating search results to boost its own product lines in India, one of the company’s largest growth markets.
The documents reveal how Amazon’s private-brands team in India secretly exploited internal data from Amazon.in to copy products sold by other companies, and then offered them on its platform. The employees also stoked sales of Amazon private-brand products by rigging Amazon’s search results so that the company’s products would appear, as one 2016 strategy report for India put it, “in the first 2 or three … search results” when customers were shopping on Amazon.in.
Among the victims of the strategy: a popular shirt brand in India, John Miller, which is owned by a company whose chief executive is Kishore Biyani, known as the country’s “retail king.” Amazon decided to “follow the measurements of” John Miller shirts down to the neck circumference and sleeve length, the document states.
An Israeli researcher has demonstrated that LAN cables’ radio frequency emissions can be read by using a $30 off-the-shelf setup, potentially opening the door to fully developed cable-sniffing attacks.
Mordechai Guri of Israel’s Ben Gurion University of the Negev described the disarmingly simple technique to The Register: put an ordinary radio antenna up to four metres from a Category 6A Ethernet cable and use an off-the-shelf software-defined radio (SDR) to listen at around 250 MHz.
“From an engineering perspective, these cables can be used as antennas and used for RF transmission to attack the air-gap,” said Guri.
His experimental technique consisted of slowing UDP packet transmissions over the target cable to a very low speed and then transmitting single letters of the alphabet. The cable’s radiations could then be picked up by the SDR (in Guri’s case, both an R820T2-based tuner and a HackRF unit) and, via a simple algorithm, be turned back into human-readable characters.
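Conceptually, this is on-off keying: the presence or absence of cable traffic in a time slot encodes a bit, and eight bits make a character. The sketch below illustrates that encode/decode round trip in software; it is our own simplified illustration, not Guri's actual code.

```python
# Conceptual sketch of LANtenna-style encoding (not Guri's implementation):
# a '1' bit is a burst of UDP traffic on the cable, a '0' is silence,
# which the SDR-side receiver observes as RF on/off keying.
def to_ook_schedule(message: str, bit_time: float = 0.1):
    """Turn a message into (level, duration) pairs for on-off keying."""
    schedule = []
    for ch in message:
        for bit in format(ord(ch), "08b"):  # 8 bits per character
            schedule.append((int(bit), bit_time))
    return schedule

def from_ook_schedule(schedule):
    """Invert the schedule back into text (the decoder's job)."""
    bits = "".join(str(level) for level, _ in schedule)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

sched = to_ook_schedule("HI")
print(from_ook_schedule(sched))  # HI
```

The real attack's hard part is not this framing but reliably detecting the cable's faint radiation amid environmental noise, which is what limits the practical bit rate.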
Nicknamed LANtenna, Guri’s technique is an academic proof of concept and not a fully fledged attack that could be deployed today. Nonetheless, the research shows that poorly shielded cables have the potential to leak information that sysadmins may have believed was secure or otherwise air-gapped from the outside world.
He added that his setup’s $1 antenna was a big limiting factor and that specialised antennas could well reach “tens of metres” of range.
“We could transmit both text and binary, and also achieve faster bit-rates,” acknowledged Guri when El Reg asked about the obvious limitations described in his paper [PDF]. “However, due to environmental noises (e.g. from other cables) higher bit-rate are rather theoretical and not practical in all scenarios.”
When asked in July, 2020, by US Representative Pramila Jayapal (D-WA) whether Amazon ever mined data from its third-party vendors to launch competing products, founder and then CEO Jeff Bezos said he couldn’t answer “yes” or “no,” but insisted Amazon had rules disallowing the practice.
“What I can tell you is we have a policy against using seller-specific data to aid our private label business but I can’t guarantee that policy has never been violated,” Bezos said.
According to documents obtained by Reuters, Amazon’s employees in India flouted that policy by copying the products of Amazon marketplace sellers for its in-house brands and then manipulating search results on Amazon’s website to place its knockoffs at the top of search results lists.
“The documents reveal how Amazon’s private-brands team in India secretly exploited internal data from Amazon.in to copy products sold by other companies, and then offered them on its platform,” said Reuters reporters Aditya Kalra and Steve Stecklow in a report published on Wednesday.
“The employees also stoked sales of Amazon private-brand products by rigging Amazon’s search results so that the company’s products would appear, as one 2016 strategy report for India put it, ‘in the first 2 or three … search results’ when customers were shopping on Amazon.in.”
Last year, the Wall Street Journal published similar allegations that the company used third-party merchant data to develop competing products, which prompted Rep. Jayapal’s question to Bezos. Such claims are central to the ongoing antitrust investigations of Amazon being conducted in the US, Europe, and India.
Load up the website This Person Does Not Exist and it’ll show you a human face, near-perfect in its realism yet totally fake. Refresh and the neural network behind the site will generate another, and another, and another. The endless sequence of AI-crafted faces is produced by a generative adversarial network (GAN) — a type of AI that learns to produce realistic but fake examples of the data it is trained on. But such generated faces — which are starting to be used in CGI movies and ads — might not be as unique as they seem. In a paper titled This Person (Probably) Exists (PDF), researchers show that many faces produced by GANs bear a striking resemblance to actual people who appear in the training data. The fake faces can effectively unmask the real faces the GAN was trained on, making it possible to expose the identity of those individuals. The work is the latest in a string of studies that call into doubt the popular idea that neural networks are “black boxes” that reveal nothing about what goes on inside.
To expose the hidden training data, Ryan Webster and his colleagues at the University of Caen Normandy in France used a type of attack called a membership attack, which can be used to find out whether certain data was used to train a neural network model. These attacks typically take advantage of subtle differences between the way a model treats data it was trained on — and has thus seen thousands of times before — and unseen data. For example, a model might identify a previously unseen image accurately, but with slightly less confidence than one it was trained on. A second, attacking model can learn to spot such tells in the first model’s behavior and use them to predict when certain data, such as a photo, is in the training set or not.
Such attacks can lead to serious security leaks. For example, finding out that someone’s medical data was used to train a model associated with a disease might reveal that this person has that disease. Webster’s team extended this idea so that instead of identifying the exact photos used to train a GAN, they identified photos in the GAN’s training set that were not identical but appeared to portray the same individual — in other words, faces with the same identity. To do this, the researchers first generated faces with the GAN and then used a separate facial-recognition AI to detect whether the identity of these generated faces matched the identity of any of the faces seen in the training data. The results are striking. In many cases, the team found multiple photos of real people in the training data that appeared to match the fake faces generated by the GAN, revealing the identity of individuals the AI had been trained on.
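The confidence-gap mechanism behind a basic membership attack can be shown in a few lines. The "model" below is a stand-in that exaggerates the effect for clarity: it reports higher confidence on examples it memorised during training, which is exactly the tell the attacker thresholds on.

```python
# Minimal sketch of confidence-based membership inference.
# The model here is a stand-in, not a real neural network.
TRAINING_SET = {"photo_A", "photo_B", "photo_C"}

def model_confidence(example: str) -> float:
    # Stand-in behaviour: trained-on examples get suspiciously high confidence.
    return 0.99 if example in TRAINING_SET else 0.80

def membership_attack(example: str, threshold: float = 0.95) -> bool:
    """Guess whether `example` was in the training set."""
    return model_confidence(example) > threshold

print(membership_attack("photo_A"))  # True: likely in the training set
print(membership_attack("photo_X"))  # False: likely unseen
```

Against a real model the gap is far subtler, so practical attacks train a second model to detect it, but the principle is the same.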
This platform, which is called Kinetic Soul, uses Posenet computer vision to track a dancer’s movements. Posenet detects the dancer’s joints and creates a point map to determine what body parts are moving where, and at what speed. Then the system translates and transmits the movements to the 32 pins on the surface, creating a touchable picture of what’s going on. Each 3D-printed pin is controlled with a solenoid, all of which are driven by a single Arduino.
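The core transformation here is quantizing continuous joint coordinates onto a small pin grid. The sketch below illustrates one way to do that; the 32-pin count comes from the article, but the 8×4 grid layout and mapping logic are our own illustration, not [Shi Yun]'s code.

```python
# Illustrative mapping of pose joints onto a 32-pin (8x4) tactile surface.
# The grid shape is an assumption; only the pin count comes from the article.
COLS, ROWS = 8, 4

def joints_to_pins(joints):
    """Map normalized (x, y) joint positions in [0, 1] onto a pin grid."""
    pins = [[0] * COLS for _ in range(ROWS)]
    for x, y in joints:
        col = min(int(x * COLS), COLS - 1)
        row = min(int(y * ROWS), ROWS - 1)
        pins[row][col] = 1  # raise the pin under this joint
    return pins

# Two joints: one near the top-left, one near the bottom-right.
frame = joints_to_pins([(0.05, 0.1), (0.95, 0.9)])
print(frame[0][0], frame[3][7])  # 1 1
```

Each raised cell would then drive one solenoid, with the Arduino sequencing the frames as the dance unfolds.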
We think it’s interesting that Kinetic Soul can speak to the user in two different languages. The first is more about the overall flow of a dance, and the second delves into the deconstructed details. Both methods allow for dances to be enjoyed in real time, or via video recording. So how does one deconstruct dance? [Shi Yun] turned to Laban Movement Analysis, which breaks up human locomotion into four broad categories: the body in relation to itself, the effort expended to move, the shapes assumed, and the space used.
[Shi Yun] has been user-testing their ideas at dance workshops for the visually impaired throughout the entire process — this is how they arrived at having two haptic languages instead of one. They plan to continue getting input as they work to fortify the prototype, improve the touch experience, and refine the haptic languages. Check out the brief demonstration video after the break.
Yes indeed, dance is a majestic way of expressing all kinds of things. Think you have no use for interpretive dance? Think again — it can help you understand protein synthesis in an amusing way.
Daily exposure to phthalates, a group of chemicals used in everything from plastic containers to makeup, may lead to approximately 100,000 deaths in older Americans annually, a study from New York University warned Tuesday.
The chemicals, which can be found in hundreds of products such as toys, clothing and shampoo, have been known for decades to be “hormone disruptors,” affecting a person’s endocrine system.
The toxins can enter the body through such items and are linked to obesity, diabetes and heart disease, said the study published in the journal Environmental Pollution.
The research, which was carried out by New York University’s Grossman School of Medicine and includes some 5,000 adults aged 55 to 64, shows that those with higher concentrations of phthalates in their urine were more likely to die of heart disease.
It’s not a jailbreak, but [basti564]’s Oculess software nevertheless makes it possible to remove telemetry and account dependencies from Facebook’s Oculus Quest VR headsets. It is not normally possible to use these devices without a valid Facebook account (or a legacy Oculus account in the case of the original Quest), so the ability to flip any kind of disconnect switch without bricking the hardware is a step forward, even if there are a few caveats to the process.
To be clear, the Quest devices still require normal activation and setup via a Facebook account. But once that initial activation is complete, Oculess allows one the option of disabling telemetry or completely disconnecting the headset from its Facebook account.
A woman allegedly hacked into the systems of a flight training school in Florida to delete and tamper with information related to the school’s airplanes. In some cases, planes that previously had maintenance issues had been “cleared” to fly, according to a police report. The hack, according to the school’s CEO, could have put pilots in danger.
Lauren Lide, a 26-year-old who used to work for the Melbourne Flight Training school, resigned from her position of Flight Operations Manager at the end of November of 2019, after the company fired her father. Months later, she allegedly hacked into the systems of her former company, deleting and changing records, in an apparent attempt to get back at her former employer, according to court records obtained by Motherboard.
A new study by a team of university researchers in the UK has unveiled a host of privacy issues that arise from using Android smartphones.
The researchers have focused on Samsung, Xiaomi, Realme, and Huawei Android devices, and LineageOS and /e/OS, two forks of Android that aim to offer long-term support and a de-Googled experience.
The conclusion of the study is worrying for the vast majority of Android users.
With the notable exception of /e/OS, even when minimally configured and the handset is idle these vendor-customized Android variants transmit substantial amounts of information to the OS developer and also to third parties (Google, Microsoft, LinkedIn, Facebook, etc.) that have pre-installed system apps. – Researchers.
As the summary table indicates, sensitive user data like persistent identifiers, app usage details, and telemetry information are not only shared with the device vendors, but also go to various third parties, such as Microsoft, LinkedIn, and Facebook.
Summary of collected data
Source: Trinity College Dublin
And to make matters worse, Google appears at the receiving end of all collected data almost across the entire table.
No way to “turn it off”
It is important to note that this concerns the collection of data for which there’s no option to opt-out, so Android users are powerless against this type of telemetry.
This is particularly concerning when smartphone vendors include third-party apps that are silently collecting data even if they’re not used by the device owner, and which cannot be uninstalled.
For some of the built-in system apps like miui.analytics (Xiaomi), Heytap (Realme), and Hicloud (Huawei), the researchers found that the supposedly encrypted data could sometimes be decoded, putting it at risk of man-in-the-middle (MitM) attacks.
Volume of data (KB/h) transmitted by each vendor
Source: Trinity College Dublin
As the study points out, even if the user resets the advertising identifiers for their Google Account on Android, the data-collection system can trivially re-link the new ID back to the same device and append it to the original tracking history.
The deanonymisation of users takes place using various methods, such as looking at the SIM, IMEI, location data history, IP address, network SSID, or a combination of these.
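The re-linking works because a reset changes only the advertising ID, while the identifiers listed above stay constant. The sketch below illustrates the principle with made-up sample values; it is a simplified illustration of the technique, not code from the study.

```python
import hashlib

def device_fingerprint(imei: str, ssid: str, sim_serial: str) -> str:
    """A stable fingerprint built from identifiers a reset doesn't change."""
    return hashlib.sha256(f"{imei}|{ssid}|{sim_serial}".encode()).hexdigest()[:16]

# Made-up sample identifiers for illustration.
before_reset = ("ad-id-1111", device_fingerprint("356938035643809", "HomeWiFi", "894412345"))
after_reset  = ("ad-id-2222", device_fingerprint("356938035643809", "HomeWiFi", "894412345"))

# The advertising IDs differ, but the fingerprints match, so the collector
# can trivially join the new ID onto the old tracking history.
print(before_reset[1] == after_reset[1])  # True
```

Any one stable identifier that survives the reset is enough to stitch the histories back together.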
Potential cross-linking data collection points
Source: Trinity College Dublin
Privacy-conscious Android forks like /e/OS are getting more traction as increasing numbers of users realize that they have no means to disable the unwanted functionality in vanilla Android and seek more privacy on their devices.
However, the majority of Android users remain locked into a never-ending stream of data collection, which is where regulators and consumer-protection organizations need to step in and put an end to it.
Gael Duval, the creator of /e/OS, told BleepingComputer:
Today, more people understand that the advertising model that is fueling the mobile OS business is based on the industrial capture of personal data at a scale that has never been seen in history, at the world level. This has negative impacts on many aspects of our lives, and can even threaten democracy as seen in recent cases. I think regulation is needed more than ever regarding personal data protection. It has started with the GDPR, but it’s not enough and we need to switch to a “privacy by default” model instead of “privacy as an option”.
Update – A Google spokesperson has provided BleepingComputer the following comment on the findings of the study:
While we appreciate the work of the researchers, we disagree that this behavior is unexpected – this is how modern smartphones work. As explained in our Google Play Services Help Center article, this data is essential for core device services such as push notifications and software updates across a diverse ecosystem of devices and software builds. For example, Google Play services uses data on certified Android devices to support core device features. Collection of limited basic information, such as a device’s IMEI, is necessary to deliver critical updates reliably across Android devices and apps.
Epic Games CEO Tim Sweeney, whose high-profile antitrust lawsuit against Apple is now under appeal, is today calling out the iPhone maker for giving itself access to an advertising slot its competitors don’t have: the iPhone’s Settings screen. Some iOS 15 users noticed Apple is now advertising its own services at the top of their Settings, just below their Apple ID. The services being suggested are personalized to the device owner, based on which ones they already subscribe to, it appears.
For example, those without an Apple Music subscription may see an ad offering a free six-month trial. However, current Apple Music subscribers may instead see a prompt to add on a service they don’t yet have, like AppleCare coverage for their devices.
Sweeney suggests this sort of first-party advertising is an anticompetitive risk for Apple, as some of the services it’s pushing here are those that directly compete with third-party apps published on its App Store. But those third-party apps can’t gain access to the iPhone’s Settings screen, of course — they can only bid for ad slots within the App Store itself.
Writes Sweeney: “New from the guys who banned Fortnite: settings-screen ads for their own music service, which come before the actual settings, and which aren’t available to other advertisers like Spotify or SoundCloud.”
A developer who created a browser extension designed to help Facebook users reduce their time spent on the platform says that the company responded by banning him and threatening to take legal action.
Louis Barclay says he created Unfollow Everything to help people enjoy Facebook more, not less. His extension, which no longer exists, allowed users to automatically unfollow everybody on their FB account, thus eliminating the newsfeed feature, one of the more odious, addictive parts of the company’s product. The feed, which allows for an endless barrage of targeted advertising, is powered by follows, not friends, so even without it, users can still visit the profiles they want to and navigate the site like normal.
The purpose of bucking the feed, Barclay says, was to allow users to enjoy the platform in a more balanced, targeted fashion, rather than being blindly coerced into constant engagement by Facebook’s algorithms.
How did Facebook reward Barclay for trying to make its user experience less toxic? Well, first it booted him off of all of its platforms—locking him out of his Facebook and Instagram accounts. Then, it sent him a cease and desist letter, threatening legal action if he didn’t shut the browser extension down. Ultimately, Barclay said he was forced to do so, and Unfollow Everything no longer exists. He recently wrote about his experience in an op-ed for Slate, saying:
If someone built a tool that made Facebook less addictive—a tool that allowed users to benefit from Facebook’s positive features while limiting their exposure to its negative ones—how would Facebook respond?
I know the answer, because I built the tool, and Facebook squashed it.
England’s National Data Guardian has warned that government plans to allow data sharing between NHS bodies and the police could “erode trust and confidence” in doctors and other healthcare providers.
The bill, set to go through the House of Lords this month, could force NHS bodies such as commissioning groups to share data with police and other specified authorities to prevent and reduce serious violence in their local areas.
Dr Nicola Byrne, the current National Data Guardian, said the proposed law could “erode trust and confidence, and deter people from sharing information, and even from presenting for clinical care.”
Meanwhile, the bill [PDF] did not detail what information it would cover, she said. “The case isn’t made as to why that is necessary. These things need to be debated openly and in public.”
In a blog published last week, Dr Byrne said the bill imposes a duty on clinical groups in the NHS to disclose information to police, and provides that doing so does not breach any obligation of patient confidentiality.
“Whilst tackling serious violence is important, it is essential that the risks and harms that this new duty pose to patient confidentiality, and thereby public trust, are engaged with and addressed,” she said.
Microsoft said its Azure cloud service mitigated a 2.4 terabit-per-second (Tbps) distributed denial of service attack at the end of August this year, representing the largest DDoS attack recorded to date.
Amir Dahan, Senior Program Manager for Azure Networking, said the attack was carried out using a botnet of approximately 70,000 bots, located primarily in the Asia-Pacific region (in countries such as Malaysia, Vietnam, Taiwan, Japan, and China) as well as in the United States.
Dahan identified the target of the attack only as “an Azure customer in Europe.”
The Microsoft exec said the record-breaking DDoS attack came in three short waves, in the span of ten minutes, with the first at 2.4 Tbps, the second at 0.55 Tbps, and the third at 1.7 Tbps.
Dahan said Microsoft successfully mitigated the attack without Azure going down.
Prior to Microsoft’s disclosure today, the previous DDoS record was held by a 2.3 Tbps attack that Amazon’s AWS division mitigated in February 2020.
Dahan said the largest DDoS attack that hit Azure prior to the August attack was a 1 Tbps attack the company saw in Q3 2020, while this year, before the August attack, Azure had not seen a DDoS attack over 625 Gbps.
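For a sense of scale, the reported figures imply only a modest contribution per bot. A back-of-the-envelope check, assuming (as a simplification not stated in the report) that the 2.4 Tbps peak was spread evenly across the roughly 70,000 bots:

```python
# Rough per-bot share of the peak attack bandwidth, assuming the
# 2.4 Tbps peak was distributed evenly across ~70,000 bots.
PEAK_BPS = 2.4e12      # 2.4 terabits per second
BOT_COUNT = 70_000

per_bot_mbps = PEAK_BPS / BOT_COUNT / 1e6
print(f"~{per_bot_mbps:.1f} Mbps per bot")  # ~34.3 Mbps per bot
```

Roughly 34 Mbps per machine is well within reach of an ordinary broadband or compromised-server connection, which is what makes botnets of this size dangerous.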
Record for largest application-layer DDoS attack broken days later too
Just days after Microsoft mitigated this attack, a botnet called Meris broke a different DDoS record — the record for the largest application-layer DDoS attack, measured in requests per second rather than raw bandwidth.
According to Qrator Labs, the operators of the Meris botnet launched a DDoS attack of 21.8 million requests per second (RPS) in early September. Sources told The Record last month that the attack targeted a Russian bank that was hosting its e-banking portal on Yandex Cloud servers.
It is unclear if the Meris botnet was behind the attack detected and mitigated by Microsoft in August. An Azure spokesperson did not respond to a request for comment.
Another day, another massive privacy breach nobody will do much about. This time it’s Neiman Marcus, which issued a statement indicating that the personal data of roughly 4.6 million U.S. consumers was exposed thanks to a previously undisclosed data breach that occurred last year. According to the company, the data exposed included login information, credit card payment information, virtual gift card numbers, names, addresses, and the security questions attached to Neiman Marcus accounts. The company is, as they always are in the wake of such breaches, very, very sorry:
“At Neiman Marcus Group, customers are our top priority,” said Geoffroy van Raemdonck, Chief Executive Officer. “We are working hard to support our customers and answer questions about their online accounts. We will continue to take actions to enhance our system security and safeguard information.”
As is par for the course for this kind of stuff, the actual breach is likely much worse than what’s first being reported here. And by the time the full scope of the breach becomes clear, the press will have largely lost interest. The company set up a website for those impacted to get more information. In this case, impacted consumers didn’t even get free credit reporting, the standard mea culpa handout after these kinds of events (which is worthless anyway, since consumers have already received free credit reporting for countless hacks and leaks over the last five to ten years).
A US judge has temporarily blocked a new law in Texas that effectively bans women from having an abortion.
District Judge Robert Pitman granted a request by the Biden administration to prevent any enforcement of the law while its legality is being challenged.
The law, which prohibits women in Texas from obtaining an abortion after six weeks of pregnancy, was drafted and approved by Republican politicians.
The White House praised the latest ruling as an important step.
“The fight has only just begun, both in Texas and in many states across this country where women’s rights are currently under attack,” White House Press Secretary Jen Psaki said.
Texan officials immediately appealed against the ruling, setting the stage for further court battles.
Judge Pitman, of Austin, wrote in a 113-page opinion that, from the moment the law came into effect on 1 September, “women have been unlawfully prevented from exercising control over their lives in ways that are protected by the Constitution”.
“This court will not sanction one more day of this offensive deprivation of such an important right,” he said on Wednesday.
Whole Woman’s Health, which runs a number of clinics in Texas, said it was making plans to resume abortions “as soon as possible”.
But the anti-abortion group Texas Right to Life accused judges of “catering to the abortion industry” and called for a “fair hearing” at the next stage.