Google must delete search results about you if they’re fake, EU court rules

People in Europe can get Google to delete search results about them if they prove the information is “manifestly inaccurate,” the EU’s top court ruled Thursday.

The case kicked off when two investment managers asked Google to dereference results returned by searches on their names, which linked to articles criticising their group’s investment model. They say those articles contain inaccurate claims.

Google refused to comply, arguing that it was unaware whether the information contained in the articles was accurate or not.

But in a ruling Thursday, the Court of Justice of the European Union opened the door to the investment managers being able to successfully trigger the so-called “right to be forgotten” under the EU’s General Data Protection Regulation.

“The right to freedom of expression and information cannot be taken into account where, at the very least, a part – which is not of minor importance – of the information found in the referenced content proves to be inaccurate,” the court said in a press release accompanying the ruling.

People who want to scrub inaccurate results from search engines have to provide sufficient proof that what is said about them is false. But that proof doesn’t have to come from a court case against a publisher, for instance. They have “to provide only evidence that can reasonably be required of [them] to try to find,” the court said.

[…]

Source: Google must delete search results about you if they’re fake, EU court rules – POLITICO

Telegram is auctioning phone numbers to let users sign up to the service without any SIM

After putting unique usernames up for auction on the TON blockchain, Telegram is now putting anonymous numbers up for bidding. These numbers can be used to sign up for Telegram without needing a SIM card.

Just like the username auction, you can buy these virtual numbers on Fragment, which is a site specially created for Telegram-related auctions. To buy a number, you will have to link your TON wallet (Tonkeeper) to the website.

You can buy a random number for as low as 9 toncoins, which is equivalent to roughly $16.50 at the time of writing. Some of the premium virtual numbers — such as +888-8-888 — are selling for 31,500 toncoins (~$58,200).

Notably, you can only use this number to sign up for Telegram. You can’t use it to receive SMS or calls, or to register for another service.

For Telegram, this is another way of asking its most loyal users to support the app by helping it make some money. The company launched its premium subscription plan earlier this year. On Tuesday, the chat app’s founder Pavel Durov said that Telegram has more than 1 million paid users just a few months after the launch of its premium features. While Telegram offers features like cross-device sync and large groups, it’s important to remember that chats are not protected by end-to-end encryption by default.

As for folks who want anonymization, Telegram already lets you hide your phone number. Alternatively, there are tons of virtual phone number services out there — including Google Voice, Hushed, and India-based Doosra — that allow you to receive calls and SMS as well.

Source: Telegram is auctioning phone numbers to let users sign up to the service without any SIM

Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them – at a privacy institute!

[…]

graduate students at Northeastern University were able to organize and beat back an attempt at introducing invasive surveillance devices that were quietly placed under desks at their school.

Early in October, Senior Vice Provost David Luzzi installed motion sensors under all the desks at the school’s Interdisciplinary Science & Engineering Complex (ISEC), a facility used by graduate students and home to the “Cybersecurity and Privacy Institute” which studies surveillance. These sensors were installed at night—without student knowledge or consent—and when pressed for an explanation, students were told this was part of a study on “desk usage,” according to a blog post by Max von Hippel, a Privacy Institute PhD candidate who wrote about the situation for the Tech Workers Coalition’s newsletter.

[…]

In response, students began to raise concerns about the sensors, and an email was sent out by Luzzi attempting to address issues raised by students.

[…]

“The results will be used to develop best practices for assigning desks and seating within ISEC (and EXP in due course).”

To that end, Luzzi wrote, the university had deployed “a Spaceti occupancy monitoring system” that would use heat sensors at groin level to “aggregate data by subzones to generate when a desk is occupied or not.” Luzzi added that the data would be anonymized, aggregated to look at “themes” and not individual time at assigned desks, not be used in evaluations, and not shared with any supervisors of the students. Following that email, an impromptu listening session was held in the ISEC.

At this first listening session, Luzzi asked that grad student attendees “trust the university since you trust them to give you a degree.” Luzzi also maintained that “we are not doing any science here” as another defense of the decision not to seek IRB approval.

“He just showed up. We’re all working, we have paper deadlines and all sorts of work to do. So he didn’t tell us he was coming, showed up demanding an audience, and a bunch of students spoke with him,”

[…]

After that, the students at the Privacy Institute, who specialize in studying surveillance and reversing its harm, started removing the sensors, hacking into them, and working on an open source guide so other students could do the same. Luzzi had claimed the devices were secure and the data encrypted, but Privacy Institute students learned they were relatively insecure and unencrypted.

[…]

After hacking the devices, students wrote an open letter to Luzzi and university president Joseph E. Aoun asking for the sensors to be removed because they were intimidating, part of a poorly conceived study, and deployed without IRB approval even though human subjects were at the center of the so-called study.

“Resident in ISEC is the Cybersecurity and Privacy Institute, one of the world’s leading groups studying privacy and tracking, with a particular focus on IoT devices,” the letter reads. “To deploy an under-desk tracking system to the very researchers who regularly expose the perils of these technologies is, at best, an extremely poor look for a university that routinely touts these researchers’ accomplishments.”

[…]

Another listening session followed, this time for professors only, where Luzzi claimed the devices were not subject to IRB approval because “they don’t sense humans in particular – they sense any heat source.” More sensors were removed afterwards and put into a “public art piece” in the building lobby spelling out NO!

[…]

Afterwards, von Hippel took to Twitter and shared what became a semi-viral thread documenting the entire timeline of events, from the secret installation of the sensors to the listening session held that day. Hours later, the sensors were removed.

[…]

This was a particularly instructive episode because it shows that surveillance need not be permanent—that it can be rooted out by the people affected by it, together.

[…]

“The most powerful tool at the disposal of graduate students is the ability to strike. Fundamentally, the university runs on graduate students.

[…]

“The computer science department was able to organize quickly because almost everybody is a union member, has signed a card, and are all networked together via the union. As soon as this happened, we communicated over union channels.

[…]

This sort of rapid response is key, especially as more and more systems adopt sensors for increasingly spurious or concerning reasons. Sensors have been rolled out at other universities like Carnegie Mellon University, as well as public school systems. They’ve seen use in more militarized and carceral settings such as the US-Mexico border or within America’s prison system.

These rollouts are part of what Cory Doctorow calls the “shitty technology adoption curve” whereby horrible, unethical and immoral technologies are normalized and rationalized by being deployed on vulnerable populations for constantly shifting reasons. You start with people whose concerns can be ignored—migrants, prisoners, homeless populations—then scale it upwards—children in school, contractors, non-unionized workers. By the time it gets to people whose concerns and objections would be the loudest and most integral to its rejection, the technology has already been widely deployed.

[…]

Source: ‘NO’: Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them

As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights

[…]

We’ve already spent many, many words explaining how age verification technology is inherently dangerous and actually puts children at greater risk. Not to mention it’s a privacy nightmare that normalizes the idea of mass surveillance, especially for children.

But, why take our word for it?

The French data protection agency, CNIL, has declared that no existing age verification technology can be deemed safe and non-threatening to privacy rights.

Now, there are many things that I disagree with CNIL about, especially its view that the EU’s censorial “right to be forgotten” should be applied globally. But one thing we likely agree on is that CNIL does not fuck around when it comes to data protection stuff. CNIL is generally seen as the most aggressive and most thorough in its data protection/data privacy work. Being on the wrong side of CNIL is a dangerous place for any company to be.

So I’d take it seriously when CNIL effectively notes that all age verification is a privacy nightmare, especially for children:

The CNIL has analysed several existing solutions for online age verification, checking whether they have the following properties: sufficiently reliable verification, complete coverage of the population and respect for the protection of individuals’ data and privacy and their security.

The CNIL finds that there is currently no solution that satisfactorily meets these three requirements.

Basically, CNIL found that all existing age verification techniques are unreliable, easily bypassed, and are horrible regarding privacy.

Despite this, CNIL seems oddly optimistic that just by nerding harder, perhaps future solutions will magically work. However, it does go through the weaknesses and problems of the various offerings being pushed today as solutions. For example, you may recall that when I called out the dangers of the age verification in California’s Age Appropriate Design Code, a trade group representing age verification companies reached out to me to let me know there was nothing to worry about, because they’d just scan everyone’s faces to visit websites. CNIL points out some, um, issues with this:

The use of such systems, because of their intrusive aspect (access to the camera on the user’s device during an initial enrolment with a third party, or a one-off verification by the same third party, which may be the source of blackmail via the webcam when accessing a pornographic site is requested), as well as because of the margin of error inherent in any statistical evaluation, should imperatively be conditional upon compliance with operating, reliability and performance standards. Such requirements should be independently verified.

This type of method must also be implemented by a trusted third party respecting precise specifications, particularly concerning access to pornographic sites. Thus, an age estimate performed locally on the user’s terminal should be preferred in order to minimise the risk of data leakage. In the absence of such a framework, this method should not be deployed.

Every other verification technique seems to raise similar questions about effectiveness and about how protective (or, well, how unprotective) it is of privacy rights.

So… why isn’t this raising alarm bells among the various legislatures and children’s advocates (many of whom also claim to be privacy advocates) who are pushing for these laws?

Source: As US, UK Embrace ‘Age Verify Everyone!’ French Data Protection Agency Says Age Verification Is Unreliable And Violates Privacy Rights | Techdirt

Players are boycotting Nintendo and Panda events in the wake of Smash Bros tournaments being instacanceled by Nintendo

In the wake of Nintendo being Nintendo and unceremoniously canceling the Smash World Tour, one of the year’s biggest esports tournaments dedicated to all things Super Smash Bros., copious folks in the game’s community have come out in protest. Casual fans, pro players, long-time commentators, and even other tournament organizers, from AITX eSports to Beyond the Summit, have all publicly denounced not just Nintendo for its asinine decision but also Panda Global for allegedly causing the Smash World Tour to get shut down. Now, it appears many of those people are boycotting all of Nintendo’s officially licensed tournaments as well.

[…]

Super Smash Bros. fans aren’t happy about what’s going on, with many posting their frustrations on Twitter. Some pointed fingers at Panda Global CEO and co-founder Dr. Alan Bunney for allegedly trying to recruit tournaments to the Panda Cup by threatening to get Nintendo involved to shut the Smash World Tour down and reportedly attempting to create a monopoly by requesting exclusive streaming rights to the Panda Cup. Others fear this may hurt their careers and livelihoods. The main consensus is to never watch, support, or attend a Panda Global event ever again. A lot of people seem to feel this way.

[…]

The future of Super Smash Bros.’s competitive fighting game scene is looking quite precarious, with Video Game Boot Camp admitting in the statement that it’s “currently navigating budget cuts, internal communications with our team and partners, commitments/contracts, as well as sponsorship negotiations that will inevitably be affected by all of this.” It’s possible that smaller tournaments will continue without Nintendo’s blessing, but, as has been done time and again, it’s likely only a matter of time until Nintendo comes a-knocking.

[…]

Source: Smash Bros. Fans Are Totally Done With Nintendo And Tournaments

The article says that the Smash Bros tournaments were cancelled because Nintendo would not sponsor them, but in fact they were cancelled because Nintendo threw cease and desist letters at the organisers. Also see: Nintendo Shuts Down Smash World Tour – world’s largest e-sports tournament – out of the blue

Telegram shares users’ data with Indian court in copyright violation lawsuit

Telegram has disclosed the names, phone numbers and IP addresses of administrators of channels accused of copyright infringement, in compliance with a court order in India, in a remarkable illustration of the data the instant messaging platform stores on its users and can be made to disclose to authorities.

The app operator was forced by a Delhi High Court order to share the data after a teacher sued the firm for not doing enough to prevent unauthorised distribution of her course material on the platform. Neetu Singh, the plaintiff teacher, said a number of Telegram channels were re-selling her study materials at discounted prices without permission.

An Indian court had earlier ordered Telegram to adhere to Indian law and disclose details about those operating such channels.

Telegram unsuccessfully argued that disclosing user information would violate its privacy policy and the laws of Singapore, where it has located its physical servers for storing users’ data. In response, the Indian court said the copyright owners couldn’t be left “completely remediless against the actual infringers” because Telegram has chosen to locate its servers outside the country.

In an order last week, Justice Prathiba Singh said Telegram had complied with the earlier order and shared the data.

“Let copy of the said data be supplied to Ld. Counsel for plaintiffs with the clear direction that neither the plaintiffs nor their counsel shall disclose the said data to any third party, except for the purposes of the present proceedings. To this end, disclosure to the governmental authorities/police is permissible,” said the court in its order (PDF), first reported by LiveLaw.

[…]

Source: Telegram shares users’ data in copyright violation lawsuit | TechCrunch

Eufy Cameras Have Been Uploading Unencrypted Face Footage to Cloud

Eufy, the company behind a series of affordable security cameras I’ve previously suggested over the expensive stuff, is currently in a bit of hot water for its security practices. The company, owned by Anker, purports its products to be among the few security devices that allow for locally-stored media and don’t need a cloud account to work efficiently. But over the turkey-eating holiday, a noted security researcher across the pond discovered a security hole in Eufy’s mobile app that threatens that whole premise.

Paul Moore relayed the issue in a tweeted screengrab. Moore had purchased the Eufy Doorbell Dual Camera for its promise of a local storage option, only to discover that the doorbell’s cameras had been storing thumbnails of faces on the cloud, along with identifiable user information, despite Moore not even having a Eufy Cloud Storage account.

After Moore tweeted the findings, another user found that the data uploaded to Eufy wasn’t even encrypted. Any uploaded clips could be easily played back on any desktop media player, which Moore later demonstrated. What’s more: thumbnails and clips were linked to their partner cameras, offering additional identifiable information to any digital snoopers sniffing around.

Android Central was able to recreate the issue on its own with a EufyCam 3. It then reached out to Eufy, which explained to the site why this issue was cropping up. If you choose to have a motion notification pushed out with an attached thumbnail, Eufy temporarily uploads that file to its AWS servers to send it out.
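The claim that uploaded clips “could be easily played back on any desktop media player” implies they were stored as ordinary, unencrypted video containers reachable over HTTP. The following is a minimal sketch of that kind of check, assuming a purely hypothetical clip URL (the real AWS object paths from the report are not reproduced here):

```python
import urllib.request

# Hypothetical URL standing in for an uploaded clip; this is an assumption
# for illustration, not an actual Eufy/AWS path.
CLIP_URL = "https://example-bucket.s3.amazonaws.com/hypothetical-clip.mp4"

def looks_like_plain_mp4(header: bytes) -> bool:
    """Return True if the bytes start like an unencrypted ISO-BMFF/MP4 file.

    A standard MP4 begins with a box whose type field (bytes 4-8) reads
    'ftyp'; an encrypted-at-rest blob would not expose that plaintext marker.
    """
    return len(header) >= 8 and header[4:8] == b"ftyp"

with urllib.request.urlopen(CLIP_URL) as resp:
    first_bytes = resp.read(16)

if looks_like_plain_mp4(first_bytes):
    print("Clip looks like a plain, unencrypted MP4 container.")
else:
    print("Clip does not look like a standard MP4 (possibly encrypted or another format).")
```

If a clip really is a plain MP4 like this, any media player can open it, which matches what Moore demonstrated.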

[…]

Unfortunately, this isn’t the first time Eufy has had an issue regarding security on its cameras. Last year, the company faced similar reports of “unwarranted access” to random camera feeds, though the company quickly fixed the issue once it was discovered. Eufy is no stranger to patching things up.

Source: Eufy Cameras Have Been Uploading Unencrypted Footage to Cloud

Why upload these images to AWS first instead of sending them out directly?!

Nintendo Shuts Down Smash World Tour – world’s largest e-sports tournament – out of the blue

The organisers of the Smash World Tour have today announced that they are being shut down after Nintendo, “without any warning”, told them they could “no longer operate”.

The Tour, which is run by a third party (since Nintendo has been so traditionally bad at this), had grown over the years to become one of the biggest in the esports and fighting game scene. As the SWT team say:

In 2022 alone, we connected over 6,400 live events worldwide, with over 325,000 in-person entrants, making the Smash World Tour (SWT, or the Tour) the largest esports tour in history, for any game title. The Championships would also have had the largest prize pool in Smash history at over $250,000. The 2023 Smash World Tour planned to have a prize pool of over $350,000.

That’s all toast, though, because organisers now say “Without any warning, we received notice the night before Thanksgiving from Nintendo that we could no longer operate”. While Nintendo has yet to comment—we’ve reached out to the company (UPDATE: see comment at bottom of post)—Nintendo recently teamed up with Panda to run a series of competing, officially-licensed Smash events.

While this will be a disappointment to SWT’s organisers, fans and players, it has also placed the team in a huge financial hole, since so many bookings and plans for the events had already been made. As they say in the cancellation announcement:

We don’t know where everything will land quite yet with contracts, sponsor obligations, etc — in short, we will be losing hundreds of thousands of dollars due to Nintendo’s actions. That being said, we are taking steps to remedy many issues that have arisen from canceling the upcoming Smash World Tour Championships — Especially for the players. Please keep an eye out in the coming days for help with travel arrangements. Given the timeline that we were forced into, we had to publish this statement before we could iron out all of the details. All attendees will be issued full refunds.

The move blindsided the SWT team who had believed, after years of friction, they were starting to make some progress with Nintendo:

In November 2021, after the Panda Cup was first announced, Nintendo contacted us to jump on a call with a few folks on their team, including a representative from their legal team. We truly thought we might be getting shut down given the fact that they now had a licensed competing circuit and partner in Panda.

Once we joined the call, we were very surprised to hear just the opposite.

Nintendo reached out to us to let us know that they had been watching us build over the years, and wanted to see if we were interested in working with them and pursuing a license as well. They made it clear that Panda’s partnership was not exclusive, and they said it had “not gone unnoticed” that we had not infringed on their IP regarding game modifications and had represented Nintendo’s values well. They made it clear that game modifications were their primary concern in regards to “coming down on events”, which also made sense to us given their enforcement over the past few years in that regard.

That lengthy conversation changed our perspective on Nintendo at a macro level; it was incredibly refreshing to talk to multiple senior team members and clear the air on a lot of miscommunications and misgivings in the years prior. We explained why so many in the community were hesitant to reach out to Nintendo to work together, and we truly believed Nintendo was taking a hard look at their relationship with the community, and ways to get involved in a positive manner.

Guess not! In addition to Nintendo now stipulating that tournaments could only run with an official license—something SWT had not been successful in applying for—the team also allege that Panda went around undermining them to the organisers of individual events (the World Tour would have been an umbrella linking these together), and that while Nintendo continued saying nice things to their faces, Panda had told these grassroots organisers that the Smash World Tour was definitely getting shut down, which made them reluctant to come onboard.

You can read the full announcement here, which goes into a lot more detail, and closes with an appeal “that Nintendo reconsiders how it is currently proceeding with their relationship with the Smash community, as well as its partners”.

UPDATE 12:16am ET, November 30: A Nintendo spokesperson tells Kotaku:

Unfortunately after continuous conversations with Smash World Tour, and after giving the same deep consideration we apply to any potential partner, we were unable to come to an agreement with SWT for a full circuit in 2023. Nintendo did not request any changes to or cancellation of remaining events in 2022, including the 2022 Championship event, considering the negative impact on the players who were already planning to participate.

UPDATE 2 1:51am ET, November 30: SWT’s organizers have disputed Nintendo’s statement, issuing a follow-up of their own which reads:

We did not expect to have to address this, but Nintendo’s response via Kotaku has been brought to our attention:

“Unfortunately after continuous conversations with Smash World Tour, and after giving the same deep consideration we apply to any potential partner, we were unable to come to an agreement with SWT for a full circuit in 2023. Nintendo did not request any changes to or cancellation of remaining events in 2022, including the 2022 Championship event, considering the negative impact on the players who were already planning to participate.”

We are unsure why they are taking this angle, especially in light of the greater statement and all that it contains.

To reiterate from the official statement:

“As a last ditch effort, we asked if we could continue running the Championships and the Tour next year without a license, and shift our focus to working with them in 2024. We alluded to how the last year functioned in that capacity, with a mutual understanding that we would not get shut down and focus on the future. We were told directly that those times were now over. This was the final nail in the coffin given our very particular relationship with Nintendo. This is when we realized it truly was all being shut down for real. We asked if they understood the waves that would be made if we were forced to cancel, and Nintendo communicated that they were indeed aware.”

To be clear, we asked Nintendo multiple times if they had considered the implications of canceling the Championships as well as next year’s Tour. They affirmed that they had considered all variables.

We received this statement in writing from Nintendo shortly after our call:

“It is Nintendo’s expectation that an approved license be secured in order to operate any commercial activity featuring Nintendo IP. It is also expected to secure such a license well in advance of any public announcement. After further review, we’ve found that the Smash World Tour has not met these expectations around health & safety guidelines and has not adhered to our internal partner guidelines. Nintendo will not be able to grant a license for the Smash World Tour Championship 2022 or any Smash World Tour activity in 2023.”

To be clear, we did not even submit an application for 2023 yet, the license application was for the 2022 Championships (submitted in April). Nintendo including all 2023 activity was an addition we were not even expecting. In our call that accompanied the statement, we asked multiple times if we would be able to continue to operate without a license as we had in years past with the same “unofficial” understanding with Nintendo. We were told point blank that those “times are over.” They followed up the call with their statement in writing, again confirming both the 2022 Championships and all 2023 activity were in the exact same boat.

Source: Nintendo Shuts Down Smash World Tour ‘Without Any Warning’

Mercedes locks faster acceleration behind a yearly $1,200 subscription – the car can already go faster, they slowed you down

Mercedes is the latest manufacturer to lock auto features behind a subscription fee, with an upcoming “Acceleration Increase” add-on that lets drivers pay to access motor performance their vehicle is already capable of.

The $1,200 yearly subscription improves performance by boosting output from the motors by 20–24 percent, increasing torque, and shaving around 0.8 to 0.9 seconds off 0–60 mph acceleration when in Dynamic drive mode (via The Drive). The subscription doesn’t come with any physical hardware upgrades — instead, it simply unlocks the full capabilities of the vehicle, indicating that Mercedes intentionally limited performance to later sell as an optional extra. Acceleration Increase is only available for the Mercedes-EQ EQE and Mercedes-EQ EQS electric car models.

[…]

This comes just months after BMW sparked outrage by similarly charging an $18 monthly subscription in some countries for owners to use the heated seats already installed within its vehicles, just one of many features paywalled by the car manufacturer since 2020. BMW had previously also tried (and failed) to charge its customers $80 a month to access Apple CarPlay and Android Auto — features that other vehicle makers have included for free.

Source: Mercedes locks faster acceleration behind a yearly $1,200 subscription – The Verge

So they are basically saying you don’t really own the product you spent around $100,000 to buy.

Google Settles 40 States’ Location Data Suit for only $392 Million

Google agreed to a $391.5 million settlement on Monday to end a lawsuit accusing the tech giant of tricking users with location data privacy settings that didn’t actually turn off data collection. The payout, the result of a suit brought by 40 state attorneys general, marks one of the biggest privacy settlements in history. Google also promised to make additional changes to clarify its location tracking practices next year.

“For years Google has prioritized profit over their users’ privacy,” said Ellen Rosenblum, Oregon’s attorney general who co-led the case, in a press release. “They have been crafty and deceptive. Consumers thought they had turned off their location tracking features on Google, but the company continued to secretly record their movements and used that information for advertisers.”

[…]

The attorneys’ investigation into Google and subsequent lawsuit came after a 2018 report that found Google’s Location History setting didn’t stop the company’s location tracking, even though the setting promised that “with Location History off, the places you go are no longer stored.” Google quickly updated the description of its settings, clarifying that you actually have to turn off a completely different setting called Web & App Activity if you want the company to stop following you around.

[…]

Despite waves of legal and media attention, Google’s location settings are still confusing, according to experts in interface design. The fine print makes it clear that you need to change multiple settings if you don’t want Google collecting data about everywhere you go, but you have to read carefully. It remains to be seen how clearly the changes the company promised in the settlement will communicate its data practices.

[…]


Source: Google Settles 40 States’ Location Data Suit for $392 Million

Apple Vanquishes Evil YouTube Account Full Of Old Apple WWDC Videos

Many of you are likely to be familiar with WWDC, Apple’s Worldwide Developer Conference. This is one of those places where you get a bunch of Apple product reveals and news updates that typically result in the press tripping all over themselves to bow at the altar of an iPhone 300 or whatever. The conference has been going on for decades and one enterprising YouTube account made a point of archiving video footage from past events so that any interested person could go back and see the evolution of the company.

Until now, that is, since Apple decided to copyright-strike Brendan Shanks’ account to hell.


Now, he’s going to be moving the videos over to the Internet Archive, but that will take time and I suppose there’s nothing keeping Apple from turning its copyright guns to that site as well. In the meantime, this treasure trove of videos that Apple doesn’t seem to want to bother hosting itself is simply gone.

Now, did Shanks have permission from Apple to post those videos? He says no. Does that mean that Apple can take copyright action on them? Sure does! But the question is why. Why are antiquated videos, interesting mostly to hobbyists, worth all this chaos and bad PR?

The videos in question were decades-old recordings of WWDC events.

Due to the multiple violations, not only were the videos removed, but Shanks’ YouTube channel has been disabled. In addition to losing the archive, Shanks also lost his personal YouTube account, as well as his YouTube TV, which he’d just paid for.

And so here we are again, with a large company killing off a form of preservation effort in the name of draconian copyright enforcement. Good times.

Source: Apple Vanquishes Evil YouTube Account Full Of Old Apple WWDC Videos | Techdirt

Apple Apps Track You Even With Privacy Protections on – and they hoover a LOT

For all of Apple’s talk about how private your iPhone is, the company vacuums up a lot of data about you. iPhones do have a privacy setting that is supposed to turn off that tracking. According to a new report by independent researchers, though, Apple collects extremely detailed information on you with its own apps even when you turn off tracking, an apparent direct contradiction of Apple’s own description of how the privacy protection works.

The iPhone Analytics setting makes an explicit promise. Turn it off, and Apple says that it will “disable the sharing of Device Analytics altogether.” However, Tommy Mysk and Talal Haj Bakry, two app developers and security researchers at the software company Mysk, took a look at the data collected by a number of Apple iPhone apps—the App Store, Apple Music, Apple TV, Books, and Stocks. They found the analytics control and other privacy settings had no obvious effect on Apple’s data collection—the tracking remained the same whether iPhone Analytics was switched on or off.

[…]

The App Store appeared to harvest information about every single thing you did in real time, including what you tapped on, which apps you searched for, what ads you saw, how long you looked at a given app, and how you found it. The app sent details about you and your device as well, including ID numbers, what kind of phone you’re using, your screen resolution, your keyboard languages, and how you’re connected to the internet—notably, the kind of information commonly used for device fingerprinting.
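To see why that particular combination of fields matters, note that a handful of fairly stable device attributes is enough to derive a persistent identifier even without an explicit device ID. Here is a minimal illustrative sketch; the field names and values are invented for the example and are not Apple’s actual analytics payload:

```python
import hashlib
import json

# Invented attributes mirroring the categories described in the report:
# device model, screen resolution, keyboard languages, connection type.
device_attributes = {
    "model": "iPhone14,2",
    "screen_resolution": "1170x2532",
    "keyboard_languages": ["en-US", "nl-NL"],
    "connection_type": "wifi",
}

# Serialising the attributes deterministically and hashing them yields a
# stable value for this device: the essence of fingerprinting.
canonical = json.dumps(device_attributes, sort_keys=True).encode("utf-8")
fingerprint = hashlib.sha256(canonical).hexdigest()
print(fingerprint[:16])  # a short, stable per-device identifier
```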

“Opting-out or switching the personalization options off did not reduce the amount of detailed analytics that the app was sending,” Mysk said. “I switched all the possible options off, namely personalized ads, personalized recommendations, and sharing usage data and analytics.”

[…]

Most of the apps that sent analytics data shared consistent ID numbers, which would allow Apple to track your activity across its services, the researchers found.

[…]

In the App Store, for example, the fact that you’re looking at apps related to mental health, addiction, sexual orientation, and religion can reveal things that you might not want sent to corporate servers.

It’s impossible to know what Apple is doing with the data without the company’s own explanation, and, as is so often the case, Apple has been silent so far.

[…]

You can see what the data looks like for yourself in the video Mysk posted to Twitter, documenting the information collected by the App Store:

The App Store on your iPhone is watching your every move

This isn’t an every-app-is-tracking-me-so-what’s-one-more situation. These findings are out of line with standard industry practices, Mysk says. He and his research partner ran similar tests in the past looking at analytics in Google Chrome and Microsoft Edge. In both of those apps, Mysk says the data isn’t sent when analytics settings are turned off.

[…]

Source: Apple Apps Track You Even With Privacy Protections on: Report

Senator Wyden Asks State Dept. To Explain Why It’s Handing Out ‘Unfettered’ Access To Americans’ Passport Data

[…]

In 2018, a blockbuster report detailed the actions of CBP agent Jeffrey Rambo. Rambo apparently took it upon himself to track down whistleblowers and leakers. To do this, he cozied up to a journalist and leveraged the wealth of data on travelers collected by federal agencies in hopes of sniffing out sources.

A few years later, another report delved deeper into the CBP and Rambo’s actions. This reporting — referencing a still-redacted DHS Inspector General’s report — showed the CBP routinely tracked journalists (as well as activists and immigration lawyers) via a national counter-terrorism database. This database was apparently routinely queried for reasons unrelated to national security objectives and the information obtained was used to open investigations targeting journalists.

That report remains redacted nearly a year later. But Senator Ron Wyden is demanding answers from the State Department about its far too cozy relationship with other federal agencies, including the CBP.

The State Department is giving law enforcement and intelligence agencies unrestricted access to the personal data of more than 145 million Americans, through information from passport applications that is shared without legal process or any apparent oversight, according to a letter sent from Sen. Ron Wyden to Secretary of State Antony Blinken and obtained by Yahoo News.

The information was uncovered by Wyden during his ongoing probe into reporting by Yahoo News about Operation Whistle Pig, a wide-ranging leak investigation launched by a Border Patrol agent and his supervisors at the U.S. Customs and Border Protection’s National Targeting Center.

On Wednesday, Wyden sent a letter to Blinken requesting detailed information on which federal agencies are provided access to State Department passport information on U.S. citizens.

The letter [PDF] from Wyden points out that the State Department is giving “unfettered” access to at least 25 federal agencies, including DHS components like the CBP. The OIG report into “Operation Whistle Pig” (the one that remains redacted) details Agent Rambo’s actions. Subsequent briefings by State Department officials provided more details that are cited in Wyden’s letter.

More than 25 agencies, but the State Department has so far refused to identify them.

Department officials declined to identify the specific agencies, but said that both law enforcement and intelligence agencies can access the [passport application] database. They further stated that, while the Department is not legally required to provide other agencies with such access, the Department has done so without requiring these other agencies to obtain compulsory legal process, such as a subpoena or court order.

Sharing is caring, the State Department believes. However, it cannot explain why it feels this passport application database should be an open book to whatever government agencies seek access to it. This is unacceptable, says Senator Wyden. Citing the “clear abuses” by CBP personnel detailed in the Inspector General’s report, Wyden is demanding details the State Department has so far refused to provide, like which agencies have access and the number of times these agencies have accessed the Department’s database.

Why? Because rights matter, no matter what the State Department and its beneficiaries might think.

The Department’s mission does not include providing dozens of other government agencies with self-service access to 145 million Americans’ personal data. The Department has voluntarily taken on this role, and in doing so, prioritized the interests of other agencies over those of law-abiding Americans.

That’s the anger on behalf of millions expressed by Senator Wyden. There are also demands. Wyden not only wants answers, he wants changes. He has instructed the State Department to put policies in place to ensure the abuses seen in “Operation Whistle Pig” do not reoccur. He also says the Department should notify Americans when their passport application info is accessed or handed over to government agencies. Finally, he instructs the Department to provide annual statistics on outside agency access to the database, so Americans can better understand who’s going after their data.

So, answers and changes, things federal agencies rarely enjoy engaging with. The answers are likely to be long in coming. The requested changes, even more so. But at least this drags the State Department’s dirty laundry out into the daylight, which makes it a bit more difficult for the Department to continue to ignore a problem it hasn’t addressed for more than three years.

Source: Senator Wyden Asks State Dept. To Explain Why It’s Handing Out ‘Unfettered’ Access To Americans’ Passport Data | Techdirt

Dutch foundation launches mass privacy claim against Twitter – DutchNews.nl

A Dutch foundation is planning to take legal action against social media platform Twitter for illegally collecting and trading in personal details gathered via free apps such as Duolingo and Wordfeud as well as dating apps and weather forecaster Buienradar. Twitter owned the advertising platform MoPub between 2013 and January 2022, and that is where the problem lies, the SDBN foundation says. It estimates 11 million people’s information may have been illegally gathered and sold. Between 2013 and 2021, MoPub had access to information gleaned via 30,000 free apps on smartphones and tablets, the foundation says. In essence, the foundation says, consumers ‘paid with their privacy’ without giving permission.

The foundation is demanding compensation on behalf of the apps’ users and if Twitter refuses to pay, the foundation will start a legal case against the company.

Source: Dutch foundation launches mass privacy claim against Twitter – DutchNews.nl

Shazam – an Apple company – was also doing this kind of thing. It’s pretty disturbing that this kind of news isn’t a surprise at all any more.

But who is SDBN to collect for Dutch people? I don’t recall them starting up a class action for people to subscribe to and I doubt they will be dividing the money out to the Dutch people either.

Greece To Ban Sale of Spyware After Government Is Accused of Surveillance of opposition party leader

Prime Minister Kyriakos Mitsotakis has announced that Greece will ban the sale of spyware, after his government was accused in a news report of targeting dozens of prominent politicians, journalists and businessmen for surveillance, and the judicial authorities began an investigation. From a report: The announcement is the latest chapter in a scandal that erupted over the summer, when Mr. Mitsotakis conceded that Greece’s state intelligence service had been monitoring an opposition party leader with a traditional wiretap last year. That revelation came after the politician discovered that he had also been targeted with a spyware program known as Predator.

The Greek government said the wiretap was legal but never specified the reasons for it, and Mr. Mitsotakis said it was done without his knowledge. The government has also asserted that it does not own or use the Predator spyware, and has insisted that the simultaneous targeting with a wiretap and Predator was a coincidence.

Source: Greece To Ban Sale of Spyware After Government Is Accused of Surveillance – Slashdot

Microsoft’s GitHub Copilot Sued Over ‘Software Piracy on an Unprecedented Scale’

“Microsoft’s GitHub Copilot is being sued in a class action lawsuit that claims the AI product is committing software piracy on an unprecedented scale,” reports IT Pro.

Programmer/designer Matthew Butterick filed the case Thursday in San Francisco, saying it was on behalf of millions of GitHub users potentially affected by the $10-a-month Copilot service: The lawsuit seeks to challenge the legality of GitHub Copilot, as well as OpenAI Codex which powers the AI tool, and has been filed against GitHub, its owner Microsoft, and OpenAI…. “By training their AI systems on public GitHub repositories (though based on their public statements, possibly much more), we contend that the defendants have violated the legal rights of a vast number of creators who posted code or other work under certain open-source licences on GitHub,” said Butterick.

These licences include a set of 11 popular open source licences that all require attribution of the author’s name and copyright. This includes the MIT licence, the GNU General Public Licence, and the Apache licence. The case claimed that Copilot violates and removes these licences offered by thousands, possibly millions, of software developers, and is therefore committing software piracy on an unprecedented scale.

Copilot, which is entirely run on Microsoft Azure, often simply reproduces code that can be traced back to open-source repositories or licensees, according to the lawsuit. The code never contains attributions to the underlying authors, which is in violation of the licences. “It is not fair, permitted, or justified. On the contrary, Copilot’s goal is to replace a huge swath of open source by taking it and keeping it inside a GitHub-controlled paywall….” Moreover, the case stated that the defendants have also violated GitHub’s own terms of service and privacy policies, the DMCA code 1202 which forbids the removal of copyright-management information, and the California Consumer Privacy Act.
The lawsuit also accuses GitHub of monetizing code from open source programmers, “despite GitHub’s pledge never to do so.”

And Butterick argued to IT Pro that “AI systems are not exempt from the law… If companies like Microsoft, GitHub, and OpenAI choose to disregard the law, they should not expect that we the public will sit still.” Butterick believes AI can only elevate humanity if it’s “fair and ethical for everyone. If it’s not… it will just become another way for the privileged few to profit from the work of the many.”

Reached for comment, GitHub pointed IT Pro to its announcement Monday that, starting next year, suggested code fragments will come with the ability to identify when they match, or are similar to, other publicly available code.

The article adds that this lawsuit “comes at a time when Microsoft is looking at developing Copilot technology for use in similar programmes for other job categories, like office work, cyber security, or video game design, according to a Bloomberg report.”

Source: Microsoft’s GitHub Copilot Sued Over ‘Software Piracy on an Unprecedented Scale’ – Slashdot

Qualcomm v Arm: The bizarro quotient just went off the scale

[…]

Qualcomm and Arm have been engaged in one of those very entertainingly bitter court fist-fights that the industry throws up when friends fall out over money. Briefly, Qualcomm builds its mobile device chips around Arm, for which it pays Arm a lot of money. Qualcomm bought another Arm-licensed company, Nuvia, and inherited Nuvia’s own Arm deals and derived IP. Arm said ‘Nu-uh, can’t do that.’ And into court they tumbled.

This sort of thing is normally lawyers locking horns over profit. Sometimes, though, it feels more like a fight to the death – and in this case, Qualcomm is making the case that a lot more than the details of per-chip licensing costs are involved. It says that Arm is about to make huge changes to its business model, imposing savage new restrictions on how its IP is used and making all its money from device makers, not chip companies. Which would cut Qualcomm off at the knees, if true.

[…]

The move to license device makers instead of chip makers would be massively complicated for everyone, and would give Arm much more power: instead of negotiating with a few very large concerns, it would face a much more diverse market with many smaller clients. Doubtless the market regulators would be very interested in that, but it’s not quite world-beating suicidal madness.

World-beating suicidal madness comes with the other idea – that Arm would refuse to license a design that didn’t use purely Arm intellectual property. You want a GPU design to go with the CPU? Arm. An AI accelerator? Arm or nothing.

The chip industry has always had a fondness for these sorts of shenanigans, but has known better than to write them down. You want a particular CPU? Terribly sorry, but there’s a really long lead time on that part – unless you also buy the rest of our support chips… then we can do business. It’s unethical, usually illegal, and even the biggest names look the other way when their sales teams do it.

[…]

Source: Qualcomm v Arm: The bizarro quotient just went off the scale • The Register

[…] Qualcomm’s amended response to Arm’s lawsuit against the US chip giant. Arm is right now trying to stop Qualcomm from developing custom Arm-compatible processors using CPU core designs Qualcomm obtained via its acquisition of Nuvia. According to Arm, Qualcomm should have got, and failed to get, Arm’s permission to absorb Nuvia’s technologies, which were derived from Arm-licensed IP.

Qualcomm counterclaimed that Arm tried to demand at least “tens of millions” of dollars in transfer fees and extra royalties for using the newly acquired Nuvia designs.

[…]

Qualcomm states in its filing [PDF] that Arm has signaled it “will no longer license CPU technology to semiconductor companies” once existing agreements expire.

This would be an incredible transformation for Softbank-owned Arm: how exactly would Arm-based chips get into devices if no more Arm technology licenses are issued to chip designers … unless, perhaps, Arm starts making its own chips, which it’s previously said it has no appetite for, or it gets certain chip designers to make pure Arm-designed processors for it, and the makers of the end products using these components get charged a royalty per device.

In response to Qualcomm’s filing, Arm’s veep of external communication Phil Hughes didn’t directly address the allegations about licensing changes, but said the filing is “riddled with inaccuracies, and we will address many of these in our formal legal response that is due in the coming weeks.”

[…]

Thus, Qualcomm is claiming a whole range of manufacturers – from those in the embedded electronics space to personal computing – using Arm-compatible chips may need to directly pay Arm a royalty for every device sold. And if they don’t, they’ll need to shop elsewhere for a system-on-chip architecture, which could be unfortunate for them because Arm has few rivals. In fields like smartphones, few alternatives exist. Ironically, Qualcomm acquired Nuvia to make itself a better alternative to Intel and AMD in laptops.

[…]

The language in Qualcomm’s filing is specific and nuanced. It talks of threats by Arm, and Arm indicating it intends to do certain things. At first read, Qualcomm’s filing appears to state outright that Arm will change its business model; on second read, it appears more that Qualcomm is claiming Arm is threatening it will overhaul its licensing approach – to the detriment of Qualcomm – so as to scare Qualcomm into agreeing to Arm’s terms regarding the Nuvia acquisition and its licensed technologies.

Qualcomm previously complained Arm is trying to steer it onto higher royalty rates, by making it renegotiate its licensing agreements following the acquisition of Nuvia and its Arm-derived technologies.

Meanwhile, no matter how unfair Qualcomm believes Arm has acted, Qualcomm still has to answer Arm’s initial complaint: that Qualcomm transferred Nuvia’s Arm license and Arm-derived technology to itself after the acquisition, whereas the fine print of Nuvia’s agreement with Arm is that any such transfer must be negotiated with Arm, and that Qualcomm allegedly failed to do so and is in breach of contract.

Qualcomm says this assertion is simply wrong.

Whatever happens, this case has the potential to shine a light into some dark corners of the semiconductor industry – and this filing suggests whatever we find down there will be fascinating.

Source: Qualcomm: Arm threatens to end CPU licensing, charge device makers instead

Iran’s Secret Manual for Controlling Protesters’ Mobile Phones

As furious anti-government protests swept Iran, the authorities retaliated with both brute force and digital repression. Iranian mobile and internet users reported rolling network blackouts, mobile app restrictions, and other disruptions. Many expressed fears that the government can track their activities through their indispensable and ubiquitous smartphones.

Iran’s tight grip on the country’s connection to the global internet has proven an effective tool for suppressing unrest. The lack of clarity about what technological powers are held by the Iranian government — one of the most opaque and isolated in the world — has engendered its own form of quiet terror for prospective dissidents. Protesters have often been left wondering how the government was able to track down their locations or gain access to their private communications — tactics that are frighteningly pervasive but whose mechanisms are virtually unknown.

While disconnecting broad swaths of the population from the web remains a favored blunt instrument of Iranian state censorship, the government has far more precise, sophisticated tools available as well. Part of Iran’s data clampdown may be explained through the use of a system called “SIAM,” a web program for remotely manipulating cellular connections made available to the Iranian Communications Regulatory Authority. The existence of SIAM and details of how the system works, reported here for the first time, are laid out in a series of internal documents from an Iranian cellular carrier that were obtained by The Intercept.

According to these internal documents, SIAM is a computer system that works behind the scenes of Iranian cellular networks, providing its operators a broad menu of remote commands to alter, disrupt, and monitor how customers use their phones. The tools can slow their data connections to a crawl, break the encryption of phone calls, track the movements of individuals or large groups, and produce detailed metadata summaries of who spoke to whom, when, and where. Such a system could help the government invisibly quash the ongoing protests — or those of tomorrow — an expert who reviewed the SIAM documents told The Intercept.

“SIAM can control if, where, when, and how users can communicate,” explained Gary Miller, a mobile security researcher and fellow at the University of Toronto’s Citizen Lab. “In this respect, this is not a surveillance system but rather a repression and control system to limit the capability of users to dissent or protest.”

[…]

Based on the manuals, SIAM offers an effortless way to throttle a phone’s data speeds, one of roughly 40 features included in the program. This ability to downgrade users’ speed and network quality is particularly pernicious because it can not only obstruct one’s ability to use their phone, but also make whatever communication is still possible vulnerable to interception.

Referred to within SIAM as “Force2GNumber,” the command allows a cellular carrier to kick a given phone off the substantially faster, more secure 3G and 4G networks and onto an obsolete and extremely vulnerable 2G connection. Such a network downgrade would simultaneously render a modern smartphone largely useless and open its calls and texts to interception.

[…]

downgrading users to a 2G connection could also expose perilously sensitive two-factor authentication codes delivered to users through SMS.

[…]

SIAM also provides a range of tools to track the physical locations of cell users, allowing authorities to both follow an individual’s movements and identify everyone present at a given spot. Using the “LocationCustomerList” command allows SIAM operators to see what phone numbers have connected to specified cell towers along with their corresponding IMEI number, a unique string of numbers assigned to every mobile phone in the world. “For example,” Miller said, “if there is a location where a protest is occurring, SIAM can provide all of the phone numbers currently at that location.”

SIAM’s tracking of unique device identifiers means that swapping SIM cards, a common privacy-preserving tactic, may be ineffective in Iran, since IMEI numbers persist even with a new SIM.
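The reason SIM swapping fails here is that the IMEI identifies the handset itself, while the SIM only supplies the subscriber identity, so any carrier-side log keyed on IMEI keeps pointing at the same device no matter which SIM is inserted. A minimal sketch of that idea, using invented log entries and field names rather than anything from the SIAM documents:

```python
from collections import defaultdict

# Invented network-side log entries: each records which handset (IMEI)
# connected with which subscriber number (MSISDN).
connection_log = [
    {"imei": "356938035643809", "msisdn": "+98-912-000-0001"},  # original SIM
    {"imei": "356938035643809", "msisdn": "+98-935-000-0002"},  # same phone, new SIM
    {"imei": "490154203237518", "msisdn": "+98-912-000-0003"},  # a different phone
]

# Grouping by IMEI collapses both numbers onto one device record, which is
# why a SIM swap alone does not break this kind of tracking.
numbers_per_device = defaultdict(set)
for entry in connection_log:
    numbers_per_device[entry["imei"]].add(entry["msisdn"])

for imei, numbers in sorted(numbers_per_device.items()):
    print(imei, "->", sorted(numbers))
```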

[…]

user data accessible through SIAM includes the customer’s father’s name, birth certificate number, nationality, address, employer, billing information, and location history, including a record of Wi-Fi networks and IP addresses from which the user has connected to the internet.

[…]

SIAM allows its operators to learn a great deal not just about where a customer has been, but also what they’ve been up to, a bounty of personal data that, Miller said, “can enable CRA to create a social network/profile of the user based on his/her communication with other people.”

By entering a particular phone number and the command “GetCDR” into SIAM, a system user can generate a comprehensive Call Detail Record, including the date, time, duration, location, and recipients of a customer’s phone calls during a given time period. A similar rundown can be conducted for internet usage as well using the “GetIPDR” command, which prompts SIAM to list the websites and other IP addresses a customer has connected to, the time and date these connections took place, the customer’s location, and potentially the apps they opened. Such a detailed record of internet usage could also reveal users running virtual private networks, which are used to cover a person’s internet trail by routing their traffic through an encrypted connection to an outside server. VPNs — including some banned by the government — have become tremendously popular in Iran as a means of evading domestic web censorship.
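As a rough illustration of what a single call-detail entry of the kind described above contains (who called whom, when, for how long, and from roughly where), here is a minimal sketch; the field names and example values are assumptions for illustration and are not SIAM’s actual output format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    """One illustrative call-detail entry; field names are assumed, not SIAM's."""
    caller: str           # subscriber's phone number
    recipient: str        # number the call was placed to
    start_time: datetime  # date and time the call began
    duration_s: int       # call length in seconds
    cell_tower_id: str    # serving cell tower, i.e. an approximate location

# A list of such records over a time window is enough to reconstruct a
# person's contacts and movements, which is the capability the report
# attributes to the "GetCDR" command.
example = CallDetailRecord(
    caller="+98-9XX-XXX-XXXX",
    recipient="+98-9XX-XXX-XXXX",
    start_time=datetime(2022, 10, 1, 18, 30),
    duration_s=240,
    cell_tower_id="THR-1234",
)
print(example)
```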

Though significantly less subtle than being forced onto a 2G network, SIAM can also be used to entirely pull the plug on a customer’s device at will. Through the “ApplySuspIp” command, the system can entirely disconnect any mobile phone on the network from the internet for predetermined lengths of time or permanently. Similar commands would let SIAM block a user from placing or receiving calls.

[…]

 

Source: Iran’s Secret Manual for Controlling Protesters’ Mobile Phones

Meta fined measly $24.6m over political ad non-disclosure and disinformation

Despite warnings of Chinese and Russian mischief and manipulation ahead of the US midterm elections, it seems American companies and citizens are perfectly capable of denting democracy on their own.

A Washington judge fined Meta $24.6 million this week after ruling that Facebook intentionally broke [PDF] the state’s campaign finance transparency laws 822 times. This fine was the maximum amount, we’re told, and represents the largest-ever penalty of its kind in the US.

To put the fine in perspective: it’s about half a day of Meta’s quarterly profits, which in these uncertain economic times dropped to $4.4 billion for Q3 this year.
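A back-of-envelope check of that comparison, assuming a 92-day quarter:

```python
q3_profit = 4.4e9                      # Meta's Q3 2022 profit, in dollars
fine      = 24.6e6                     # the Washington penalty
days_of_profit = fine / (q3_profit / 92)
print(f"{days_of_profit:.2f} days")    # ≈ 0.51 days, i.e. about half a day
```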

In addition to paying the pocket change, Meta was ordered [PDF] by the judge to reimburse the Washington state attorney general’s legal costs, which the judge said should be tripled “as punitive damages for Meta’s intentional violations of state law.”

While the exact amount hasn’t been determined, Attorney General Bob Ferguson said that legal bill totals $10.5 million for Facebook’s “arrogance.” Again, pocket change.

“It intentionally disregarded Washington’s election transparency laws. But that wasn’t enough,” Ferguson said. “Facebook argued in court that those laws should be declared unconstitutional. That’s breathtaking.”

The state requires internet outfits like Meta that display political ads on their websites and in their apps to keep records on these campaigns and make these details publicly available. This includes the cost of the advert and who paid for it along with information on which users were targeted and how far the ads reached.
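For concreteness, the record Washington requires for each political ad might look roughly like this; the field names are my own shorthand for the items listed above, not the statutory wording:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PoliticalAdRecord:
    """Illustrative only: the disclosure details a platform must keep publicly available."""
    sponsor: str                              # who paid for the ad
    cost_usd: float                           # what the ad cost
    first_shown: date
    last_shown: date
    targeting: dict[str, str] = field(default_factory=dict)  # which users were targeted
    impressions: int = 0                      # how far the ad reached
```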

Meta, which at the time was known as Facebook, repeatedly failed to do this, denying netizens details of who was pushing political ads on them. Specifically, the tech giant did not “maintain and make available for public inspection books of account and related materials” regarding the political ads, according to court documents [PDF] filed in 2020.

[…]

So-called “pink-slime newsrooms” — hyper-partisan publications that are dressed up as independent regional media — are spending millions of dollars on Facebook and Instagram ad campaigns in battleground states in the lead-up to America’s November midterm elections, a NewsGuard Misinformation Monitor found. These ads either push netizens to obviously left or right-leaning articles, or are snippets of articles contained within the ad.

Four of these outlets, some backed by Republican donors and others by Democratic ones, have collectively spent $3.94 million on ad campaigns running simultaneously on Meta’s platforms so far in 2022, according to an investigation by the media trust org. The ad content or the articles they link to are at best highly partisan, and at worst play fast and loose with the truth to push a point. The goal, it seems, is to get people fired up enough to vote for one particular side, while appearing to be published by a normal media operation rather than a political campaign.

[…]

Their strategy seems to work, too. One of the publishers, Courier Newsroom, in an August 2022 case study, touted spending $49,000 on Facebook ads targeting 12 Iowa counties ahead of the state’s June 2022 primary election. The political spending resulted in 3,300 more votes, which NewsGuard suggested likely went to Democrats.

[…]

 

Source: Meta fined record-breaking $24.6m over political ads • The Register

Anti-Cheat Software Continues To Be The New DRM In Pissing Off Legit Customers

[…]

if you’ve been paying attention over the last couple of years, anti-cheat software is quickly becoming the new DRM. There have been complaints about access to root layers of the computer, complaints about performance effects, complaints about how the software tracks customer behavior, and now, finally, we have the good old “software isn’t letting me play my game” type of complaint. This one revolves around Kotaku’s Luke Plunkett, whose writing I’ve always found valuable, attempting to review EA’s latest FIFA game.

I have reviewed FIFA in some capacity on this website for well over a decade, but regular readers who are also football fans may have noticed I haven’t said a word about it this year. That’s because, over a month after the PC version’s release, I am still locked out of it thanks to a broken, over-zealous example of anti-cheat protection.

Publisher EA uses Easy Anti-Cheat, which has given me an error preventing me from even launching the game that every published workaround—from running the program as an administrator to disabling overlays (?) to editing my PC’s bios (??!!)—hasn’t solved. And so for one whole month, a game that I own and have never cheated at in my life, remains unplayable. I’ve never even made it to the main menu.

Well, gosh golly gee, that sure seems like a problem. And Plunkett isn’t your average FIFA customer. He’s a professional in the gaming journalism space and has reviewed a metric ton of games in the past. If he can’t get into the game due to this anti-cheat software, what hope does the average gamer have?

He goes on to note that FIFA isn’t the only game with this problem. EA also published Battlefield 2042, which Plunkett notes at least lets him boot into the game menu and allows him to play the game for a few minutes before it freezes up entirely. The same anti-cheat software appears to be the issue there as well.

Now, console gamers may chalk this all up to the perils of PC gaming. But that is, frankly, bullshit. This isn’t a hardware problem. It’s a publisher and software problem.

[…]

there’s certainly cheating going on in these games, but it seems like the anti-cheat software is the one cheating customers out of the games they bought.

Source: Anti-Cheat Software Continues To Be The New DRM In Pissing Off Legit Customers | Techdirt

Did PayPal Just Reintroduce Its $2,500 ‘Misinformation’ / ‘I disagree with you’ / ‘I find this offensive’ Fine, Hoping We Wouldn’t Notice?

“On October 8th, PayPal updated its terms of service agreement to include a clause enabling it to withdraw $2,500 from users’ bank accounts simply for posting anything the company deems as misinformation or offensive,” reports Grit Daily. “Unsurprisingly, the backlash was instant and massive,” causing the company to backtrack on the policy and claim the update was sent out “in error.” Now, after the criticism on social media died down, several media outlets are reporting that the company quietly reinstated the questionable misinformation fine — even though that itself may be a bit of misinformation. From a report: Apparently, they believed that everyone would just accept their claim and immediately forget about the incident. So the clause that was a mistake and was never intended to be included in PayPal’s terms of service magically ended up back in there once the criticism died back down. That sounds plausible, right? And as for what constitutes a “violation” of the company’s terms of service, the language is so vaguely worded that it could encompass literally anything.

The term “other forms of intolerance” is so broad that it legally gives the company grounds to claim that anyone not fully supporting any particular position is engaging in “intolerance,” because the definition of the word is the unwillingness to accept views, beliefs, or behavior that differ from one’s own. So essentially, this clause gives PayPal the perceived right to withdraw $2,500 from users’ accounts for voicing opinions that PayPal disagrees with. As news of PayPal’s most recent revision spreads, I anticipate that the company’s PR disaster will grow, and with numerous competing payment platforms available today, this could deliver a devastating and well-deserved blow to the company.

UPDATE: According to The Deep Dive, citing Twitter user Kelley K, PayPal “never removed the $2,500 fine. It’s been there for over a year. All they removed earlier this month was a new section that mentioned misinformation.”

She goes on to highlight the following:

1.) [T]he $2,500 fine has been there since September 2021.
2.) PayPal did remove what was originally item number 5 of the Prohibited Activities annex, the portion that contained the questionable “promoting misinformation” clause that the company claims was an “error.”
3.) [T]he other portion, item 2.f., which includes “other forms of intolerance that is discriminatory” (language some have pointed out may also be dangerous because it is so vague), has been in the policy since the September 2021 update and was not recently added.

PayPal’s user agreement can be read here.

Source: Did PayPal Just Reintroduce Its $2,500 ‘Misinformation’ Fine, Hoping We Wouldn’t Notice? – Slashdot

Google’s Privacy Settings Finally Won’t Break Its Apps Anymore, But Require Using My Ad Center

[…] It used to be that the only way to prevent Google from using your data for targeted ads was turning off personalized ads across your whole account, or disabling specific kinds of data using a couple of settings, including Web & App Activity and YouTube History. Those two settings control whether Google collects certain details about what you do on its platform (you can see some of that data here). Turning off the controls meant Google wouldn’t use the data for ads, but it disabled some of the most useful features on services such as Maps, Search, and Google Assistant.

Thanks to a new set of controls, that’s no longer true. You can now leave Web & App Activity and YouTube History on, but drill into more specific settings to tell Google you don’t want the related data used for targeted ads.

The detail is tucked into an announcement about the rollout of a new hub for Google’s advertising settings called My Ad Center. “You can decide what types of your Google activity are used to show you ads, without impacting your experience with the utility of the product,” Jerry Dischler, vice president of ads at Google, wrote in a blog post.

That’s a major step in the direction of what experts call “usable privacy,” or data protection that’s easy to manage without breaking other parts of the internet.

[…]

You’ll find the new controls in My Ad Center, which starts rolling out to users this week. It primarily serves as a hub for Google’s existing ad controls, but you’ll find some expanded options, new tools, and a number of other updates.

When you open My Ad Center, you’ll be able to fine-tune whether you see ads related to certain subjects or advertisers. […] You’ll also be able to view ads and advertisers that you’ve seen recently, and see all the ads that specific advertisers have run over the last thirty days.

Google also includes a way to toggle off ads on sensitive subjects such as alcohol, parenting, and weight loss. Unlike similar settings on Facebook and Instagram, though, you can’t tell Google you don’t want to see ads about politics.

Source: Google’s Privacy Settings Finally Won’t Break Its Apps Anymore

So you’ll probably need to spend quite some time configuring this – we will see. But most importantly, you are now directly telling Google what you do and don’t like (and what you don’t like tells them plenty about what you do like), without them having to feed your search behaviour through an algorithm and guess at how best to /– mind control –/ sell ads to you.

Texas sues Google for allegedly capturing biometric data of millions without consent

Texas has filed a lawsuit against Alphabet’s (GOOGL.O) Google for allegedly collecting biometric data of millions of Texans without obtaining proper consent, the attorney general’s office said in a statement on Thursday.

The complaint says that companies operating in Texas have been barred for more than a decade from collecting people’s faces, voices or other biometric data without advance, informed consent.

“In blatant defiance of that law, Google has, since at least 2015, collected biometric data from innumerable Texans and used their faces and their voices to serve Google’s commercial ends,” the complaint said. “Indeed, all across the state, everyday Texans have become unwitting cash cows being milked by Google for profits.”

The collection occurred through products like Google Photos, Google Assistant, and Nest Hub Max, the statement said.

[…]

Source: Texas sues Google for allegedly capturing biometric data of millions without consent | Reuters

Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers

Networked doorbell surveillance cameras like Amazon’s Ring are everywhere, and have changed the nature of delivery work by letting customers take on the role of bosses to monitor, control, and discipline workers, according to a recent report (PDF) by the Data & Society tech research institute. “The growing popularity of Ring and other networked doorbell cameras has normalized home and neighborhood surveillance in the name of safety and security,” Data & Society’s Labor Futures program director Aiha Nguyen and research analyst Eve Zelickson write. “But for delivery drivers, this has meant their work is increasingly surveilled by the doorbell cameras and supervised by customers. The result is a collision between the American ideas of private property and the business imperatives of doing a job.”

Thanks to interviews with surveillance camera users and delivery drivers, the researchers are able to dive into a few major developments interacting here to bring this to a head. Obviously, the first one is the widespread adoption of doorbell surveillance cameras like Ring. Just as important as the adoption of these cameras, however, is the rise of delivery work and its transformation into gig labor. […] As the report lays out, Ring cameras allow customers to surveil delivery workers and discipline their labor by, for example, sharing shaming footage online. This dovetails with the “gigification” of Amazon’s delivery workers in two ways: labor dynamics and customer behavior.

“Gig workers, including Flex drivers, are sold on the promise of flexibility, independence and freedom. Amazon tells Flex drivers that they have complete control over their schedule, and can work on their terms and in their space,” Nguyen and Zelickson write. “Through interviews with Flex drivers, it became apparent that these marketed perks have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.”

That competition between workers manifests in other ways too, namely acquiescing to and complying with customer demands when delivering purchases to their homes. Even without cameras, customers have made onerous demands of Flex drivers even as the drivers are pressed to meet unrealistic and dangerous routes alongside unsafe and demanding productivity quotas. The introduction of surveillance cameras at the delivery destination, however, adds another level of surveillance to the gigification.

[…]

The report’s conclusion is clear: Amazon has deputized its customers and made them partners in a scheme that encourages antagonistic social relations, undermines labor rights, and provides cover for a march towards increasingly ambitious monopolistic exploits. As Nguyen and Zelickson point out, it is ingenious how Amazon has “managed to transform what was once a labor cost (i.e., supervising work and asset protection) into a revenue stream through the sale of doorbell cameras and subscription services to residents who then perform the labor of securing their own doorstep.”

Source: Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers – Slashdot

TikTok joins Uber, Facebook in Monitoring The Physical Location Of Specific American Citizens

The team behind the monitoring project — ByteDance’s Internal Audit and Risk Control department — is led by Beijing-based executive Song Ye, who reports to ByteDance cofounder and CEO Rubo Liang.

The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show. It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.

[…]

material reviewed by Forbes indicates that ByteDance’s Internal Audit team was planning to use this location information to surveil individual American citizens, not to target ads or any of these other purposes. Forbes is not disclosing the nature and purpose of the planned surveillance referenced in the materials in order to protect sources.

[…]

The Internal Audit and Risk Control team runs regular audits and investigations of TikTok and ByteDance employees, for infractions like conflicts of interest and misuse of company resources, and also for leaks of confidential information. Internal materials reviewed by Forbes show that senior executives, including TikTok CEO Shou Zi Chew, have ordered the team to investigate individual employees, and that it has investigated employees even after they left the company.

[…]

ByteDance is not the first tech giant to have considered using an app to monitor specific U.S. users. In 2017, the New York Times reported that Uber had identified various local politicians and regulators and served them a separate, misleading version of the Uber app to avoid regulatory penalties. At the time, Uber acknowledged that it had run the program, called “greyball,” but said it was used to deny ride requests to “opponents who collude with officials on secret ‘stings’ meant to entrap drivers,” among other groups.

[…]

Both Uber and Facebook also reportedly tracked the location of journalists reporting on their apps. A 2015 investigation by the Electronic Privacy Information Center found that Uber had monitored the location of journalists covering the company. Uber did not specifically respond to this claim. The 2021 book An Ugly Truth alleges that Facebook did the same thing, in an effort to identify the journalists’ sources. Facebook did not respond directly to the assertions in the book, but a spokesperson told the San Jose Mercury News in 2018 that, like other companies, Facebook “routinely use[s] business records in workplace investigations.”

[…]

Source: https://www.forbes.com/sites/emilybaker-white/2022/10/20/tiktok-bytedance-surveillance-american-user-data/

So a bit of anti-China stirring, although it’s pretty sad that this kind of surveillance by tech companies has nowadays been normalised by the US government refusing to punish it.