The Linkielist

Linking ideas with the world


Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers

Networked doorbell surveillance cameras like Amazon’s Ring are everywhere, and have changed the nature of delivery work by letting customers take on the role of bosses to monitor, control, and discipline workers, according to a recent report (PDF) by the Data & Society tech research institute. “The growing popularity of Ring and other networked doorbell cameras has normalized home and neighborhood surveillance in the name of safety and security,” Data & Society’s Labor Futures program director Aiha Nguyen and research analyst Eve Zelickson write. “But for delivery drivers, this has meant their work is increasingly surveilled by the doorbell cameras and supervised by customers. The result is a collision between the American ideas of private property and the business imperatives of doing a job.”

Thanks to interviews with surveillance camera users and delivery drivers, the researchers are able to dive into a few major developments interacting here to bring this to a head. Obviously, the first one is the widespread adoption of doorbell surveillance cameras like Ring. Just as important as the adoption of these cameras, however, is the rise of delivery work and its transformation into gig labor. […] As the report lays out, Ring cameras allow customers to surveil delivery workers and discipline their labor by, for example, sharing shaming footage online. This dovetails with the “gigification” of Amazon’s delivery workers in two ways: labor dynamics and customer behavior.

“Gig workers, including Flex drivers, are sold on the promise of flexibility, independence and freedom. Amazon tells Flex drivers that they have complete control over their schedule, and can work on their terms and in their space,” Nguyen and Zelickson write. “Through interviews with Flex drivers, it became apparent that these marketed perks have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.” That competition between workers manifests in other ways too, namely acquiescing to and complying with customer demands when delivering purchases to their homes. Even without cameras, customers have made onerous demands of Flex drivers even as the drivers are pressed to meet unrealistic and dangerous routes alongside unsafe and demanding productivity quotas. The introduction of surveillance cameras at the delivery destination, however, adds another level of surveillance to the gigification. […] The report’s conclusion is clear: Amazon has deputized its customers and made them partners in a scheme that encourages antagonistic social relations, undermines labor rights, and provides cover for a march towards increasingly ambitious monopolistic exploits. As Nguyen and Zelickson point out, it is ingenious how Amazon has “managed to transform what was once a labor cost (i.e., supervising work and asset protection) into a revenue stream through the sale of doorbell cameras and subscription services to residents who then perform the labor of securing their own doorstep.”

Source: Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers – Slashdot

TikTok joins Uber, Facebook in Monitoring The Physical Location Of Specific American Citizens

The team behind the monitoring project — ByteDance’s Internal Audit and Risk Control department — is led by Beijing-based executive Song Ye, who reports to ByteDance cofounder and CEO Rubo Liang.

The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show. It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.

[…]

material reviewed by Forbes indicates that ByteDance’s Internal Audit team was planning to use this location information to surveil individual American citizens, not to target ads or any of these other purposes. Forbes is not disclosing the nature and purpose of the planned surveillance referenced in the materials in order to protect sources.

[…]

The Internal Audit and Risk Control team runs regular audits and investigations of TikTok and ByteDance employees, for infractions like conflicts of interest and misuse of company resources, and also for leaks of confidential information. Internal materials reviewed by Forbes show that senior executives, including TikTok CEO Shou Zi Chew, have ordered the team to investigate individual employees, and that it has investigated employees even after they left the company.

[…]

ByteDance is not the first tech giant to have considered using an app to monitor specific U.S. users. In 2017, the New York Times reported that Uber had identified various local politicians and regulators and served them a separate, misleading version of the Uber app to avoid regulatory penalties. At the time, Uber acknowledged that it had run the program, called “greyball,” but said it was used to deny ride requests to “opponents who collude with officials on secret ‘stings’ meant to entrap drivers,” among other groups.

[…]

Both Uber and Facebook also reportedly tracked the location of journalists reporting on their apps. A 2015 investigation by the Electronic Privacy Information Center found that Uber had monitored the location of journalists covering the company. Uber did not specifically respond to this claim. The 2021 book An Ugly Truth alleges that Facebook did the same thing, in an effort to identify the journalists’ sources. Facebook did not respond directly to the assertions in the book, but a spokesperson told the San Jose Mercury News in 2018 that, like other companies, Facebook “routinely use[s] business records in workplace investigations.”

[…]

https://www.forbes.com/sites/emilybaker-white/2022/10/20/tiktok-bytedance-surveillance-american-user-data/

So a bit of anti-China stirring, although it’s pretty sad that this kind of surveillance by tech companies has nowadays been normalised by the US government refusing to punish it.

iOS 16 VPN Tunnels Leak Data, Even When Lockdown Mode Is Enabled

AmiMoJo shares a report from MacRumors: iOS 16 continues to leak data outside an active VPN tunnel, even when Lockdown mode is enabled, security researchers have discovered. Speaking to MacRumors, security researchers Tommy Mysk and Talal Haj Bakry explained that iOS 16’s approach to VPN traffic is the same whether Lockdown mode is enabled or not. The news is significant since iOS has a persistent, unresolved issue with leaking data outside an active VPN tunnel.

According to a report from privacy company Proton, an iOS VPN bypass vulnerability had been identified in iOS 13.3.1, which persisted through three subsequent updates. Apple indicated it would add Kill Switch functionality in a future software update that would allow developers to block all existing connections if a VPN tunnel is lost, but this functionality does not appear to prevent data leaks as of iOS 15 and iOS 16. Mysk and Bakry have now discovered that iOS 16 communicates with select Apple services outside an active VPN tunnel and leaks DNS requests without the user’s knowledge.

Mysk and Bakry also investigated whether iOS 16’s Lockdown mode takes the necessary steps to fix this issue and funnel all traffic through a VPN when one is enabled, and it appears that the exact same issue persists whether Lockdown mode is enabled or not, particularly with push notifications. This means that the minority of users who are vulnerable to a cyberattack and need to enable Lockdown mode are equally at risk of data leaks outside their active VPN tunnel. […] Due to the fact that iOS 16 leaks data outside the VPN tunnel even when Lockdown mode is enabled, internet service providers, governments, and other organizations may be able to identify users who have a large amount of traffic, potentially highlighting influential individuals. It is possible that Apple does not want a potentially malicious VPN app to collect some kinds of traffic, but seeing as ISPs and governments are then able to do so, even when that is exactly what the user is trying to avoid, it seems likely that this is part of the same VPN problem that affects iOS 16 as a whole.

https://m.slashdot.org/story/405931
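
What Mysk and Bakry describe is traffic escaping the tunnel at the OS level, which is something you can check from outside the phone. Below is a minimal Python sketch of the idea, not their actual methodology: watch an iPhone’s Wi-Fi traffic from a router or mirror port and flag anything that isn’t addressed to the VPN endpoint while the tunnel is up. The addresses are placeholders for your own setup.

```python
# Minimal sketch (assumptions throughout): observe the phone's traffic from the
# network side and flag packets that bypass the VPN tunnel. Requires scapy and
# root privileges; PHONE_IP and VPN_SERVER are placeholders, not real values.
from scapy.all import sniff, IP, DNSQR

PHONE_IP = "192.168.1.50"     # the iPhone's LAN address (placeholder)
VPN_SERVER = "203.0.113.10"   # the VPN tunnel endpoint (placeholder)

def flag_leak(pkt):
    if IP not in pkt or pkt[IP].src != PHONE_IP:
        return
    if pkt[IP].dst == VPN_SERVER:
        return  # inside the tunnel, as expected
    if pkt.haslayer(DNSQR):
        # DNS queries are the clearest giveaway: they name the service being contacted.
        print(f"DNS leak: {pkt[DNSQR].qname.decode()} -> {pkt[IP].dst}")
    else:
        print(f"traffic outside the tunnel -> {pkt[IP].dst}")

sniff(filter=f"host {PHONE_IP}", prn=flag_leak, store=False)
```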

Android Leaks Some Traffic Even When ‘Always-On VPN’ Is Enabled – Slashdot

Mullvad VPN has discovered that Android leaks traffic every time the device connects to a WiFi network, even if the “Block connections without VPN” or “Always-on VPN” feature is enabled. BleepingComputer reports: The data being leaked outside VPN tunnels includes source IP addresses, DNS lookups, HTTPS traffic, and likely also NTP traffic. This behavior is built into the Android operating system and is a design choice. However, Android users likely didn’t know this until now due to the inaccurate description of the “VPN Lockdown” features in Android’s documentation. Mullvad discovered the issue during a security audit that hasn’t been published yet, issuing a warning yesterday to raise awareness on the matter and apply additional pressure on Google.

Android offers a setting under “Network & Internet” to block network connections unless you’re using a VPN. This feature is designed to prevent accidental leaks of the user’s actual IP address if the VPN connection is interrupted or drops suddenly. Unfortunately, this feature is undercut by the need to accommodate special cases like identifying captive portals (like hotel WiFi) that must be checked before the user can log in or when using split-tunnel features. This is why Android is configured to leak some data upon connecting to a new WiFi network, regardless of whether you enabled the “Block connections without VPN” setting.

Mullvad reported the issue to Google, requesting the addition of an option to disable connectivity checks. “This is a feature request for adding the option to disable connectivity checks while “Block connections without VPN” (from now on lockdown) is enabled for a VPN app,” explains Mullvad in a feature request on Google’s Issue Tracker. “This option should be added as the current VPN lockdown behavior is to leaks connectivity check traffic (see this issue for incorrect documentation) which is not expected and might impact user privacy.” In response to Mullvad’s request, a Google engineer said this is the intended functionality and that it would not be fixed for the following reasons:

– Many VPNs actually rely on the results of these connectivity checks to function,
– The checks are neither the only nor the riskiest exemptions from VPN connections,
– The privacy impact is minimal, if not insignificant, because the leaked information is already available from the L2 connection.

Mullvad countered these points and the case remains open.

https://m.slashdot.org/story/405837
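
For context, the connectivity check Google is defending is the small HTTP probe Android fires whenever it joins a network, used to detect captive portals. A rough Python stand-in for that probe, using one of the well-known Google endpoints Android relies on, looks like this; it is only meant to show the kind of request Mullvad saw bypassing the tunnel.

```python
# A rough stand-in for Android's captive-portal probe: a request to a
# Google-operated endpoint that answers "204 No Content" when the network has
# real internet access. Android issues this itself on joining Wi-Fi.
import urllib.request

CHECK_URL = "http://connectivitycheck.gstatic.com/generate_204"

with urllib.request.urlopen(CHECK_URL, timeout=5) as resp:
    if resp.status == 204:
        print("204: direct internet access, no captive portal")
    else:
        # A captive portal typically intercepts the request and serves its login page instead.
        print(f"got {resp.status}: probably behind a captive portal")
```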

Why Reddit Is Losing It Over Samsung’s New Privacy Policy – it’s an incredible data grab

Samsung recently updated its privacy policy for all users with a Samsung account, effective Oct. 1. One Redditor read the policy, did not like what they saw, and shared it to r/android, highlighting what they consider to be the doc’s worst policy points. The thread blew up, with Android users aplenty decrying Samsung’s new policy. But why is everyone so pissed off, and is any of it worth worrying about? Let’s explore.

Samsung’s privacy policy is a bit creepy

From the jump, the new policy doesn’t look good. In fact, it appears downright invasive. There are the standard data giveaways we’ve come to expect: When you create a Samsung account, you must give over personal information like your name, age, address, email address, gender, etc. Par for the course.

However, Samsung also notes it will collect data such as credit card information, usernames and passwords for third-party services, photos, contacts, text logs, recordings of your voice generated during voice commands, and location data, including precise location data as well as nearby wifi access points and cell towers. It might come as a surprise to know a company like Samsung can keep your chat transcripts, contacts, and voice recordings, but there’s precedent: Apple found itself in hot water when third-party contractors revealed they were able to listen in on audio recordings from Siri requests, which included all kinds of personal conversations and activities.

Samsung also tracks your general activity via cookies, pixels, web beacons, and other means. The company claims this tracking is done for a variety of reasons, including remembering your information to avoid you having to retype it in the future, and to better learn how you use their services. To achieve these goals, it collects just about everything there is to know about your device, including your IP address, device model, device settings, websites you visit, and apps you download, among many others. The policy does remind you to adjust your privacy settings if you’re uncomfortable with this default tracking (as if anyone wouldn’t be).

The company says it has a lot of uses for this information, including ad delivery, communication with customers, enhancing their services, improving their business, identifying and preventing fraud and criminal activity, and to comply with “applicable legal requirements.” Further, they reserve the right to share your information with “subsidiaries and affiliates,” “business partners and third-parties,” as well as law enforcement and other authorities. In short, depending on the circumstances, your Samsung data could end up in the hands of a lot of third parties.

But that’s not everything. Under the “Notice to California Residents” section is where the juiciest policies emerge. While most of the info is the same, if broken down in a different way, there is one additional note about data Samsung collects: biometric information. The company doesn’t elaborate, but this entry implies Samsung obtains data from face and fingerprint scans, when traditionally, this information is stored on-device. Apple, for example, doesn’t have access to your face scans on your iPhone. Obviously, this is potentially concerning.

In addition, the California Residents section also discusses what data Samsung sells to third parties. Samsung says in the 12 months before this new policy went into effect, it may have sold data of yours, including device identifiers (cookies, pixel tags, etc.), purchase histories or tendencies, and network activity, including how you interact with websites.

[…]

If you’re eyeing your Galaxy Z Flip with newfound skepticism, I don’t blame you. Unfortunately, if you dive into the privacy policies for most of your other tech, you’ll be similarly disturbed. Samsung is hardly the only one collecting, sharing, and selling your data.

One Redditor does make a great point about the redundancy of privacy violations here. Sure, Google might have similar policies in place, but since Samsung runs Android, you’re really dealing with two meddling companies instead of one:

Considering the prices for their hardware, the un-removable bloatware that is generally inferior to the Google software, and anti-Right-to-Repair campaigns (and reflections in their hardware), I see no reason to buy their phones over Google’s. I’ll have just one company with intrusive insight into my personal device at a time, thank you.

[…]

Source: Why Reddit Is Losing It Over Samsung’s New Privacy Policy

Blizzard really really wants your phone number to play its games – personal data grab and security risk

When Overwatch 2 replaces the original Overwatch on Oct. 4, players will be required to link a phone number to their Battle.net accounts. If you don’t, you won’t be able to play Overwatch 2 — even if you’ve already purchased Overwatch. The same two-factor step, called SMS Protect, will also be used on all Call of Duty: Modern Warfare 2 accounts when that game launches, and new Call of Duty: Modern Warfare accounts.

Blizzard Entertainment announced SMS Protect and other safety measures ahead of Overwatch 2’s release. Blizzard said it implemented these controls because it wanted to “protect the integrity of gameplay and promote positive behavior in Overwatch 2.”

[…]

SMS Protect is a security feature that has two purposes: to keep players accountable for what Blizzard calls “disruptive behavior,” and to protect accounts if they’re hacked. It requires all Overwatch 2 players to attach a unique phone number to their account. Blizzard said SMS Protect will target cheaters and harassers; if an account is banned, it’ll be harder for them to return to Overwatch 2. You can’t just enter any old phone number — you actually have to have access to a phone receiving texts to that number to get into your account.

[…]

Blizzard said these phone notifications will be used to approve password resets — meaning someone else won’t be able to change your password without the notification code it’ll send to your mobile phone. Blizzard said it will also send you a text message if your account is locked out after “a suspicious login attempt,” or if your password or security features are changed.

Source: Overwatch 2 SMS Protect: What is it? Why does Blizzard require my phone number? – Polygon

So this is a piece of ‘real’ information you have to give them – but what if you move country and change mobile number? What if you lose your mobile? What if they get hacked (again) and someone takes over your number? It’s something that either does change in real life or is very hard to change in their system. It shows that Blizzard basically sees your data as something it can grab for free – you are their product. Even though the games are technically free to play, in practice they make a killing off the items you buy in-game in order to be cool.

They will probably get away with it though, just as they got away with installing spyware on your PC and surveilling you when you attend their events, under pretty flimsy pretenses.

This Controversial Artist Matches Influencer Photoshoots With Surveillance Footage

It’s an increasingly common sight on vacation, particularly in tourist destinations: An influencer sets up in front of a popular local landmark, sometimes even using props (coffee, beer, pets) or changing outfits, as a photographer or self-timed camera snaps away. Others are milling around, sometimes watching. But often, unbeknownst to everyone involved, another device is also recording the scene: a surveillance camera.

Belgian artist Dries Depoorter is exploring this dynamic in his controversial new online exhibit, The Followers, which he unveiled last week. The art project places static Instagram images side-by-side with video from surveillance cameras, which recorded footage of the photoshoot in question.

On its face, The Followers is an attempt, like many other studies, art projects and documentaries in recent years, to expose the staged, often unattainable ideals shown in many Instagram and influencer photos posted online. But The Followers also tells a darker story: one of increasingly worrisome privacy concerns amid an ever-growing network of surveillance technology in public spaces. And the project, as well as the techniques used to create it, has sparked both ethical and legal controversy.

To make The Followers, Depoorter started with EarthCam, a network of publicly accessible webcams around the world, to record a month’s worth of footage in tourist attractions like New York City’s Times Square and Dublin’s Temple Bar Pub. Then he enlisted an artificial intelligence (A.I.) bot, which scraped public Instagram photos taken in those locations, and facial-recognition software, which paired the Instagram images with the real-time surveillance footage.

Depoorter calls himself a “surveillance artist,” and this isn’t his first project using open-source webcam footage or A.I. Last year, for a project called The Flemish Scrollers, he paired livestream video of Belgian government proceedings with an A.I. bot he built to determine how often lawmakers were scrolling on their phones during official meetings.

“The idea [for The Followers] popped in my head when I watched an open camera and someone was taking pictures for like 30 minutes,” Depoorter tells Vice’s Samantha Cole. He wondered if he’d be able to find that person on Instagram.

[…]

The Followers has also hit some legal snags since going live. The project was originally up on YouTube, but EarthCam filed a copyright claim, and the piece has since been taken down. Depoorter tells Hyperallergic that he’s attempting to resolve the claim and get the videos re-uploaded. (The project is still available to view on the official website and the artist’s Twitter).

Depoorter hasn’t replied directly to much of the criticism, but he tells Input he wants the art to speak for itself. “I know which questions it raises, this kind of project,” he says. “But I don’t answer the question itself. I don’t want to put a lesson into the world. I just want to show the dangers of new technologies.”

Source: This Controversial Artist Matches Influencer Photos With Surveillance Footage | Smart News| Smithsonian Magazine

Fitbit accounts are being replaced by Google accounts

New Fitbit users will be required to sign up with a Google account from next year, while it also appears one will be needed to access some of the new features in years to come.

Google has been slowly integrating Fitbit into the fold since buying the company back in November 2019. Indeed, the latest products are now known as “Fitbit by Google”. However, as it currently stands, device owners have been able to maintain separate Google and Fitbit accounts.

Google has now revealed it is bringing Google Accounts to Fitbit in 2023, enabling a single login for both services. From that point on, all new sign ups will be through Google. Fitbit accounts will only be supported until 2025.

From that point on, a Google account will be the only way to go. To aid the transition, once the introduction of Google accounts begins, it’ll be possible to move existing devices over while maintaining all of the recorded data.

[…]

“We’ll be transparent with our customers about the timeline for ending Fitbit accounts through notices within the Fitbit app, by email, and in help articles.”

Whether that will be enough to assuage the concerns of the Fitbit user base – who didn’t have a say on whether Google bought their personal fitness data – remains to be seen.

Source: Fitbit accounts are being replaced by Google accounts | Trusted Reviews

So, the wonderful cloud – first of all, why should this data go to the cloud anyway? Second, you thought you were giving it to one provider, but it turns out you’re giving it to another, with no opt-out other than trashing an expensive piece of hardware.

US Military Bought Mass Monitoring Tool That Includes Internet Browsing, Email Data, Cookies – from a guy who helps run Tor

Multiple branches of the U.S. military have bought access to a powerful internet monitoring tool that claims to cover over 90 percent of the world’s internet traffic, and which in some cases provides access to people’s email data, browsing history, and other information such as their sensitive internet cookies, according to contracting data and other documents reviewed by Motherboard.

Additionally, Sen. Ron Wyden says that a whistleblower has contacted his office concerning the alleged warrantless use and purchase of this data by NCIS, a civilian law enforcement agency that’s part of the Navy, after filing a complaint through the official reporting process with the Department of Defense, according to a copy of the letter shared by Wyden’s office with Motherboard.

The material reveals the sale and use of a previously little known monitoring capability that is powered by data purchases from the private sector. The tool, called Augury, is developed by cybersecurity firm Team Cymru and bundles a massive amount of data together and makes it available to government and corporate customers as a paid service. In the private industry, cybersecurity analysts use it for following hackers’ activity or attributing cyberattacks. In the government world, analysts can do the same, but agencies that deal with criminal investigations have also purchased the capability. The military agencies did not describe their use cases for the tool. However, the sale of the tool still highlights how Team Cymru obtains this controversial data and then sells it as a business, something that has alarmed multiple sources in the cybersecurity industry.

“The network data includes data from over 550 collection points worldwide, to include collection points in Europe, the Middle East, North/South America, Africa and Asia, and is updated with at least 100 billion new records each day,” a description of the Augury platform in a U.S. government procurement record reviewed by Motherboard reads. It adds that Augury provides access to “petabytes” of current and historical data.

Motherboard has found that the U.S. Navy, Army, Cyber Command, and the Defense Counterintelligence and Security Agency have collectively paid at least $3.5 million to access Augury. This allows the military to track internet usage using an incredible amount of sensitive information. Motherboard has extensively covered how U.S. agencies gain access to data that in some cases would require a warrant or other legal mechanism by simply purchasing data that is available commercially from private companies. Most often, the sales center around location data harvested from smartphones. The Augury purchases show that this approach of buying access to data also extends to information more directly related to internet usage.

[…]

The Augury platform makes a wide array of different types of internet data available to its users, according to online procurement records. These types of data include packet capture data (PCAP) related to email, remote desktop, and file sharing protocols. PCAP generally refers to a full capture of data, and encompasses very detailed information about network activity. PCAP data includes the request sent from one server to another, and the response from that server too.

[…]

Augury also contains so-called netflow data, which creates a picture of traffic flow and volume across a network. That can include which server communicated with another, which is information that may ordinarily only be available to the server owner themselves or to the internet service provider that is carrying the traffic. That netflow data can be used for following traffic through virtual private networks, and show the server they are ultimately connecting from.

[…]

Team Cymru obtains this netflow data from ISPs; in return, Team Cymru provides the ISPs with threat intelligence. That transfer of data is likely happening without the informed consent of the ISPs’ users. A source familiar with the netflow data previously told Motherboard that “the users almost certainly don’t [know]” their data is being provided to Team Cymru, who then sells access to it.

It is not clear where exactly Team Cymru obtains the PCAP and other more sensitive information, whether that’s from ISPs or another method.

[…]

Beyond his day job as CEO of Team Cymru, Rabbi Rob Thomas also sits on the board of the Tor Project, a privacy focused non-profit that maintains the Tor software. That software is what underpins the Tor anonymity network, a collection of thousands of volunteer-run servers that allow anyone to anonymously browse the internet.

“Just like Tor users, the developers, researchers, and founders who’ve made Tor possible are a diverse group of people. But all of the people who have been involved in Tor are united by a common belief: internet users should have private access to an uncensored web,” the Tor Project’s website reads.

[…]

Source: Revealed: US Military Bought Mass Monitoring Tool That Includes Internet Browsing, Email Data
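
To make the “netflow” term above concrete: these records carry no packet contents at all, just who talked to whom, when, and how much, yet even a trivial roll-up of them profiles a host’s behaviour. A toy Python sketch with invented records (the field names mirror common netflow exports; the values are made up):

```python
# Toy illustration of why netflow records are revealing even without payloads:
# aggregate "who talked to whom and how much" per pair of hosts. All sample
# values below are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str      # originating host
    dst_ip: str      # server it talked to
    dst_port: int
    bytes_sent: int
    start_ts: int    # unix timestamp

flows = [
    FlowRecord("198.51.100.7", "203.0.113.10", 443, 52_000, 1_666_000_000),
    FlowRecord("198.51.100.7", "192.0.2.25", 443, 4_100, 1_666_000_350),
    FlowRecord("198.51.100.7", "203.0.113.10", 443, 61_500, 1_666_003_600),
]

# Even this trivial roll-up shows which services a host keeps returning to,
# and how heavily it uses them.
traffic = defaultdict(int)
for f in flows:
    traffic[(f.src_ip, f.dst_ip)] += f.bytes_sent

for (src, dst), total in sorted(traffic.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {total} bytes")
```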

Meta sued for allegedly secretly tracking iPhone users

Meta was sued on Wednesday for alleged undisclosed tracking and data collection in its Facebook and Instagram apps on Apple iPhones.

The lawsuit [PDF], filed in a US federal district court in San Francisco, claims that the two applications incorporate use of their own browser, known as a WKWebView, that injects JavaScript code to gather data that would otherwise be unavailable if the apps opened links in the default standalone browser designated by iPhone users.

The claim is based on the findings of security researcher Felix Krause, who last month published an analysis of how WKWebView browsers embedded within native applications can be abused to track people and violate privacy expectations.

“When users click on a link within the Facebook app, Meta automatically directs them to the in-app browser it is monitoring instead of the smartphone’s default browser, without telling users that this is happening or they are being tracked,” the complaint says.

“The user information Meta intercepts, monitors and records includes personally identifiable information, private health details, text entries, and other sensitive confidential facts.”

[…]

However, Meta’s use of in-app browsers in its mobile apps predates Apple’s ATT initiative. Apple introduced WKWebView at its 2014 Worldwide Developer Conference as a replacement for its older UIWebView (UIKit) and WebView (AppKit) frameworks. That was in iOS 8. With the arrival of iOS 9, as described at WWDC 2015, there was another option, SFSafariViewController. Presently this is what’s recommended for displaying a website within an app.

And the company’s use of in-app browsers has elicited concern before.

“On top of limited features, WebViews can also be used for effectively conducting intended man-in-the-middle attacks, since the IAB [in-app browser] developer can arbitrarily inject JavaScript code and also intercept network traffic,” wrote Thomas Steiner, a Google developer relations engineer, in a blog post three years ago.

In his post, Steiner emphasizes that he didn’t see anything unusual like a “phoning home” function.

Krause has taken a similar line, noting only the potential for abuse. In a follow-up post, he identified additional data gathering code.

He wrote, “Instagram iOS subscribes to every tap on any button, link, image or other component on external websites rendered inside the Instagram app” and also “subscribes to every time the user selects a UI element (like a text field) on third party websites rendered inside the Instagram app.”

However, “subscribes” simply means that analytics data is accessible within the app, without offering any conclusion about what, if anything, is done with the data. Krause also points out that since 2020, Apple has offered a framework called WKContentWorld that isolates the web environment from scripts. Developers using an in-app browser can implement WKContentWorld in order to make scripts undetectable from the outside, he said.

Whatever Meta is doing internally with its in-app browser, and even given the company’s insistence its injected script validates ATT settings, the plaintiffs suing the company argue there was no disclosure of the process.

“Meta fails to disclose the consequences of browsing, navigating, and communicating with third-party websites from within Facebook’s in-app browser – namely, that doing so overrides their default browser’s privacy settings, which users rely on to block and prevent tracking,” the complaint says. “Similarly, Meta conceals the fact that it injects JavaScript that alters external third-party websites so that it can intercept, track, and record data that it otherwise could not access.”

[…]

Source: Meta sued for allegedly secretly tracking iPhone users • The Register
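
The mechanism at issue is simple to picture: an in-app browser sits between the user and the page, so it can rewrite the HTML it renders before anyone sees it. A deliberately toy Python sketch of that rewriting step follows; it is a plain string rewrite for illustration only, with no relation to Meta’s actual code, which works through WKWebView script APIs.

```python
# Toy illustration of what "injecting JavaScript into third-party pages" means:
# an intermediary that renders the page splices its own script into the HTML,
# invisibly to both the user and the site. Purely conceptual; not Meta's code.
TRACKER_JS = "<script>/* report taps, form input, etc. back to the host app */</script>"

def inject(html: str) -> str:
    # Insert the tracking script just before the closing </body> tag.
    if "</body>" in html:
        return html.replace("</body>", TRACKER_JS + "</body>", 1)
    return html + TRACKER_JS

page = "<html><body><h1>Some third-party site</h1></body></html>"
print(inject(page))
```

This is also why SFSafariViewController, mentioned above, avoids the problem: the host app cannot script or observe pages rendered there.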

Google now lets you request the removal of search results that contain personal data

Google is releasing a tool that makes it easier to remove search results containing your address, phone number and other personally identifiable information, 9to5Google has reported. It first revealed the “results about you” feature at I/O 2022 in May, describing it as a way to “help you easily control whether your personally-identifiable information can be found in Search results.”

If you see a result with your phone number, home address or email, you can click on the three-dot menu at the top right. That opens the usual “About this result” panel, but it now contains a new “Remove result” option at the bottom of the screen. A dialog states that if the result contains one of those three things, “we can review your request more quickly.”

[…]

“It’s important to note that when we receive removal requests, we will evaluate all content on the web page to ensure that we’re not limiting the availability of other information that is broadly useful, for instance in news articles. And of course, removing contact information from Google Search doesn’t remove it from the web, which is why you may wish to contact the hosting site directly, if you’re comfortable doing so.”

[…]

Source: Google now lets you request the removal of search results that contain personal data | Engadget

Germany’s blanket data retention law is illegal, EU top court says

Germany’s general data retention law violates EU law, Europe’s top court ruled on Tuesday, dealing a blow to member states banking on blanket data collection to fight crime and safeguard national security.

The law may only be applied in circumstances where there is a serious threat to national security defined under very strict terms, the Court of Justice of the European Union (CJEU) said.

The ruling comes after major attacks by Islamist militants in France, Belgium and Britain in recent years.

Governments argue that access to data, especially that collected by telecoms operators, can help prevent such incidents, while operators and civil rights activists oppose such access.

The latest case was triggered after Deutsche Telekom (DTEGn.DE) unit Telekom Deutschland and internet service provider SpaceNet AG challenged Germany’s data retention law arguing it breached EU rules.

The German court subsequently sought the advice of the CJEU which said such data retention can only be allowed under very strict conditions.

“The Court of Justice confirms that EU law precludes the general and indiscriminate retention of traffic and location data, except in the case of a serious threat to national security,” the judges said.

“However, in order to combat serious crime, the member states may, in strict compliance with the principle of proportionality, provide for, inter alia, the targeted or expedited retention of such data and the general and indiscriminate retention of IP addresses,” they said.

Source: Germany’s blanket data retention law is illegal, EU top court says | Reuters

Excellent work by the court – targeted investigation has been proven to be much more effective than blanket surveillance. And apart from that, blanket surveillance turns your country into an Orwellian nightmare.

DHS built huge database from cellphones, computers seized at border, searchable without a warrant, kept for 15 years

U.S. government officials are adding data from as many as 10,000 electronic devices each year to a massive database they’ve compiled from cellphones, iPads and computers seized from travelers at the country’s airports, seaports and border crossings, leaders of Customs and Border Protection told congressional staff in a briefing this summer.

The rapid expansion of the database and the ability of 2,700 CBP officers to access it without a warrant — two details not previously known about the database — have raised alarms in Congress about what use the government has made of the information, much of which is captured from people not suspected of any crime. CBP officials told congressional staff the data is maintained for 15 years.

[…]

Agents from the FBI and Immigration and Customs Enforcement, another Department of Homeland Security agency, have run facial recognition searches on millions of Americans’ driver’s license photos. They have tapped private databases of people’s financial and utility records to learn where they live. And they have gleaned location data from license-plate reader databases that can be used to track where people drive.

[…]

the revelation that thousands of agents have access to a searchable database without public oversight is a new development in what privacy advocates and some lawmakers warn could be an infringement of Americans’ Fourth Amendment rights against unreasonable searches and seizures.

[…]

CBP officials declined, however, to answer questions about how many Americans’ phone records are in the database, how many searches have been run or how long the practice has gone on, saying it has made no additional statistics available “due to law enforcement sensitivities and national security implications.”

[…]

CBP conducted roughly 37,000 searches of travelers’ devices in the 12 months ending in October 2021, according to agency data, and more than 179 million people traveled that year through U.S. ports of entry. The agency has not given a precise number of how many of those devices had their contents uploaded to the database for long-term review.

[…]

The CBP directive gives officers the authority to look and scroll through any traveler’s device using what’s known as a “basic search,” and any traveler who refuses to unlock their phone for this process can have it confiscated for up to five days.

In a 2018 filing, a CBP official said an officer could access any device, including in cases where they have no suspicion the traveler has done anything wrong, and look at anything that “would ordinarily be visible by scrolling through the phone manually,” including contact lists, calendar entries, messages, photos and videos.

If officers have a “reasonable suspicion” that the traveler is breaking the law or poses a “national security concern,” they can run an “advanced search,” connecting the phone to a device that copies its contents. That data is then stored in the Automated Targeting System database, which CBP officials can search at any time.

Faiza Patel, the senior director of the Liberty and National Security Program at the Brennan Center for Justice, a New York think tank, said the threshold for such searches is so low that the authorities could end up grabbing data from “a lot of people in addition to potential ‘bad guys,’” with some “targeted because they look a certain way or have a certain religion.”

[…]

The CBP directive on device searches was issued several years after a federal appeals court ruled that a forensic copying of a suspect’s hard drive had been “essentially a computer strip search” and said officials’ concerns about crime did “not justify unfettered crime-fighting searches or an unregulated assault on citizens’ private information.”

The Wyden aide also said that the CBP database does not require officers to record the purpose of their search, a common technical safeguard against data-access misuse. CBP officials said all searches are tracked for later audit.

[…]

CBP officials give travelers a printed document saying that the searches are “mandatory,” but the document does not mention that data can be retained for 15 years and that thousands of officials will have access to it.

Officers are also not required to give the document to travelers before the search, meaning that some travelers may not fully understand their rights to refuse the search until after they’ve handed over their phones, the Wyden aide said.

CBP officials did not say which technology they used to capture data from phones and laptops, but federal documents show the agency has previously used forensic tools, made by companies such as Cellebrite and Grayshift, to access devices and extract their contents.

[…]

Source: DHS built huge database from cellphones, computers seized at border – The Washington Post

S.Korea fines Google, Meta billions of won for privacy violations

[…] In a statement, the Personal Information Protection Commission said it fined Google 69.2 billion won ($50 million) and Meta 30.8 billion won ($22 million).

The privacy panel said the firms did not clearly inform service users and obtain their prior consent when collecting and analysing behavioural information to infer their interests or use them for customised advertisements.

[…]

Source: S.Korea fines Google, Meta billions of won for privacy violations | Reuters

A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal, destroyed his digital life with no recourse

It was a Friday night in February 2021. His wife called an advice nurse at their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. The nurse said to send photos so the doctor could review them in advance.

Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.

[…]

the episode left Mark with a much larger problem, one that would cost him more than a decade of contacts, emails and photos, and make him the target of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.

[…]

“There could be tens, hundreds, thousands more of these,” he said.

Given the toxic nature of the accusations, Callas speculated that most people wrongfully flagged would not publicize what had happened.

“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”

Police agreed. Google did not.

[…]

Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse and exploitation.”

Mark was confused at first but then remembered his son’s infection. “Oh, God, Google probably thinks that was child porn,” he thought.

[…]

He filled out a form requesting a review of Google’s decision, explaining his son’s infection. At the same time, he discovered the domino effect of Google’s rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.

[…]

A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation.

Mark didn’t know it, but Google’s review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.

[…]

Cassio was in the middle of buying a house, and signing countless digital documents, when his Gmail account was disabled. He asked his mortgage broker to switch his email address, which made the broker suspicious until Cassio’s real estate agent vouched for him.

[…]

In December, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.

The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Hillard had tried to get in touch with Mark, but his phone number and email address hadn’t worked.

“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Hillard wrote in his report. Police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.

Mark asked if Hillard could tell Google that he was innocent so he could get his account back.

“You have to talk to Google,” Hillard said, according to Mark. “There’s nothing I can do.”

Mark appealed his case to Google again, providing the police report, but to no avail. After getting a notice two months ago that his account was being permanently deleted, Mark spoke with a lawyer about suing Google and how much it might cost.

“I decided it was probably not worth $7,000,” he said.

[…]

False positives, when people are erroneously flagged, are inevitable given the billions of images being scanned. While most people would probably consider that trade-off worthwhile, given the benefit of identifying abused children, Klonick said companies need a “robust process” for clearing and reinstating innocent people who are mistakenly flagged.

“This would be problematic if it were just a case of content moderation and censorship,” Klonick said. “But this is doubly dangerous in that it also results in someone being reported to law enforcement.”

It could have been worse, she said, with a parent potentially losing custody of a child. “You could imagine how this might escalate,” Klonick said.

Cassio was also investigated by police. A detective from the Houston Police department called this past fall, asking him to come into the station.

After Cassio showed the detective his communications with the pediatrician, he was quickly cleared. But he, too, was unable to get his decade-old Google account back, despite being a paying user of Google’s web services.

[…]

Source: A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.
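
Klonick’s point that false positives are “inevitable given the billions of images being scanned” is really a base-rate observation, and a few lines of arithmetic make it concrete. The numbers below are invented purely for illustration; Google does not publish its scanning volume or its classifiers’ error rates.

```python
# Back-of-the-envelope base-rate arithmetic with made-up numbers: even a system
# that is wrong only once per million images produces a steady stream of
# wrongly flagged people when it scans billions of images.
images_scanned_per_year = 10_000_000_000   # hypothetical volume
false_positive_rate = 1e-6                 # hypothetical: 1 error per million images

wrong_flags = images_scanned_per_year * false_positive_rate
print(f"~{wrong_flags:,.0f} innocent images flagged per year")  # ~10,000
```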

Oracle facing class action over ‘brokering’ personal data of 5 billion people

Oracle is the subject of a class-action suit alleging the software giant created a network containing personal information of hundreds of millions of people and sold the data to third parties.

The case [PDF] is being brought by Johnny Ryan, formerly a policy officer at Brave, maker of the privacy-centric browser, and now part of the Irish Council for Civil Liberties (ICCL), who was behind several challenges to Google, Amazon, and Microsoft’s online advertising businesses.

The ICCL claims Oracle has amassed detailed dossiers on 5 billion people, generating $42.4 billion in annual revenue.

The allegations appear to be based, in part, on an Oracle presentation from 2016 in which Oracle CTO and founder Larry Ellison described how data was collected so businesses could predict purchasing patterns among consumers.

Ellison said at the time [1:15 onward]: “It is a combination of real-time looking at all of their social activity, real-time looking at where they are including, micro-locations – and this is scaring the lawyers [who] are shaking their heads and putting their hands over their eyes – knowing how much time you spend in a specific aisle of a specific store and what is in that aisle of a store. As we collect information about consumers and you combine that with their demographic profile, and their past purchasing behavior, we can do a pretty good job of predicting what they’re going to buy next.”

The ICCL claims Oracle’s dossiers about people include names, home addresses, emails, purchases online and in the real world, physical movements in the real world, income, interests and political views, and a detailed account of online activity.

[…]


Source: Oracle facing class action over ‘brokering’ personal data • The Register

Meta fined $402 million in EU over Instagram’s privacy settings for children

Meta has been fined €405 million ($402 million) by the Irish Data Protection Commission for its handling of children’s privacy settings on Instagram, which violated Europe’s General Data Protection Regulation (GDPR). As Politico reports, it’s the second-largest fine to come out of Europe’s GDPR laws, and the third (and largest) fine levied against Meta by the regulator.

A spokesperson for the DPC confirmed the fine, and said additional details about the decision would be available next week. The fine stems from the photo sharing app’s privacy settings on accounts run by children. The DPC had been investigating Instagram over children’s use of business accounts, which made personal data like email addresses and phone numbers publicly visible. The investigation also covered Instagram’s policy of defaulting all new accounts, including teens, to be publicly viewable.

[…]

Source: Meta faces $402 million EU fine over Instagram’s privacy settings for children | Engadget

Major VPN services shut down in India over anti-privacy law

[…]

New rules from India’s Computer Emergency Response Team

India’s Computer Emergency Response Team (CERT) has said that new rules will apply to VPN providers from September 25. These will require services to collect customer names, email addresses, and IP addresses. The data must be retained for at least five years, and handed over to CERT on demand.

This would breach the privacy standards of major VPN services, and be physically impossible for services like NordVPN, which keep no logs as a matter of policy. The company is registered in Panama specifically because there are no data-retention laws there, and no international intelligence sharing.

Major VPN services shut down Indian servers

The Wall Street Journal reports that major VPN services have shut down their Indian servers.

Major global providers of virtual private networks, which let internet users shield their identities online, are shutting down their servers in India to protest new government rules they say threaten their customers’ privacy […]

Such rules are “typically introduced by authoritarian governments in order to gain more control over their citizens,” said a spokeswoman for Nord Security, provider of NordVPN, which has stopped operating its servers in India. “If democracies follow the same path, it has the potential to affect people’s privacy as well as their freedom of speech,” she said […]

Other VPN services that have stopped operating servers in India in recent months are some of the world’s best known. They include U.S.-based Private Internet Access and IPVanish, Canada-based TunnelBear, British Virgin Islands-based ExpressVPN, and Lithuania-based Surfshark.

ExpressVPN said it “refuses to participate in the Indian government’s attempts to limit internet freedom.”

The government’s move “severely undermines the online privacy of Indian residents,” Private Internet Access said.

Customers in India will be able to connect to VPN servers in other countries. This is the same approach taken in Russia and China, where operating servers within those countries would require VPN companies to comply with similar legislation.

[…]

Source: Major VPN services shut down in India over anti-privacy law

FTC Sues Broker Kochava Over Geolocation Data Sales – gave away data on 61m devices for free

[…] Commissioners voted 4-1 this week to bring a suit against Kochava, Inc., which calls itself the “industry leader for mobile app attribution” and sells mobile geo-location data on hundreds of millions of people. The suit accuses the company of violating the FTC Act, and the agency warns that the company’s business practices could easily be used to unmask the locations of vulnerable individuals—including visitors to reproductive health clinics, homeless and domestic violence shelters, places of worship, and addiction recovery centers.

Kochava, which is based in Idaho, sells “customized data feeds” that can be used to identify and track specific phone users, the FTC said in the suit. Kochava collects this data through a variety of means, then repackages it in large datasets to sell to marketers. The datasets include Mobile Advertising IDs, or MAIDs—the unique identifiers for mobile devices used in targeted advertising—as well as timestamped latitude and longitude coordinates for each device (i.e., the approximate location of the user). The data is ostensibly anonymized, but there are well-known ways to de-anonymize it. The suit claims that Kochava is aware of this, as it has allegedly suggested using its data “to map individual devices to households.”

Subscribing to Kochava’s feeds typically requires a hefty fee, but the FTC says that, until at least June, Kochava also granted interested users free access to a sample of the data. This “free sample” apparently included the location data of about 61 million mobile devices. Authorities say that there were “only minimal steps and no restrictions on usage” of this freely offered information.

[…]

Source: FTC Sues Broker Kochava Over Geolocation Data Sales
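
The FTC’s re-identification worry is easy to see given the data shape described above: a device ID plus timestamped coordinates. A toy Python sketch with invented records shows the idea – keep only a device’s overnight pings, and the densest location cluster is, for most people, a home address.

```python
# Toy sketch of why "anonymized" MAID + timestamped lat/long data is easy to
# re-identify: a device's overnight pings cluster around its owner's home.
# All records below are invented; real feeds contain billions of rows.
from collections import Counter
from datetime import datetime, timezone

pings = [
    # (mobile advertising ID, unix timestamp, latitude, longitude)
    ("3f2a...-maid", 1_660_000_000, 43.4917, -112.0330),
    ("3f2a...-maid", 1_660_003_600, 43.4918, -112.0331),
    ("3f2a...-maid", 1_660_040_000, 43.4660, -112.0340),  # daytime, elsewhere
]

def overnight(ts):
    hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
    return hour >= 22 or hour < 6  # crude "asleep at home" window

# Round coordinates to roughly 100 m and count where each device spends its nights.
home_guess = Counter(
    (maid, round(lat, 3), round(lon, 3))
    for maid, ts, lat, lon in pings
    if overnight(ts)
)
print(home_guess.most_common(1))
```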

Australia fines Google $42.5 million over misleading location settings

Google is being ordered to pay A$60 million ($42.5 million) in penalties to Australia’s competition and national consumer law regulator regarding the collection and use of location data on Android phones.

The financial slap on the wrist relates to a period between January 2017 and December 2018 and follows court action by the Australian Competition and Consumer Commission (ACCC).

According to the regulators, Google misled consumers through the “Location History” setting. Some users were told, according to the ACCC, that the setting “was the only Google account setting that affected whether Google collected, kept and used personally identifiable data about their location.”

It was not. Another setting titled “Web & App Activity” also permitted data to be collected by Google. And it allowed the collection of “personally identifiable location data when it was turned on, and that setting was turned on by default,” the ACCC said.

The “misleading representations,” according to the ACCC, breach Australian consumer law and could have been viewed by the users of 1.3 million Google accounts in Australia. The figure is, however, a best estimate. We’re sure Google doesn’t collect telemetry showing where Android users navigate to either.

Privacy issues aside, the data could also be used by Google to target ads to consumers who thought they’d said no to collection.

Google “took remedial steps” and addressed the issues by December 20, 2018, but the damage was done and the ACCC instituted proceedings in October 2019. In April 2021, the Federal Court found that Google LLC (the US entity) and Google Australia Pty Ltd had breached Australian consumer law.

[…]

Google has come under fire from other quarters regarding the obtaining of customer location data without proper consent. A group of US states sued the search giant earlier this year over “dark patterns” in the user interface to get hold of location information. Then there was the whole creepy Street View Wi-Fi harvesting debacle.

[…]

Source: Australia fines Google over misleading location settings • The Register

Ring surveillance camera footage exploited for “funny clip” show

[…] Ring Nation, a new twist on the popular clip show genre, from MGM Television, Live PD producer Big Fish Entertainment and Ring.

The series, which will launch on September 26, will feature viral videos shared by people from their video doorbells and smart home cameras.

It’s a television take on a genre that has been increasingly going viral on social media.

The series will feature clips such as neighbors saving neighbors, marriage proposals, military reunions and silly animals.

[…]

Source: Wanda Sykes To Host Syndicated Viral Video Show Featuring Ring – Deadline

How this is anything other than a really scary way to normalise the constant, low-visibility surveillance enacted by these cameras is a puzzle to me. It turns being spied on from doorways along the street into something funny.

e-HallPass, which Monitors How Long Kids Are in the Bathroom, Is Now in 1,000 American Schools, normalises surveillance

e-HallPass, a digital system that students have to use to request to leave their classroom and which takes note of how long they’ve been away, including to visit the bathroom, has spread into at least a thousand schools around the United States.

The system has some resemblance to the sort of worker monitoring carried out by Amazon, which tracks how long its staff spend in the toilet and penalizes workers for “time off task.” It also highlights how automated tools have led to increased surveillance of students in schools and of employees in workplaces.

“This product is just the latest in a growing number of student surveillance tools—designed to allow school administrators to monitor and control student behavior at scale, on and off campus,”

[…]

increased scrutiny offered by surveillance tools “has been shown to be disproportionately targeted against minorities, recent immigrants, LGBTQ kids,” and other marginalized groups.

[…]

Eduspire, the company that makes e-HallPass, told trade publication EdSurge in March that 1,000 schools use the system. Brian Tvenstrup, president of Eduspire, told the outlet that the company’s biggest obstacle to selling the product “is when a school isn’t culturally ready to make these kinds of changes yet.”

[…]

Admins can then access data collected through the software and view a live dashboard showing details on all passes. e-HallPass can also block meet-ups between certain students and limit the number of passes going to certain locations, the website adds, explicitly mentioning “vandalism and TikTok challenges.” Many of the schools Motherboard identified appear to use e-HallPass specifically on Chromebooks, according to student user guides and similar documents hosted on the schools’ websites, though Eduspire also advertises that it can be used to track students on their personal cell phones.
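
Motherboard doesn’t detail how those restrictions are enforced, but the two features described, blocking meet-ups between flagged students and capping passes to a given location, amount to a couple of checks at pass-request time. A minimal sketch under that reading, with hypothetical data structures that are in no way Eduspire’s code:

    # Hypothetical sketch of the two pass rules described above; not Eduspire's code.
    from dataclasses import dataclass, field

    @dataclass
    class PassSystem:
        location_limits: dict   # location -> max simultaneous passes allowed
        no_meet_pairs: set      # frozensets of student IDs to keep apart
        active: dict = field(default_factory=dict)  # student -> current pass location

        def request_pass(self, student: str, location: str) -> bool:
            # Rule 1: cap the number of simultaneous passes to this location.
            out_now = [s for s, loc in self.active.items() if loc == location]
            if len(out_now) >= self.location_limits.get(location, 1):
                return False
            # Rule 2: deny the pass if a "do not meet" partner is already out.
            for other in self.active:
                if frozenset({student, other}) in self.no_meet_pairs:
                    return False
            self.active[student] = location
            return True

    system = PassSystem(location_limits={"bathroom-2F": 2},
                        no_meet_pairs={frozenset({"s101", "s204"})})
    print(system.request_pass("s101", "bathroom-2F"))  # True
    print(system.request_pass("s204", "bathroom-2F"))  # False: flagged pair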

EdSurge reported that some people had taken to Change.org with a petition to remove the “creepy” system from a specific school. Motherboard found over a dozen similar petitions online, including one regarding Independence High School, signed nearly 700 times, which appears to have been written by a group of students.

[…]

Source: A Tool That Monitors How Long Kids Are in the Bathroom Is Now in 1,000 American Schools

Samsung adds ‘repair mode’ to smartphones

When activated, repair mode prevents a range of behaviors – from casual snooping to outright lifting of personal data – by blocking access to photos, messages, and account information.

The mode provides technicians with the access they require to make a fix, including the apps a user employs. But repairers won’t see user data in apps, so content like photos, texts and emails remains secure.

When users enable repair mode, their device reboots. To exit, the user reboots again after logging in the normal way and turning the setting off.

Samsung said it is rolling out repair mode via software update, initially on the Galaxy S21 series within South Korea, with more models, and perhaps locations, getting the functionality over time.

Samsung has not explained how the feature works. Android devices already offer the chance to establish accounts for different users, so perhaps Samsung has created a role for repair technicians and made that easier to access.
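
That stock multi-user mechanism is easy to demonstrate from a connected computer, and the sketch below does only that: the pm and am commands are standard Android tooling, while the throwaway “Repair” profile is merely an assumption about how such a mode could be built, since Samsung hasn’t said.

    # Demonstrates Android's built-in multi-user support, the mechanism the
    # article speculates Samsung may be building on. Requires adb and a device
    # with USB debugging enabled; the "Repair" user is purely illustrative.
    import re
    import subprocess

    def adb(*args) -> str:
        return subprocess.run(["adb", "shell", *args],
                              capture_output=True, text=True, check=True).stdout

    # A secondary user starts with an empty profile: none of the owner's photos,
    # messages, or signed-in accounts are visible inside it.
    out = adb("pm", "create-user", "Repair")        # e.g. "Success: created user id 10"
    user_id = re.search(r"id (\d+)", out).group(1)

    adb("am", "switch-user", user_id)  # hand the phone over in the blank profile
    # ... the repair happens here: the technician can run apps but sees no owner data ...
    adb("am", "switch-user", "0")      # switch back to the owner (user 0)
    adb("pm", "remove-user", user_id)  # delete the throwaway profile and its data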

Most repair technicians won’t want to view or steal a customer’s personal data – but it does happen.

Apple was forced to pay millions last year after two iPhone repair contractors allegedly stole and posted a woman’s nudes to the internet. That fiasco was in no way an isolated incident. In 2019 a Genius Bar employee allegedly texted himself explicit images taken from an iPhone he repaired and was subsequently fired.

[…]

Source: Samsung adds ‘repair mode’ to South Korean smartphone • The Register

Twitter warns of ‘record highs’ in account data requests

Twitter has published its 20th transparency report, and the details still aren’t reassuring to those concerned about abuses of personal info. The social network saw “record highs” in the number of account data requests during the July-December 2021 reporting period, with 47,572 legal demands on 198,931 accounts. The media in particular faced much more pressure. Government demands for data from verified news outlets and journalists surged 103 percent compared to the last report, with 349 accounts under scrutiny.

The largest slice of requests targeting the news industry came from India (114), followed by Turkey (78) and Russia (55). Governments succeeded in withholding 17 tweets.

As in the past, US demands represented a disproportionately large chunk of the overall volume. The country accounted for 20 percent of all worldwide account info requests, and those requests covered 39 percent of all specified accounts. Russia is still the second-largest requester with 18 percent of volume, even if its demands dipped 20 percent during the six-month timeframe.
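
For a sense of scale, those percentages can be turned back into rough absolute figures using the global totals above (approximate, since the report’s percentages are rounded):

    # Rough absolute figures implied by the reported totals and percentages.
    total_requests, total_accounts = 47_572, 198_931

    us_requests = round(total_requests * 0.20)  # ~9,514 US information requests
    us_accounts = round(total_accounts * 0.39)  # ~77,583 accounts named in US requests
    ru_requests = round(total_requests * 0.18)  # ~8,563 requests from Russia

    print(us_requests, us_accounts, ru_requests)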

The company said it was still denying or limiting access to info when possible. It denied 31 percent of US data requests, and either narrowed or shut down 60 percent of global demands. Twitter also opposed 29 civil attempts to identify anonymous US users, citing First Amendment reasons. It sued in two of those cases and has so far prevailed in one. There hasn’t been much success in reporting on national security-related requests in the US, however, and Twitter is still hoping to win an appeal that would let it share more details.

[…]

Source: Twitter warns of ‘record highs’ in account data requests | Engadget

Records reveal the scale of Homeland Security’s phone location data purchases

Investigators raised alarm bells when they learned Homeland Security bureaus were buying phone location data to effectively bypass the Fourth Amendment requirement for a search warrant, and now it’s clearer just how extensive those purchases were. TechCrunch notes the American Civil Liberties Union has obtained records linking Customs and Border Protection, Immigration and Customs Enforcement and other DHS divisions to purchases of roughly 336,000 phone location points from the data broker Venntel. The info represents just a “small subset” of raw data from the southwestern US, and includes a burst of 113,654 points collected over just three days in 2018.

The dataset, delivered through a Freedom of Information Act request, also outlines the agencies’ attempts to justify the bulk data purchases. Officials maintained that users voluntarily offered the data, and that it included no personally identifying information. As TechCrunch explains, though, that’s not necessarily accurate. Phone owners aren’t necessarily aware they opted in to location sharing, and likely didn’t realize the government was buying that data. Moreover, the data was still tied to specific devices — it wouldn’t have been difficult for agents to link positions to individuals.

Some Homeland Security workers expressed internal concerns about the location data. One senior director warned that the Office of Science and Technology bought Venntel info without getting a necessary Privacy Threshold Assessment. At one point, the department even halted all projects using Venntel data after learning that key legal and privacy questions had gone unanswered.

More details could be forthcoming, as Homeland Security is still expected to provide more documents in response to the FOIA request. We’ve asked Homeland Security and Venntel for comment. However, the ACLU report might fuel legislative efforts to ban these kinds of data purchases, including the Senate’s bipartisan Fourth Amendment is Not For Sale Act as well as the more recently introduced Health and Location Data Protection Act.

Source: Records reveal the scale of Homeland Security’s phone location data purchases | Engadget