Should We Use Search History for Credit Scores? IMF Says Yes

With more services than ever collecting your data, it’s easy to start asking why anyone should care about most of it. This is why. Because people start having ideas like this.

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions.

At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system faces a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: tech companies have greater access to soft information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

[…]

But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down.

The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft data into credit analysis, and they do little to explain how it might work in practice. The paper isn’t long, and it’s worth a read just to wrap your mind around some of the notions of fintech’s future and why everyone seems to want in on the payments game.

As it is, getting the really fine soft-data points would probably require companies like Facebook and Apple to loosen up their standards on linking unencrypted information with individual accounts. How they might share information with other institutions would be its own can of worms.

[…]

Yes, the idea of every move you make online feeding into your credit score is creepy. It may not even be possible in the near future. The IMF researchers stress that “governments should follow and carefully support the technological transition in finance. It is important to adjust policies accordingly and stay ahead of the curve.” When’s the last time a government did any of that?

Source: Should We Use Search History for Credit Scores? IMF Says Yes

Secret Agents Implicated In The Poisoning Of Opposition Leader Alexey Navalny Identified Thanks To Russia’s Black Market In Everybody’s Personal Data

Back in August, the Russian opposition leader Alexei Navalny was poisoned on a flight to Moscow. Despite initial doubts — and the usual denials by the Russian government that Vladimir Putin was involved — everyone assumed it had been carried out by the country’s FSB, successor to the KGB. Remarkable work by the open source intelligence site Bellingcat, which Techdirt first wrote about in 2014, has now established beyond reasonable doubt that FSB agents were involved:

A joint investigation between Bellingcat and The Insider, in cooperation with Der Spiegel and CNN, has discovered voluminous telecom and travel data that implicates Russia’s Federal Security Service (FSB) in the poisoning of the prominent Russian opposition politician Alexey Navalny. Moreover, the August 2020 poisoning in the Siberian city of Tomsk appears to have happened after years of surveillance, which began in 2017 shortly after Navalny first announced his intention to run for president of Russia.

That’s hardly a surprise. Perhaps more interesting for Techdirt readers is the story of how Bellingcat pieced together the evidence implicating Russian agents. The starting point was finding passengers who booked similar flights to those that Navalny took as he moved around Russia, usually earlier ones to ensure they arrived in time but without making their shadowing too obvious. Once Bellingcat had found some names that kept cropping up too often to be a coincidence, the researchers were able to draw on a unique feature of the Russian online world:

Due to porous data protection measures in Russia, it only takes some creative Googling (or Yandexing) and a few hundred euros worth of cryptocurrency to be fed through an automated payment platform, not much different than Amazon or Lexis Nexis, to acquire telephone records with geolocation data, passenger manifests, and residential data. For the records contained within multi-gigabyte database files that are not already floating around the internet via torrent networks, there is a thriving black market to buy and sell data. The humans who manually fetch this data are often low-level employees at banks, telephone companies, and police departments. Often, these data merchants providing data to resellers or direct to customers are caught and face criminal charges. For other batches of records, there are automated services either within websites or through bots on the Telegram messaging service that entirely circumvent the necessity of a human conduit to provide sensitive personal data.
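The first step Bellingcat describes, finding names that crop up on flights shadowing Navalny’s itinerary too often to be coincidence, is essentially a manifest cross-reference. A minimal sketch of that idea, with entirely invented flight numbers and passenger names:

```python
from collections import Counter

def recurring_passengers(target_flights, manifests, min_overlap=3):
    """Count how often each passenger appears on manifests of flights
    shadowing the target's itinerary, and flag names whose overlap
    is too frequent to be a coincidence."""
    counts = Counter()
    for flight in target_flights:
        for name in manifests.get(flight, []):
            counts[name] += 1
    return {name: n for name, n in counts.items() if n >= min_overlap}

# Hypothetical data: flight IDs and names are invented for illustration.
manifests = {
    "SU1460": ["A. Petrov", "I. Sidorov", "M. Kuznetsov"],
    "SU2030": ["A. Petrov", "O. Volkova"],
    "S7-176": ["A. Petrov", "I. Sidorov", "D. Orlov"],
}
print(recurring_passengers(["SU1460", "SU2030", "S7-176"], manifests))
# → {'A. Petrov': 3}
```

The real work, of course, was in obtaining the manifests at all and in unmasking the aliases behind the recurring names.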

The process of using these leaked resources to identify the other agents involved in the surveillance and poisoning of Navalny (who naturally used false names when booking planes and cars) and to establish their real identities is discussed in fascinating detail on the Bellingcat site. But the larger point here is that strong privacy protections are good not just for citizens, but for governments too. As the Bellingcat researchers put it:

While there are obvious and terrifying privacy implications from this data market, it is clear how this environment of petty corruption and loose government enforcement can be turned against Russia’s security service officers.

As well as providing Navalny with confirmation that the Russian government at the highest levels was probably behind his near-fatal poisoning, this latest Bellingcat analysis also achieves something else that is hugely important. It has given privacy advocates a really powerful argument for why governments — even the most retrogressive and oppressive — should be passing laws to protect the personal data of every citizen effectively. Because if they don’t, clever people like Bellingcat will be able to draw on the black market resources that inevitably spring up, to reveal lots of things those in power really don’t want exposed.

Source: Secret Agents Implicated In The Poisoning Of Opposition Leader Alexey Navalny Identified Thanks To Russia’s Black Market In Everybody’s Personal Data | Techdirt

France fines Google $120M and Amazon $42M for dropping tracking cookies without consent

France’s data protection agency, the CNIL, has slapped Google and Amazon with fines for dropping tracking cookies without consent.

Google has been hit with a total of €100 million ($120 million) for dropping cookies on Google.fr, and Amazon with €35 million (~$42 million) for doing so on the Amazon.fr domain, under the penalty notices issued today.

The regulator carried out investigations of the websites over the past year and found tracking cookies were automatically dropped when a user visited the domains, in breach of the country’s Data Protection Act.

In Google’s case, the CNIL found three consent violations related to the dropping of non-essential cookies.

“As this type of cookies cannot be deposited without the user having expressed his consent, the restricted committee considered that the companies had not complied with the requirement provided for by article 82 of the Data Protection Act and the prior collection of the consent before the deposit of non-essential cookies,” it writes in the penalty notice [which we’ve translated from French].

Amazon was found to have made two violations, per the CNIL penalty notice.

The CNIL also found that the information about the cookies provided to site visitors was inadequate — noting that a banner displayed by Google did not provide specific information about the tracking cookies the Google.fr site had already dropped.

Under local French (and European) law, site users should have been clearly informed before the cookies were dropped and asked for their consent.

In Amazon’s case its French site displayed a banner informing arriving visitors that they agreed to its use of cookies. CNIL said this did not comply with transparency or consent requirements — since it was not clear to users that the tech giant was using cookies for ad tracking. Nor were users given the opportunity to consent.

The law on tracking cookie consent has been clear in Europe for years. But in October 2019 a CJEU ruling further clarified that consent must be obtained prior to storing or accessing non-essential cookies. As we reported at the time, sites that failed to ask for consent to track were risking a big fine under EU privacy laws.
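The rule the CJEU clarified is simple to state: non-essential cookies must not be set until the visitor actively consents, while strictly necessary ones (like a session cookie) are exempt. A minimal server-side sketch of that gating, with hypothetical cookie names:

```python
from http.cookies import SimpleCookie

# Hypothetical: cookies the site genuinely needs to function.
ESSENTIAL = {"session_id"}

def build_set_cookie_headers(requested, consent_given):
    """Emit Set-Cookie header values only for essential cookies until
    the visitor has actively consented; non-essential (e.g. ad-tracking)
    cookies are withheld, as the consent rule requires."""
    cookie = SimpleCookie()
    for name, value in requested.items():
        if name in ESSENTIAL or consent_given:
            cookie[name] = value
    return [morsel.OutputString() for morsel in cookie.values()]

# Before consent: only the session cookie is set, no tracker.
print(build_set_cookie_headers(
    {"session_id": "abc", "ad_tracker": "xyz"}, consent_given=False))
# → ['session_id=abc']
```

What CNIL found was the inverse of this: trackers dropped automatically on arrival, before any consent was expressed.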

Source: France fines Google $120M and Amazon $42M for dropping tracking cookies without consent | TechCrunch

‘Save Europe from Software Patents’, Urges Nonprofit FFII – DE is trying for 3rd time using underhanded sneaky tactics

Long-time Slashdot reader zoobab shares this update about the long-standing Foundation for a Free Information Infrastructure, a Munich-based non-profit opposing ratification of a “Unified Patent Court” by Germany: The FFII is crowdfunding a constitutional complaint in Germany against the third attempt to impose software patents in Europe, calling on all software companies, independent software developers and FLOSS authors to donate.

The Unitary Patent and its Court will promote patent trolls, without any appeal possible to the European Court of Justice, which won’t be able to rule on patent law, and software patents in particular. The FFII also says that the proposed court system will be more expensive for small companies than the current national court system.
The stakes are high, so the FFII writes that it is anticipating some tricky counter-maneuvering: stopping the UPC in Germany will be enough to kill the UPC for the whole of Europe. The German government believes it can ratify before the end of the year, as it considers the UK still a member of the EU until 31st December, and the agenda of the next votes has been designed on purpose to ratify the UPC before year’s end. The FFII expects dirty agenda tricks and political hacks to declare the treaty “into force” and dismiss constitutional complaints while the presence of the UK is still problematic.

Source: ‘Save Europe from Software Patents’, Urges Nonprofit FFII – Slashdot

These have been batted off the table before and for very good reason.

TSA Oversight Says Agency’s Suspicionless Surveillance Program Is Worthless And The TSA Can’t Prove It Isn’t

The TSA’s “Quiet Skies” program continues to suffer under scrutiny. When details first leaked out about the TSA’s suspicionless surveillance program, even the air marshals tasked with tailing non-terrorists all over the nation seemed concerned. Marshals questioned the “legality and validity” of the program that sent them after people no government agency had conclusively tied to terrorist organizations or activities. Simply changing flights in the wrong country was enough to initiate the process.

First, the TSA lost the support of the marshals. Then it lost itself. The TSA admitted during a Congressional hearing that it had trailed over 5,000 travelers (in less than four months!) but had yet to turn up even a single terrorist. Nonetheless, it stated it would continue to trail thousands of people a year, presumably in hopes of preventing another zero terrorist attacks.

Then it lost the Government Accountability Office. The GAO’s investigation of the program contained more investigative activity than the program itself. According to its report, the TSA felt surveillance was good but measuring the outcome was bad. When you’re trailing 5,000 people and stopping zero terrorists, the less you know, the better. Not being able to track effectiveness appeared to be a feature of “Quiet Skies,” rather than a bug.

Now it’s lost the TSA’s Inspector General. The title of the report [PDF] underplays the findings, stating the obvious while also understating the obvious: TSA Needs to Improve Management of the Quiet Skies Program. A good alternative title would be “TSA Needs to Scrap the Quiet Skies Program Until it Can Come Up with Something that Might Actually Stop Terrorists.”

I mean…

TSA did not properly plan, implement, and manage the Quiet Skies program to meet the program’s mission of mitigating the threat to commercial aviation posed by higher risk passengers.

In slightly more detail, the TSA did nothing to set up the program correctly or ensure it actually worked. The IG says the TSA never developed performance goals or other metrics to gauge the effectiveness of the suspicionless surveillance. It also ignored its internal guidance to more effectively deploy its ineffective program.

Here’s why:

This occurred because TSA lacked sufficient, centralized oversight to ensure the Quiet Skies program operated as intended.

[…]

Source: TSA Oversight Says Agency’s Suspicionless Surveillance Program Is Worthless And The TSA Can’t Prove It Isn’t | Techdirt

Facebook crushed rivals to maintain an illegal monopoly, the entire United States yells in Zuckerberg’s face

Facebook illegally crushed its competition and continues to do so to this day to maintain its monopoly, according to a lawsuit filed on Wednesday by the attorneys general of no fewer than 46 US states plus Guam and DC.

The lawsuit alleges that the social media giant “illegally acquired competitors in a predatory manner and cut services to smaller threats – depriving users from the benefits of competition and reducing privacy protections and services along the way – all in an effort to boost its bottom line through increased advertising revenue.”

America’s consumer watchdog the FTC is also suing the antisocial network in a parallel action, and making the same basic allegations: that Facebook has been “illegally maintaining its personal social networking monopoly through a years-long course of anticompetitive conduct.”

It’s been a long time coming, but the allegedly privacy-invading, competition-crushing Zuckerberg spin machine that is Facebook has finally been taken on by the United States.

The action is being led by New York’s Attorney General Letitia James, and she wasn’t holding back in her declaration of legal war. “For nearly a decade, Facebook has used its dominance and monopoly power to crush smaller rivals and snuff out competition, all at the expense of everyday users,” she said. “Today, we are taking action to stand up for the millions of consumers and many small businesses that have been harmed by Facebook’s illegal behavior.”

She also highlighted the biggest complaint against Facebook by its users, a complaint that has been commonplace for nearly a decade, that it has made “billions by converting personal data into a cash cow.”

[…]

The 123-page lawsuit [PDF] dives into how what was once just a website among many others became an online monster devouring anything in its path. “Facebook illegally maintains that monopoly power by deploying a buy-or-bury strategy that thwarts competition and harms both users and advertisers. Facebook’s illegal course of conduct has been driven, in part, by fear that the company has fallen behind in important new segments and that emerging firms were ‘building networks that were competitive with’ Facebook’s and could be ‘very disruptive to’ the company’s dominance,” the lawsuit stated.

It quotes CEO Mark Zuckerberg directly and notes that the Silicon Valley goliath would ruthlessly buy up companies in order to “build a competitive moat” or “neutralize a competitor” in its bid for dominance. And notes that Facebook has “coupled its acquisition strategy with exclusionary tactics that snuffed out competitive threats and sent the message to technology firms that, in the words of one participant, if you stepped into Facebook’s turf or resisted pressure to sell, Zuckerberg would go into ‘destroy mode’ subjecting your business to the ‘wrath of Mark.’ As a result, Facebook has chilled innovation, deterred investment, and forestalled competition in the markets in which it operates, and it continues to do so.”

The lawsuit is a much tighter and angrier indictment of Facebook than a similar one lodged against Google in October by the Department of Justice. It still relies on traditional antitrust arguments, however, rather than trying to break new ground to deal with the modern internet era.

[…]

Source: Facebook crushed rivals to maintain an illegal monopoly, the entire United States yells in Zuckerberg’s face • The Register

I have been talking about this since the beginning of 2019 and it’s wonderful to see the tsunami of action happening now

Proposed U.S. Law Could Slap Twitch Streamers With Felonies For Broadcasting Copyrighted Material

According to Politico offshoot Protocol, the felony streaming proposal is the work of Republican senator Thom Tillis, who has backed similar proposals previously. It is more or less exactly what it sounds like: A proposal to turn unauthorized commercial streaming of copyrighted material—progressive policy publication The American Prospect specifically points to examples like “an album on YouTube, a video clip on Twitch, or a song in an Instagram story”—into a felony offense with a possible prison sentence. Currently, such violations, no matter how severe, are considered misdemeanors rather than felonies, because the law regards streaming as a public performance. With Twitch currently in the crosshairs of the music industry, such a change would turn up the heat on streamers and Twitch even higher—perhaps to an untenable degree. Other platforms, like YouTube, would almost certainly suffer as well.

“A felony streaming bill would likely be a chill on expression,” Katharine Trendacosta, associate director of policy and activism with the Electronic Frontier Foundation, told The American Prospect. “We already see that it’s hard enough in just civil copyright and the DMCA for people to feel comfortable asserting their rights. The chance of a felony would impact both expression and innovation.”

According to Protocol, House and Senate Judiciary Committees have agreed to package the streaming felony proposal with other controversial provisions that include the CASE act, which would establish a new court-like entity within the U.S. Copyright Office to resolve copyright disputes, and the Trademark Modernization Act, which would give the U.S. Patent and Trademark Office more flexibility to crack down on illegitimate claims from foreign countries.

Alongside the felony streaming proposal, these provisions have drawn ire from civil rights groups, digital rights nonprofits, and companies including the aforementioned Electronic Frontier Foundation, the Internet Archive, the American Library Association, and the Center for Democracy & Technology. Collectively, these groups and others penned a letter to the U.S. Senate last week.

[…]

Source: Proposed U.S. Law Could Slap Twitch Streamers With Felonies For Broadcasting Copyrighted Material

It’s incredible that not only does copyright stifle competition, but it allows a creator to create something once, get lucky, and then sit on their arse for the rest of their life – and their children’s lives – doing sweet fuck all and raking in dosh. And these laws get stronger and stronger for people who do pretty much nothing.

As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ – PS is being defanged though

The slightly creepy “Productivity Score” may not be all that’s in store for Microsoft 365 users, judging by a trawl of Redmond’s patents.

One that has popped up recently concerns a “Meeting Insight Computing System“, spotted first by GeekWire, created to give meetings a quality score with a view to improving upcoming get-togethers.

It all sounds innocent enough until you read about the requirement for “quality parameters” to be collected from “meeting quality monitoring devices”, which might give some pause for thought.

Productivity Score relies on metrics captured within Microsoft 365 to assess how productive a company and its workers are. Metrics include the take-up of messaging platforms versus email. And though Microsoft has been quick to insist the motives behind the tech are pure, others have cast more of a jaundiced eye over the technology.

[…]

Meeting Insights would take things further by plugging data from a variety of devices into an algorithm in order to score the meeting. Sampling of environmental data such as air quality and the like is all well and good, but proposed sensors such as “a microphone that may, for instance, detect speech patterns consistent with boredom, fatigue, etc” as well as measuring other metrics, such as how long a person spends speaking, could also provide data to be stirred into the mix.

And if that doesn’t worry attendees, how about some more metrics to measure how focused a person is? Are they taking care of emails, messaging or enjoying a surf of the internet when they should be paying attention to the speaker? Heck, if one is taking data from a user’s computer, one could even consider the physical location of the device.
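The patent sketches the idea of folding "quality parameters" from those monitoring devices into a single meeting score, without committing to a formula. A purely hypothetical sketch of what such a scorer could look like — the parameter names and weights here are invented, not taken from the patent:

```python
# Hypothetical weights: positive parameters raise the score,
# negative ones (boredom, one person dominating) lower it.
WEIGHTS = {
    "air_quality": 0.2,        # environmental sensor reading
    "boredom_speech": -0.4,    # microphone-detected boredom/fatigue
    "speaker_dominance": -0.2, # share of time one person speaks
    "on_topic_ratio": 0.6,     # attention/focus estimate
}

def meeting_score(params):
    """Fold normalized quality parameters (each 0..1) into a single
    meeting score via a weighted sum, clamped to the range 0..100."""
    raw = sum(WEIGHTS[k] * params.get(k, 0.0) for k in WEIGHTS)
    return max(0.0, min(100.0, 50.0 + 50.0 * raw))

print(meeting_score({"air_quality": 0.8, "boredom_speech": 0.5,
                     "speaker_dominance": 0.9, "on_topic_ratio": 0.7}))
```

Even this toy version makes the privacy problem concrete: every input on the negative side of the ledger is a behavioural measurement of an identifiable person in the room.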

[…]

Talking to The Reg, one privacy campaigner who asked to remain anonymous said of tools such as Productivity Score and the Meeting Insight Computing System patent: “There is a simple dictum in privacy: you cannot lose data you don’t have. In other words, if you collect it you have to protect it, and that sort of data is risky to start with.

“Who do you trust? The correct answer is ‘no one’.”

Source: As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ • The Register

Since then, Microsoft has said it will remove user names from the ‘Productivity Score’ feature after a privacy backlash (GeekWire):

Microsoft says it will make changes in its new Productivity Score feature, including removing the ability for companies to see data about individual users, to address concerns from privacy experts that the tech giant had effectively rolled out a new tool for snooping on workers.

“Going forward, the communications, meetings, content collaboration, teamwork, and mobility measures in Productivity Score will only aggregate data at the organization level—providing a clear measure of organization-level adoption of key features,” wrote Jared Spataro, Microsoft 365 corporate vice president, in a post this morning. “No one in the organization will be able to use Productivity Score to access data about how an individual user is using apps and services in Microsoft 365.”

The company rolled out its new “Productivity Score” feature as part of Microsoft 365 in late October. It gives companies data to understand how workers are using and adopting different forms of technology. It made headlines over the past week as reports surfaced that the tool lets managers see individual user data by default.

As originally rolled out, Productivity Score turned Microsoft 365 into a “full-fledged workplace surveillance tool,” wrote Wolfie Christl of the independent Cracked Labs digital research institute in Vienna, Austria. “Employers/managers can analyze employee activities at the individual level (!), for example, the number of days an employee has been sending emails, using the chat, using ‘mentions’ in emails etc.”

The initial version of the Productivity Score tool allowed companies to see individual user data. (Screenshot via YouTube)

Spataro wrote this morning, “We appreciate the feedback we’ve heard over the last few days and are moving quickly to respond by removing user names entirely from the product. This change will ensure that Productivity Score can’t be used to monitor individual employees.”
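The fix Spataro describes — reporting only organization-level adoption, never per-user activity — amounts to a rollup that discards user identifiers before anything reaches a dashboard. A minimal sketch of that shape of aggregation (the event format and names are invented for illustration):

```python
from collections import defaultdict

def org_level_rollup(events):
    """Aggregate per-user activity events into organization-level
    feature-adoption counts, discarding user identifiers so the
    output cannot be traced back to an individual employee."""
    users_by_feature = defaultdict(set)
    for user, feature in events:
        users_by_feature[feature].add(user)  # dedupe users per feature
    # Only counts survive; the user names are dropped here.
    return {f: len(users) for f, users in users_by_feature.items()}

events = [("alice", "teams_chat"), ("bob", "teams_chat"),
          ("alice", "mentions"), ("alice", "teams_chat")]
print(org_level_rollup(events))
# → {'teams_chat': 2, 'mentions': 1}
```

The original version, by contrast, exposed the `(user, feature)` pairs themselves — which is exactly the individual-level view Christl objected to.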

Poland’s Bid To Get Upload Filters Taken Out Of The EU Copyright Directive Suddenly Looks Much More Hopeful

One of the biggest defeats for users of the Internet — and for online freedom of expression — was the passage of the EU Copyright Directive last year. The law was passed using a fundamentally dishonest argument that it did not require upload filters, because they weren’t explicitly mentioned in the text. As a result, supporters of the legislation claimed, platforms would be free to use other technologies that did not threaten freedom of speech in the way that automated upload filters would do. However, as soon as the law was passed, countries like France said that the only way to implement Article 17 (originally Article 13) was through upload filters, and copyright companies started pushing for legal memes to be blocked because they now admitted that upload filters were “practically unworkable“.

This dishonesty may come back to bite supporters of the law. Techdirt reported last August that Poland submitted a formal request for upload filters to be removed from the final text. The EU’s top court, the Court of Justice of the European Union (CJEU), has just held a public hearing on this case, and as the detailed report by Paul Keller makes abundantly clear, there are lots of reasons to be hopeful that Article 17’s upload filters are in trouble from a legal point of view.

The hearing was structured around four questions. Principally, the CJEU wanted to know whether Article 17 meant that upload filters were mandatory. This is a crucial question because the court has found in the past that a general obligation to monitor all user uploads for illegal activities violates the fundamental rights of Internet users and platform operators. This is why proponents of the law insisted that upload filters were not mandatory, but simply one technology that could be applied.

[…]

Poland also correctly pointed out that the alternatives presented by the European institutions, such as fingerprinting, hashing, watermarking, Artificial Intelligence or keyword search, all constitute alternative methods of filtering, but not alternatives to filtering.

This is the point that every expert has been making for years: there are no viable alternatives to upload filters, which means that Article 17 necessarily imposes a general monitoring requirement, something that is not permitted under current EU law. The fact that the Advocate General Øe, who will release his own recommendations on the case early next year, made his comment about the lack of any practical alternative to upload filters is highly significant. During the hearing, representatives of the French and Spanish governments claimed that this doesn’t matter, for the following remarkable reason:

The right to intellectual property should be prioritized over freedom of expression in cases of uncertainty over the legality of user uploads, because the economic damage to copyright-holders from leaving infringements online even for a short period of time would outweigh the damage to freedom of expression of users whose legal uploads may get blocked.

The argument here seems to be that as soon as even a single illegal copy is placed online, it will be copied rapidly and spread around the Internet. But this line of reasoning undermines itself. If placing a single illegal copy online for even a short time really is enough for it to be shared widely, then it only requires a copy to be placed on a site outside the EU’s reach for copies to spread around the entire Internet anyway — because copying is so easy — which makes the speed of the takedown within the EU irrelevant.

[…]

In other words, what seemed at the time like a desperate last attempt by Poland to stop the awful upload filters, with little hope of succeeding, now looks to have a decent chance because of the important general issues it raises — something explored at greater length in a new study written by Reda and others (pdf). That’s not to say that Article 17’s upload filters are dead, but it seems like the underhand methods used to force this legislation through could turn out to be their downfall.

Source: Poland’s Bid To Get Upload Filters Taken Out Of The EU Copyright Directive Suddenly Looks Much More Hopeful | Techdirt

Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score now in 365

Microsoft’s Productivity Score has put in a public appearance in Microsoft 365 and attracted the ire of privacy campaigners and activists.

The Register had already noted the vaguely creepy-sounding technology back in May. The goal of it is to use telemetry captured by the Windows behemoth to track the productivity of an organisation through metrics such as a corporate obsession with interminable meetings or just how collaborative employees are being.

The whole thing sounds vaguely disturbing in spite of Microsoft’s insistence that it was for users’ own good.

As more details have emerged, so have concerns over just how granular the level of data capture is.

Vienna-based researcher (and co-creator of Data Dealer) Wolfie Christl suggested that the new feature “turns Microsoft 365 into a full-fledged workplace surveillance tool.”

Christl went on to claim that the software allows employers to dig into employee activities, checking the usage of email versus Teams and looking into email threads with @mentions. “This is so problematic at many levels,” he noted, adding: “Managers evaluating individual-level employee data is a no go,” and that there was the danger that evaluating “productivity” data can shift power from employees to organisations.

Earlier this year we put it to Microsoft corporate vice president Brad Anderson that employees might find themselves under the gimlet gaze of HR thanks to this data.

He told us: “There is no PII [personally identifiable information] data in there… it’s a valid concern, and so we’ve been very careful that as we bring that telemetry back, you know, we bring back what we need, but we stay out of the PII world.”

Microsoft did concede that there could be granularity down to the individual level although exceptions could be configured. Melissa Grant, director of product marketing for Microsoft 365, told us that Microsoft had been asked if it was possible to use the tool to check, for example, that everyone was online and working by 8 but added: “We’re not in the business of monitoring employees.”

Christl’s concerns are not limited to the Productivity Score dashboard itself, but also regarding what is going on behind the scenes in the form of the Microsoft Graph. The People API, for example, is a handy jumping off point into all manner of employee data.

For its part, Microsoft has continued to insist that Productivity Score is not a stick with which to bash employees. In a recent blog on the matter, the company stated:

To be clear, Productivity Score is not designed as a tool for monitoring employee work output and activities. In fact, we safeguard against this type of use by not providing specific information on individualized actions, and instead only analyze user-level data aggregated over a 28-day period, so you can’t see what a specific employee is working on at a given time. Productivity Score was built to help you understand how people are using productivity tools and how well the underlying technology supports them in this.

In an email to The Register, Christl retorted: “The system *does* clearly monitor employee activities. And they call it ‘Productivity Score’, which is perhaps misleading, but will make managers use it in a way managers usually use tools that claim to measure ‘productivity’.”

He added that Microsoft’s own promotional video for the technology showed a list of clearly identifiable users, which corporate veep Jared Spataro said enabled companies to “find your top communicators across activities for the last four weeks.”

We put Christl’s concerns to Microsoft and asked the company if its good intentions extended to the APIs exposed by the Microsoft Graph.

While it has yet to respond to worries about the APIs, it reiterated that the tool was compliant with privacy laws and regulations, telling us: “Productivity Score is an opt-in experience that gives IT administrators insights about technology and infrastructure usage.”

It added: “Insights are intended to help organizations make the most of their technology investments by addressing common pain points like long boot times, inefficient document collaboration, or poor network connectivity. Insights are shown in aggregate over a 28-day period and are provided at the user level so that an IT admin can provide technical support and guidance.”

Source: Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score • The Register

IRS Contract Allowed It to Search Warrantless Location Database Over 10,000 Times

The IRS was able to query a database of location data quietly harvested from ordinary smartphone apps over 10,000 times, according to a copy of the contract between the IRS and the data provider obtained by Motherboard.

The document provides more insight into what exactly the IRS wanted to do with a tool purchased from Venntel, a government contractor that sells clients access to a database of smartphone movements. The Inspector General is currently investigating the IRS for using the data without a warrant to try to track the location of Americans.

“This contract makes clear that the IRS intended to use Venntel’s spying tool to identify specific smartphone users using data collected by apps and sold onwards to shady data brokers. The IRS would have needed a warrant to obtain this kind of sensitive information from AT&T or Google,” Senator Ron Wyden told Motherboard in a statement after reviewing the contract.

[…]

Venntel sources its location data from gaming, weather, and other innocuous-looking apps. An aide to Senator Ron Wyden, whose office has been investigating the location data industry, previously told Motherboard that officials from Customs and Border Protection (CBP), which has also purchased Venntel products, believe Venntel also obtains location information from the real-time bidding that occurs when advertisers push their adverts into users’ browsing sessions.

One of the new documents says Venntel sources the location information from its “advertising analytics network and other sources.” Venntel is a subsidiary of advertising firm Gravy Analytics.

The data is “global,” according to a document obtained from CBP.

[…]

Source: IRS Could Search Warrantless Location Database Over 10,000 Times

GM launches OnStar Insurance Services – uses your driving data to calculate insurance rate

Andrew Rose, president of OnStar Insurance Services commented: “OnStar Insurance will promote safety, security and peace of mind. We aim to be an industry leader, offering insurance in an innovative way.

“GM customers who have subscribed to OnStar and connected services will be eligible to receive discounts, while also receiving fully-integrated services from OnStar Insurance Services.”

The service has been developed to improve the experience for policyholders who have an OnStar Safety & Security plan, as Automatic Crash Response has been designed to notify an OnStar Emergency-certified Advisor who can send for help.

OnStar Insurance Services is currently working with its insurance carrier partners to remove biased insurance plans by focusing on factors within the customer’s control, such as individual vehicle usage, and by rewarding smart driving habits that benefit road safety.

OnStar Insurance Services plans to provide customers with personalised vehicle care and promote safer driving habits, along with a data-backed analysis of driving behaviour.

Source: General Motors launches OnStar Insurance Services – Reinsurance News

What it doesn’t say is whether the service could raise premiums or deny coverage entirely, how transparent the reward system will be, or what else GM will be doing with your data.

Australia’s spy agencies caught collecting COVID-19 app data

Australia’s intelligence agencies have been caught “incidentally” collecting data from the country’s COVIDSafe contact-tracing app during the first six months of its launch, a government watchdog has found.

The report, published Monday by the Australian government’s inspector general for the intelligence community, which oversees the government’s spy and eavesdropping agencies, said the app data was scooped up “in the course of the lawful collection of other data.”

But the watchdog said that there was “no evidence” that any agency “decrypted, accessed or used any COVID app data.”

Incidental collection is a common term used by spies to describe data that was not deliberately targeted but was swept up as part of a wider collection effort. This kind of collection isn’t accidental, but rather a consequence of, for example, spy agencies tapping into fiber optic cables, which carry an enormous firehose of data. An Australian government spokesperson told one outlet, which first reported the news, that incidental collection can also happen as a result of the “execution of warrants.”

The report did not say when the incidental collection stopped, but noted that the agencies were “taking active steps to ensure compliance” with the law, and that the data would be “deleted as soon as practicable,” without setting a firm date.

For some, the fear that a government spy agency could access COVID-19 contact-tracing data was the worst possible outcome.

[…]

Source: Australia’s spy agencies caught collecting COVID-19 app data | TechCrunch

Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out

Amazon is close to launching Sidewalk – its ad-hoc wireless network for smart-home devices that taps into people’s Wi-Fi – and it is pretty much an opt-out affair.

The gist of Sidewalk is this: nearby Amazon gadgets, regardless of who owns them, can automatically organize themselves into their own private wireless network mesh, communicating primarily using Bluetooth Low Energy over short distances, and 900MHz LoRa over longer ranges.

At least one device in a mesh will likely be connected to the internet via someone’s Wi-Fi, and so, every gadget in the mesh can reach the ‘net via that bridging device. This means all the gadgets within a mesh can be remotely controlled via an app or digital assistant, either through their owners’ internet-connected Wi-Fi or by going through a suitable bridge in the mesh. If your internet goes down, your Amazon home security gizmo should still be reachable, and send out alerts, via the mesh.

It also means if your neighbor loses broadband connectivity, their devices in the Sidewalk mesh can still work over the ‘net by routing through your Sidewalk bridging device and using your home ISP connection.

[…]

Amazon Echoes, Ring Floodlight Cams, and Ring Spotlight Cams will be the first Sidewalk bridging devices as well as Sidewalk endpoints. The internet giant hopes to encourage third-party manufacturers to produce equipment that is also Sidewalk compatible, extending meshes everywhere.

Crucially, it appears Sidewalk is opt-out for those who already have the hardware, and will be opt-in for those buying new gear.

[…]

If you already have, say, an Amazon Ring, it will soon get a software update that automatically enables Sidewalk connectivity, and you’ll get an email explaining how to switch that off. When powering up a new gizmo, you’ll at least get the chance to opt in or out.

[…]

We’re told Sidewalk will only sip your internet connection rather than hog it, limiting itself to half a gigabyte a month. This policy appears to live in hope that people aren’t on stingy monthly data caps.

[…]

Just don’t forget that Ring and the police, in the US at least, have a rather cosy relationship. While Amazon stresses that Ring owners are in control of the footage recorded by their camera-fitted doorbells, homeowners are often pressured into turning their equipment into surveillance systems for the cops.

Source: Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out • The Register

Disney (Disney!) Accused Of Trying To Lawyer Its Way Out Of Paying Royalties To Alan Dean Foster, Star Wars and Alien book writer

Disney, of course, has quite the reputation as a copyright maximalist. It has been accused of being the leading company in always pushing for more draconian copyright laws. And then, of course, there’s the infamous Mickey Mouse curve, first charted a decade ago by Tom Bell, highlighting how copyright term extensions seemed to always happen just as Mickey Mouse was set to go into the public domain (though, hopefully, that’s about to end):

Whether accurate or not, Disney is synonymous with maximizing copyright law, which the company and its lobbyists always justify with bullshit claims of how they do it “for the artist.”

Except that it appears that Disney is not paying artists. While the details are a bit fuzzy, yesterday the Science Fiction & Fantasy Writers of America (SFWA) and famed author Alan Dean Foster announced that Disney was no longer paying him royalties for the various Star Wars books he wrote (including the novelization of the very first film back in 1976), along with his novelizations of the Aliens movies. He claims he’d always received royalties before, but they suddenly disappeared.

Foster wrote a letter (amusingly addressed to “Mickey”) in which he lays out his side of the argument, more or less saying that as Disney has gobbled up various other companies and rights, it just stopped paying royalties:

When you purchased Lucasfilm you acquired the rights to some books I wrote. STAR WARS, the novelization of the very first film. SPLINTER OF THE MIND’S EYE, the first sequel novel. You owe me royalties on these books. You stopped paying them.

When you purchased 20th Century Fox, you eventually acquired the rights to other books I had written. The novelizations of ALIEN, ALIENS, and ALIEN 3. You’ve never paid royalties on any of these, or even issued royalty statements for them.

All these books are all still very much in print. They still earn money. For you. When one company buys another, they acquire its liabilities as well as its assets. You’re certainly reaping the benefits of the assets. I’d very much like my miniscule (though it’s not small to me) share.

[…]

In a video press conference, Foster and SFWA […] said that Disney is claiming that it purchased “the rights but not the obligations” to these works.

Source: Disney (Disney!) Accused Of Trying To Lawyer Its Way Out Of Paying Royalties To Alan Dean Foster | Techdirt

Nintendo Continues Cracking Down On People Selling Switch Hacks: jailbreaking with RCM = piracy in their minds

Nintendo filed a lawsuit Wednesday against an Amazon Marketplace user who was allegedly selling devices called RCM loaders. Used to help people jailbreak their Switch, shutting these down is the latest in the company’s efforts to stop players from pirating its games.

As first reported by Polygon, the lawsuit against reseller Le Hoang Minh seeks “relief for unlawful trafficking in circumvention devices in violation of the Digital Millennium Copyright Act (DMCA).” In addition to having the Seattle District Court order Minh to stop selling the devices, Nintendo also wants $2,500 in damages for each one already sold.

“Piracy of video game software has become a serious, worsening international problem,” Nintendo’s lawyers write (without offering any further detail), arguing that RCM loaders and other devices like them are a big contributor to that. While jailbreaking a Switch isn’t necessarily against the law, pirating games is, and devices whose primary purpose is to facilitate piracy are also prohibited. The loaders aren’t hard to find on Amazon and other resellers, but it’s essentially the jailbreak code the loaders run that people buy them for, and that is what Nintendo wants to stop the spread of.

According to the legal complaint Nintendo filed, the company originally sought to have Minh’s listings removed from Amazon by issuing DMCA-related takedowns, but Minh filed a counter-notification with Amazon to keep the listings up, forcing Nintendo to take the matter to court.

Source: Nintendo Continues Cracking Down On People Selling Switch Hacks

Just because a device can somehow be used for jailbreaking doesn’t mean it always is. A bit like a phone can be used to plot a bank heist, but that isn’t the sole purpose of a phone.

The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens

The Internet Security Research Group (ISRG) has a plan to allow companies to collect information about how people are using their products while protecting the privacy of those generating the data.

Today, the California-based non-profit, which operates Let’s Encrypt, introduced Prio Services, a way to gather online product metrics without compromising the personal information of product users.

“Applications such as web browsers, mobile applications, and websites generate metrics,” said Josh Aas, founder and executive director of ISRG, and Tim Geoghegan, site reliability engineer, in an announcement. “Normally they would just send all of the metrics back to the application developer, but with Prio, applications split the metrics into two anonymized and encrypted shares and upload each share to different processors that do not share data with each other.”
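The split-and-aggregate idea described here can be illustrated with additive secret sharing, the basic building block behind this kind of scheme. This is a simplified sketch for intuition only, not Prio’s actual protocol, which layers validity proofs on top:

```python
import random

PRIME = 2**61 - 1  # arithmetic is done in a finite field


def split(value):
    """Split a metric into two additive shares; each share on its own is
    a uniformly random number and reveals nothing about the value."""
    share_a = random.randrange(PRIME)
    share_b = (value - share_a) % PRIME
    return share_a, share_b


# Each client splits its metric (here: 1 if a feature was used, else 0)
# and sends one share to processor A and the other to processor B.
clients = [1, 0, 1, 1, 0]
shares_a, shares_b = zip(*(split(v) for v in clients))

# Each processor sums only the shares it holds, learning nothing
# about any individual client.
sum_a = sum(shares_a) % PRIME
sum_b = sum(shares_b) % PRIME

# Only the combination of the two sums reveals the aggregate statistic.
total = (sum_a + sum_b) % PRIME
print(total)  # 3 — the number of clients that used the feature
```

Neither processor can recover a client’s value alone; the SNIP proofs mentioned below exist precisely so the processors can also verify that each submitted share-pair encodes a well-formed value.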

Prio is described in a 2017 research paper [PDF] as “a privacy-preserving system for the collection of aggregate statistics.” The system was developed by Henry Corrigan-Gibbs, then a Stanford doctoral student and currently an MIT assistant professor, and Dan Boneh, a professor of computer science and electrical engineering at Stanford.

Prio implements a cryptographic approach called secret-shared non-interactive proofs (SNIPs). According to its creators, it handles data only 5.7x slower than systems with no privacy protection. That’s considerably better than the competition: client-generated non-interactive zero-knowledge proofs of correctness (NIZKs) are 267x slower than unprotected data processing and privacy methods based on succinct non-interactive arguments of knowledge (SNARKs) clock in at three orders of magnitude slower.

“With Prio, you can get both: the aggregate statistics needed to improve an application or service and maintain the privacy of the people who are providing that data,” said Boneh in a statement. “This system offers a robust solution to two growing demands in our tech-driven economy.”

In 2018 Mozilla began testing Prio to gather Firefox telemetry data and found the cryptographic scheme compelling enough to make it the basis of its Firefox Origin Telemetry service.

[…]

Source: The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens • The Register

Apple’s ‘Batterygate’ Saga Wraps Up With $113 Million Settlement

Younger readers might not know, but there was once an annual tradition in which Apple would release a new iPhone, old iPhones would suddenly start performing poorly, and users would speculate about a conspiracy to get them to buy the shiny new thing. It turned out that a conspiracy, of sorts, did exist, and Apple has been trying to make the whole embarrassing saga go away for years. On Wednesday, the finish line came into view after Arizona Attorney General Mark Brnovich announced that an investigation involving 34 states is concluding with a settlement and no admission of guilt from Apple.

In 2017, Apple admitted that updates to iOS were throttling older iPhone models but framed it as a misunderstanding. Apple said that the software tweaks were intended to mitigate unwanted shutdowns in devices with aging batteries. It apologized and offered discounted battery replacements as a consolation prize. Many users felt that Apple’s secretive approach was deceptive and intended to lead them to believe they needed a new phone when a fresh battery might keep the old one going for another cycle. The discounted battery offer wasn’t enough for some users, and this spring Apple agreed to settle a class-action suit for up to $500 million, doling out $25 per phone for which a claim was filed. Apple did not admit any wrongdoing.

Today’s announcement tentatively concludes a separate investigation launched by state attorneys general into the controversy. In a statement, Brnovich’s office said that the proposed settlement includes a $113 million fine to be distributed amongst the states involved as well as a requirement that “Apple also must provide truthful information to consumers about iPhone battery health, performance, and power management. Apple must provide this important information in various forms on its website, in update installation notes, and in the iPhone user interface itself.”

Source: Apple’s ‘Batterygate’ Saga Wraps Up With $113 Million Settlement

Google Will Make It a Bit Easier to Turn Off Smart Features Which Track You, Slightly Harder for Regulators to Break Up Google

Soon, Google will present you with a clear choice to disable smart features, like Google Assistant reminders to pay your bills and predictive text in Gmail. Whether you like the Gmail mindreader function that autofills “all the best” and “reaching out,” or have long dreaded the arrival of the machine staring back from the void: it’s your world, Google’s just living in it. According to Google.

We’ve always been able to disable these functions if we bothered hunting through account settings. But “in the coming weeks” Google will show a new blanket setting to “turn off smart features,” which will disable features like Smart Compose and Smart Reply in apps like Gmail; the second half of the same prompt will control whether additional Google products—like Maps or Assistant, for example—are allowed to be personalized based on data from Gmail, Meet, and Chat.

Google writes in its blog post about the new-ish settings that humans are not looking at your emails to enable smart features, and Google ads are “not based on your personal data in Gmail,” something CEO Sundar Pichai has likewise said time and again. Google claims to have stopped that practice in 2017, although the following year the Wall Street Journal reported that third-party app developers had freely perused inboxes with little oversight. (When asked whether this is still a problem, the spokesperson pointed us to Google’s 2018 effort to tighten security.)

A Google spokesperson emphasized that the company only uses email contents for security purposes like filtering spam and phishing attempts.

These personalization changes aren’t so much about tightening security as they are another informed-consent defense that Google can use to repel the current regulatory siege being waged against it by lawmakers. Google has also expanded incognito mode to Maps and introduced auto-deletion of data in location history, web and app activity, and YouTube history (though only after a period of a few months).

Inquiries in the U.S. and EU have found that Google’s privacy settings have historically presented the appearance of privacy, rather than privacy itself. After a 2018 AP article exposed the extent of Google’s location data harvesting, an investigation found that turning location off in Android was no guarantee that Google wouldn’t collect location data (though Google has denied this.) Plaintiffs in a $5 billion class-action lawsuit filed this summer alleged that “incognito mode” in Chrome didn’t prevent Google from capturing and sharing their browsing history. And last year, French regulators fined Google nearly $57 million for violating the General Data Protection Regulation (GDPR) by allegedly burying privacy controls beneath five or six layers of settings. (When asked, the spokesperson said Google has no additional comment on these cases.)

So this is nice, and also Google’s announcement reads as a letter to regulators. “This new setting is designed to reduce the work of understanding and managing [a choice over how data is processed], in view of what we’ve learned from user experience research and regulators’ emphasis on comprehensible, actionable user choices over data.”

Source: Google Will Make It Easier to Turn Off Smart Features

Apple hits back at European activist lawsuit against unauthorised tracking installs – says it doesn’t use it… but 3rd parties do

The group, led by campaigner Max Schrems, filed complaints with data protection watchdogs in Germany and Spain alleging that the tracking tool illegally enabled the $2 trillion U.S. tech giant to store users’ data without their consent.

Apple directly rebutted the claims filed by Noyb, the digital rights group founded by Schrems, saying they were “factually inaccurate and we look forward to making that clear to privacy regulators should they examine the complaint”.

Schrems is a prominent figure in Europe’s digital rights movement that has resisted intrusive data-gathering by Silicon Valley’s tech platforms. He has fought two cases against Facebook, winning landmark judgments that forced the social network to change how it handles user data.

Noyb’s complaints were brought against Apple’s use of a tracking code, known as the Identifier for Advertisers (IDFA), that is automatically generated on every iPhone when it is set up.

The code, stored on the device, makes it possible to track a user’s online behaviour and consumption preferences – vital in allowing companies to send targeted adverts.

“Apple places codes that are comparable to a cookie in its phones without any consent by the user. This is a clear breach of European Union privacy laws,” Noyb lawyer Stefano Rossetti said.

Rossetti referred to the EU’s e-Privacy Directive, which requires a user’s consent before such information is stored on or accessed from a device.

Apple said in response that it “does not access or use the IDFA on a user’s device for any purpose”.

It said its aim was to protect the privacy of its users and that the latest release of its iOS 14 operating system gave users greater control over whether apps could link with third parties for the purposes of targeted advertising.

Source: Apple hits back at European activist complaints against tracking tool | Reuters

The complaint against Apple is that the IDFA is set at all without consent from the user. The point is not whether Apple itself accesses it; the point is that unspecified third parties (advertisers, hackers, governments, etc.) can.

How the U.S. Military Buys Location Data from Ordinary Apps

The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

[…]

In March, tech publication Protocol first reported that U.S. law enforcement agencies such as Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) were using Locate X. Motherboard then obtained an internal Secret Service document confirming the agency’s use of the technology. Some government agencies, including CBP and the Internal Revenue Service (IRS), have also purchased access to location data from another vendor called Venntel.

“In my opinion, it is practically certain that foreign entities will try to leverage (and are almost certainly actively exploiting) similar sources of private platform user data. I think it would be naïve to assume otherwise,” Mark Tallman, assistant professor at the Department of Emergency Management and Homeland Security at the Massachusetts Maritime Academy, told Motherboard in an email.

THE SUPPLY CHAIN

Some companies obtain app location data through bidstream data, which is information gathered from the real-time bidding that occurs when advertisers pay to insert their adverts into peoples’ browsing sessions. Firms also often acquire the data from software development kits (SDKs).

[…]

In a recent interview with CNN, X-Mode CEO Joshua Anton said the company tracks 25 million devices inside the United States every month, and 40 million elsewhere, including in the European Union, Latin America, and the Asia-Pacific region. X-Mode previously told Motherboard that its SDK is embedded in around 400 apps.

In October the Australian Competition & Consumer Commission published a report about data transfers by smartphone apps. A section of that report included the endpoint—the URL some apps use—to send location data back to X-Mode. Developers of the Guardian app, which is designed to protect users from the transfer of location data, also published the endpoint. Motherboard then used that endpoint to discover which specific apps were sending location data to the broker.
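That kind of endpoint matching can be sketched in a few lines: given request records captured with network-analysis tooling, flag the apps whose traffic goes to a known broker hostname. The hostname and log format below are placeholders for illustration, not the actual X-Mode endpoint:

```python
from urllib.parse import urlparse

# Placeholder hostname standing in for a published broker endpoint.
BROKER_HOSTS = {"sdk.example-broker.com"}


def apps_contacting_brokers(traffic_log):
    """Given (app_name, url) pairs from a capture, return the set of
    apps observed sending requests to a known broker host."""
    flagged = set()
    for app, url in traffic_log:
        if urlparse(url).hostname in BROKER_HOSTS:
            flagged.add(app)
    return flagged


log = [
    ("weather-app", "https://api.weather.example/forecast"),
    ("prayer-app", "https://sdk.example-broker.com/v1/locations"),
]
print(apps_contacting_brokers(log))  # {'prayer-app'}
```

The real investigation worked the same way in spirit: once the endpoint URL was public, observing which apps sent traffic to it was enough to attribute the data flow.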

Motherboard used network analysis software to observe both the Android and iOS versions of the Muslim Pro app sending granular location data to the X-Mode endpoint multiple times. Will Strafach, an iOS researcher and founder of Guardian, said he also saw the iOS version of Muslim Pro sending location data to X-Mode.

The data transfer also included the name of the Wi-Fi network the phone was currently connected to, a timestamp, and information about the phone such as its model, according to Motherboard’s tests.

[…]

Source: How the U.S. Military Buys Location Data from Ordinary Apps

GitHub Restores YouTube Downloader Following DMCA Takedown, starts to protect developers from DMCA misuse

Last month, GitHub removed a popular tool that is used to download videos from websites like YouTube after it received a DMCA takedown notice from the Recording Industry Association of America. For a moment, it seemed that GitHub might throw developers under the bus in the same fashion that Twitch has recently treated its streamers. But on Monday, GitHub went on the offensive by reinstating the offending tool and saying it would take a more aggressive line on protecting developers’ projects.

Youtube-dl is a command-line program that could, hypothetically, be used to make unauthorized copies of copyrighted material. This potential for abuse prompted the RIAA to send GitHub a scary takedown notice because that’s what the RIAA does all day. The software development platform complied with the notice and unleashed a user outcry over the loss of one of the most popular repositories on the site. Many developers started re-uploading the code to GitHub in protest. After taking some time to review the case, GitHub now says that youtube-dl is all good.

In a statement, GitHub’s Director of Platform Policy Abby Vollmer wrote that there are two reasons that it was able to reverse the decision. The first reason is that the RIAA cited one repo that used the youtube-dl source code and contained references to a few copyrighted songs on YouTube. This was only part of a unit test that the code performs. It listens to a few seconds of the song to verify that everything is working properly but it doesn’t download or distribute any material. Regardless, GitHub worked with the developer to patch out the references and stay on the safe side.

As for the primary youtube-dl source code, lawyers at the Electronic Frontier Foundation agreed to represent the developers and presented an argument that resolved GitHub’s concerns that the code circumvents technical measures protecting copyrighted material in violation of Section 1201 of the Digital Millennium Copyright Act. The EFF explained that youtube-dl doesn’t decrypt anything or break through any anti-copying measures. From a technical standpoint, it isn’t much different from a web browser receiving information as intended, and there are plenty of fair use applications for making a copy of materials.

Among the “many legitimate purposes” for using youtube-dl, GitHub listed: “changing playback speeds for accessibility, preserving evidence in the fight for human rights, aiding journalists in fact-checking, and downloading Creative Commons-licensed or public domain videos.” The EFF cited some of the same practical uses and had a few unique additions to its list of benefits, saying that it could be used by “educators to save videos for classroom use, by YouTubers to save backup copies of their own uploaded videos, and by users worldwide to watch videos on hardware that can’t run a standard web browser, or to watch videos in their full resolution over slow or unreliable Internet connections.”

It’s nice to see GitHub evaluating the argument and moving forward without waiting for a legal process to play out, but the company went further in announcing a new eight-step process for evaluating claims related to Section 1201 that will err on the side of developers. GitHub is also establishing a million-dollar legal fund to provide assistance to open source developers fighting off unwarranted takedown notices. Mea culpa, mea culpa!

Finally, the company said that it would work to improve the law around DMCA notices and it will be “advocating specifically on the anti-circumvention provisions of the DMCA to promote developers’ freedom to build socially beneficial tools like youtube-dl.”

Along with today’s announcement, GitHub CEO Nat Friedman tweeted, “Section 1201 of the DMCA is broken and needs to be fixed. Developers should have the freedom to tinker.”

Source: GitHub Restores YouTube Downloader Following DMCA Takedown

It’s nice to see a large company come down on the right side of copyright for a change.

Your Computer isn’t Yours – Apple edition – how is it snooping on you, why can’t you start apps when their server is down

It’s here. It happened. Did you notice?

I’m speaking, of course, of the world that Richard Stallman predicted in 1997. The one Cory Doctorow also warned us about.

On modern versions of macOS, you simply can’t power on your computer, launch a text editor or eBook reader, and write or read, without a log of your activity being transmitted and stored.

It turns out that in the current version of macOS, the OS sends to Apple a hash (unique identifier) of each and every program you run, when you run it. Lots of people didn’t realize this, because it’s silent and invisible and it fails instantly and gracefully when you’re offline, but today the server got really slow, it didn’t hit the fail-fast code path, and everyone’s apps failed to open if they were connected to the internet.

Because it does this over the internet, the server of course sees your IP and knows what time the request came in. An IP address allows for coarse, city-level and ISP-level geolocation, enabling a table with the following headings:

Date, Time, Computer, ISP, City, State, Application Hash

Apple (or anyone else) can, of course, calculate these hashes for common programs: everything in the App Store, the Creative Cloud, Tor Browser, cracking or reverse engineering tools, whatever.
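The lookup idea described above can be sketched in a few lines of Python. This is purely illustrative: the hash function, the stand-in binary, and the table below are assumptions for demonstration, not Apple’s actual mechanism or real file contents.

```python
import hashlib

# Hypothetical lookup table mapping known binary hashes to app names.
# Anyone could build such a table by hashing publicly distributed
# binaries (App Store apps, Tor Browser, cracking tools, etc.).
KNOWN_HASHES = {}

def sha256_of(blob: bytes) -> str:
    """Return the SHA-256 hex digest identifying a binary blob."""
    return hashlib.sha256(blob).hexdigest()

# Register a "known" binary (a stand-in, not a real executable),
# then identify it later from its hash alone.
fake_tor_binary = b"\x7fELF...tor-browser..."
KNOWN_HASHES[sha256_of(fake_tor_binary)] = "Tor Browser"

observed = sha256_of(fake_tor_binary)  # what a server-side log would contain
print(KNOWN_HASHES.get(observed, "unknown app"))  # prints "Tor Browser"
```

The point is that a hash is not anonymous in any meaningful sense: for any widely distributed program, whoever holds the log can precompute the digest and map it straight back to the app’s name.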

This means that Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. They know when you open Premiere over at a friend’s house on their Wi-Fi, and they know when you open Tor Browser in a hotel on a trip to another city.

“Who cares?” I hear you asking.

Well, it’s not just Apple. This information doesn’t stay with them:

  1. These OCSP requests are transmitted unencrypted. Everyone who can see the network can see these, including your ISP and anyone who has tapped their cables.
  2. These requests go to a third-party CDN run by another company, Akamai.
  3. Since October of 2012, Apple has been a partner in the US military intelligence community’s PRISM spying program, which grants the US federal police and military unfettered access to this data without a warrant, any time they ask for it. In the first half of 2019 they did this over 18,000 times, and another 17,500+ times in the second half of 2019.
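Point 1 above deserves a moment’s thought: an unencrypted request is readable, byte for byte, by any on-path observer. A minimal sketch, using an illustrative hostname and payload (not Apple’s real endpoint or request format):

```python
# Why plaintext HTTP leaks: the raw bytes on the wire contain the
# hostname and payload in clear text, readable by any observer
# (ISP, CDN operator, anyone tapping the cables).
host = "ocsp.example-cdn.com"   # illustrative hostname, not Apple's
payload = "serial=abc123"       # illustrative payload

raw_request = (
    f"GET /ocsp?{payload} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "\r\n"
).encode()

# An observer on the wire sees exactly these bytes:
assert b"ocsp.example-cdn.com" in raw_request
assert b"serial=abc123" in raw_request
print(raw_request.decode())
```

With TLS, the observer would see only the destination and an opaque ciphertext; with plain HTTP, the request itself is the log.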

This amounts to a tremendous trove of data about your life and habits, allowing anyone who possesses it to identify your movement and activity patterns. For some people, it can even pose a physical danger.

Now, it’s been possible up until today to block this sort of stuff on your Mac using a program called Little Snitch (really, the only thing keeping me using macOS at this point). In the default configuration, it blanket allows all of this computer-to-Apple communication, but you can disable those default rules and go on to approve or deny each of these connections, and your computer will continue to work fine without snitching on you to Apple.

The version of macOS that was released today, 11.0, also known as Big Sur, has new APIs that prevent Little Snitch from working the same way. The new APIs don’t permit Little Snitch to inspect or block any OS level processes. Additionally, the new rules in macOS 11 even hobble VPNs so that Apple apps will simply bypass them.

Google CEO apologises for document outlining how to counter new EU rules by attacking rulemaker, EU’s Breton warns internet is not Wild West

Alphabet (GOOGL.O) CEO Sundar Pichai has apologised to Europe’s industry chief Thierry Breton over a leaked internal document proposing tactics to counter the EU’s tough new rules on internet companies and lobby against the EU commissioner.

[…]

The call came after a Google internal document outlined a 60-day strategy to attack the European Union’s push for the new rules by getting U.S. allies to push back against Breton.

[…]

The incident underlines the intense lobbying by tech companies against the proposed EU rules, which could impede their businesses and force changes in how they operate.

Breton also warned Pichai about the excesses of the internet.

“The Internet cannot remain a ‘Wild West’: we need clear and transparent rules, a predictable environment and balanced rights and obligations,” he told Pichai.

Breton will announce new draft rules known as the Digital Services Act and the Digital Markets Act together with European Competition Commissioner Margrethe Vestager on Dec. 2.

The rules will set out a list of do’s and don’ts for gatekeepers – online companies with market power – forcing them to share data with rivals and regulators and not to promote their services and products unfairly.

EU antitrust chief Margrethe Vestager has levied fines totalling 8.25 billion euros ($9.7 billion) against Google in the past three years for abusing its market power to favour its shopping comparison service, its Android mobile operating system and its advertising business.

Breton told Pichai that he would increase the EU’s power to curb unfair behaviour by gatekeeping platforms, so that the Internet does not just benefit a handful of companies but also Europe’s small- and medium-sized enterprises and entrepreneurs.

Source: Google CEO apologises for document, EU’s Breton warns internet is not Wild West | Reuters

Mozilla *privacy not included tech buyers’ guide rates gadgets on a creepy scale

This is a list of 130 smart home gadgets, fitness trackers, toys and more, rated for their privacy & security. It’s a large list, and it shows how basically anything by big tech is pretty creepy: anything by Amazon and Facebook is super creepy, Google pretty creepy, Apple only creepy. There are a few surprises, like Moleskine being super creepy. Fitness machinery is pretty bad, as are some coffee makers… Nintendo Switches and PS5s (surprisingly) aren’t creepy at all…

Source: Mozilla – *privacy not included