Dutch phones can be easily tracked online: ‘Extreme security risk’

[Image: a map of the Netherlands with cellphone towers]

BNR received more than 80 gigabytes of location data from data traders: the coordinates of millions of telephones, often registered dozens of times a day.

The gigantic mountain of data also includes movements of people with functions in which safety plays an important role. A senior army officer could be followed as he drove from his home in the Randstad to various military locations in the country. A destination he often visited was the Frederikazerne, headquarters of the Military Intelligence and Security Service (MIVD). The soldier confirmed the authenticity of the data to BNR by telephone.

[…]

The data also reveals the home address of someone who often visits the Penitentiary in Vught, where terrorists and serious criminals are imprisoned. A spokesperson for the Judicial Institutions Agency (DJI) confirmed that the person, who according to the Land Registry lives at this address, had actually brought a mobile phone onto the premises with permission and stated that the matter was being investigated.

These are just examples; the list of potential targets is long: up to 1,200 phones in the dataset visited the office in Zoetermeer where the National Police, National Public Prosecutor’s Office and Europol are located. Up to 70 telephones were registered at the King’s residential palace, Huis ten Bosch. At the Volkel Air Base, a storage point for nuclear weapons, up to 370 telephones were counted. The National Police’s management says it is aware of the problem and is ‘looking internally to see what measures are appropriate to combat this’.

‘National security implications’

BNR had two experts inspect the dataset. “This is an extreme security risk, with possible implications for national security,” says Ralph Moonen, technical director of Secura. “It’s really shocking that this can happen like this,” says Sjoerd van der Meulen, cybersecurity specialist at DataExpert.

The technology used to track mobile phones is designed for use by advertisers, but is suitable for other purposes, says Paul Pols, former technical advisor to the Assessment Committee for the Use of Powers, which supervises the intelligence services. According to Pols, it is known that the MIVD and AIVD also purchase access to this type of data on the data market under the heading ‘open sources’. “What is striking about this case is that you can easily access large amounts of data from Dutch citizens,” said the cybersecurity expert.

For sale via an online marketplace in Berlin

That access was achieved through an online marketplace based in Berlin. On this platform, Datarade.ai, hundreds of companies offer personal data for sale. In addition to location data, medical information and credit scores are also available.

Following a tip from a data subject, BNR responded to an advertisement offering location data of Dutch users. A sales employee of the platform then contacted two medium-sized providers: Datastream Group from Florida in the US and Factori.ai from Singapore – both companies have fewer than 50 employees, according to their LinkedIn pages.

Datastream and Factori offer similar services: a subscription to the location data of mobile phones in the Netherlands is available for prices starting from $2,000 per month. Those who pay more can receive fresh data every 24 hours via the cloud, possibly even from all over the world.

[…]

Upon request, BNR was therefore sent a full month of historical data from Dutch telephones. The data was nominally anonymized – it contained no telephone numbers. Individual phones can nevertheless be recognized by a unique number combination, the ‘mobile advertising ID’ that Apple and Google use to show individual users relevant advertisements within the limits of European privacy legislation.

Possibly four million Dutch victims of tracking

The precise origin of the data traded online is unclear. According to the providers, it comes from apps that have been given permission by users to use location data – fitness or navigation apps, for instance, that sell the data on. This is how the data ultimately ends up at Factori and Datastream. By combining data from multiple sources, gigantic files are created.

[…]

It is not difficult to recognize the owners of individual phones in the data. By linking sleeping places to data from public registers, such as the Land Registry, and workplaces to LinkedIn profiles, BNR was able to identify, in addition to the army officer, a project manager from Alphen aan den Rijn and an amateur football referee. The discovery that they had been digitally stalked for at least a month led to shocked reactions: ‘Bizarre’, and: ‘I immediately turned off ‘sharing location data’ on my phone’.
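As a rough sketch of how such linking works in practice – assuming the dataset is a simple CSV of pings with an advertising ID, a timestamp and coordinates; the file name, field names and night-time heuristic here are illustrative assumptions, not BNR’s actual method:

```python
import csv
from collections import Counter, defaultdict
from datetime import datetime

def likely_home_locations(rows):
    """For each advertising ID, guess a 'home': the most common
    coarse location pinged during night-time hours (01:00-05:00)."""
    night_pings = defaultdict(Counter)
    for row in rows:
        ts = datetime.fromisoformat(row["timestamp"])
        if 1 <= ts.hour < 5:
            # Round to ~100 m so repeated nights fall into one cell.
            cell = (round(float(row["lat"]), 3), round(float(row["lon"]), 3))
            night_pings[row["ad_id"]][cell] += 1
    return {ad_id: cells.most_common(1)[0][0]
            for ad_id, cells in night_pings.items()}

with open("pings.csv", newline="") as f:  # hypothetical data export
    homes = likely_home_locations(csv.DictReader(f))
```

An ID that sleeps at the same rounded coordinates night after night almost certainly lives there; looking those coordinates up in a public register such as the Land Registry then puts a name to a supposedly anonymous ID.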

Trade is prohibited, but the government does not act

Datarade, the Berlin data marketplace, informed BNR in an email that traders on their platform are ‘fully liable’ for the data they offer. Illegal practices can be reported using an online form. The spokesperson for the German company leaves open the question of whether measures are being taken against the sale of location data.

[…]

Source (Google Translate): Dutch phones can be secretly tracked online: ‘Extreme security risk’ | BNR News Radio

Source (Dutch original): Nederlandse telefoons online stiekem te volgen: ‘Extreem veiligheidsrisico’

Drivers would rather buy a low-tech car than one that shares their data

According to a survey of 2,000 Americans conducted by Kaspersky in November and published this week, 72 percent of drivers are uncomfortable with automakers sharing their data with advertisers, insurance companies, subscription services, and other third-party outfits. Specifically, 37.3 percent of those polled are “very uncomfortable” with this data sharing, and 34.5 percent are “somewhat uncomfortable.”

However, only 28 percent of the total respondents say they have any idea what kind of data their car is collecting. Spoiler alert: It’s potentially all the data. An earlier Mozilla Foundation investigation, which assessed the privacy policies and practices of 25 automakers, gave every single one a failing grade.

In Moz’s September Privacy Not Included report, the org warned that car manufacturers aren’t only potentially collecting and selling things like location history, driving habits and in-car browser histories. Some connected cars may also track drivers’ sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, if that information becomes available.

Back to the Kaspersky survey: 87 percent said automakers should be required to delete their data upon request. Depending on where you live, and thus the privacy law you’re under, the manufacturers may be obligated to do so.

Oddly, while motorists are worried about their cars sharing their data with third parties, they don’t seem that concerned about their vehicles snooping on them in the first place.

Less than half (41.8 percent) of respondents said they are worried that their vehicle’s sensors, infotainment system, cameras, microphones, and other connected apps and services might be collecting their personal data. And 80 percent of respondents pair their phone with their car anyway, allowing data and details of activities to be exchanged between apps, the vehicle, and potentially its manufacturer.

This echoes another survey published this week that found many drivers are willing to trade their personal data and privacy for driver personalization — things like seat, mirror, and entertainment preferences (43 percent) — and better insurance rates (67 percent).

The study also surveyed 2,000 American drivers to come up with these numbers and found that while most drivers (68 percent) don’t mind automakers collecting their personal data, only five percent believe this surveillance should be unrestricted, and 63 percent said it should be on an opt-in basis.

Perhaps it’s time for vehicle makers to take note.

Source: Surveyed drivers prefer low-tech cars over data-sharing ones • The Register

Also, we want buttons back too, please.

Generative AI Will Be A Huge Boon For The Public Domain, Unless Copyright Blocks It

[Image: two people holding hands watching a PC screen, on which a robot paints a digitised Bob Ross painting]

A year ago, I noted that many of Walled Culture’s illustrations were being produced using generative AI. During that time, AI has developed rapidly. For example, in the field of images, OpenAI has introduced DALL-E 3 in ChatGPT:

When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.

Ars Technica has written a good intro to the new DALL-E 3, describing it as “a wake-up call for visual artists” in terms of its advanced capabilities. The article naturally touches on the current situation regarding copyright for these creations:

In the United States, purely AI-generated art cannot currently be copyrighted and exists in the public domain. It’s not cut and dried, though, because the US Copyright Office has supported the idea of allowing copyright protection for AI-generated artwork that has been appreciably altered by humans or incorporated into a larger work.

The article goes on to explore an interesting aspect of that situation:

there’s suddenly a huge new pool of public domain media to work with, and it’s often “open source”—as in, many people share the prompts and recipes used to create the artworks so that others can replicate and build on them. That spirit of sharing has been behind the popularity of the Midjourney community on Discord, for example, where people can typically see each other’s prompts.

When several mesmerizing AI-generated spiral images went viral in September, the AI art community on Reddit quickly built off of the trend since the originator detailed his workflow publicly. People created their own variations and simplified the tools used in creating the optical illusions. It was a good example of what the future of an “open source creative media” or “open source generative media” landscape might look like (to play with a few terms).

There are two important points there. First, that the current, admittedly tentative, status of generative AI creations as being outside the copyright system means that many of them, perhaps most, are available for anyone to use in any way. Generative AI could drive a massive expansion of the public domain, acting as a welcome antidote to constant attempts to enclose the public domain by re-imposing copyright on older works – for example, as attempted by galleries and museums.

The second point is that without the shackles of copyright, these creations can form the basis of collaborative works among artists willing to embrace that approach, and to work with this new technology in new ways. That’s a really exciting possibility that has been hard to implement without recourse to legal approaches like Creative Commons. Although the intention there is laudable, most people don’t really want to worry about the finer points of licensing – not least out of fear that they might get it wrong, and be sued by the famously litigious copyright industry.

A situation in which generative AI creations are unequivocally in the public domain could unleash a flood of pent-up creativity. Unfortunately, as the Ars Technica article rightly points out, the status of AI-generated artworks is already slightly unclear. We can expect the copyright world to push hard to exploit that opening, and to demand that everything created by computers should be locked down under copyright for decades, just as human inspiration generally is from the moment it is in a fixed form. Artists should enjoy this new freedom to explore and build on generative AI images while they can – it may not last.

Source: Generative AI Will Be A Huge Boon For The Public Domain, Unless Copyright Blocks It | Techdirt

The NY Times Lawsuit Against OpenAI Would Open Up The NY Times To All Sorts Of Lawsuits Should It Win – and shows that if you feed GPT a URL and the opening of an article, it can regurgitate what follows

This week the NY Times somehow broke the story of… well, the NY Times suing OpenAI and Microsoft. I wonder who tipped them off. Anyhoo, the lawsuit in many ways is similar to some of the over a dozen lawsuits filed by copyright holders against AI companies. We’ve written about how silly many of these lawsuits are, in that they appear to be written by people who don’t much understand copyright law. And, as we noted, even if courts actually decide in favor of the copyright holders, it’s not like it will turn into any major windfall. All it will do is create another corruptible collection point, while locking in only a few large AI companies who can afford to pay up.

I’ve seen some people arguing that the NY Times lawsuit is somehow “stronger” and more effective than the others, but I honestly don’t see that. Indeed, the NY Times itself seems to think its case is so similar to the ridiculously bad Authors Guild case, that it’s looking to combine the cases.

But while there are some unique aspects to the NY Times case, I’m not sure they are nearly as compelling as the NY Times and its supporters think they are. Indeed, I think if the Times actually wins its case, it would open the Times up to some fairly damning lawsuits itself, given its somewhat infamous journalistic practices regarding summarizing other people’s articles without credit. But, we’ll get there.

The Times, in typical NY Times fashion, presents this case as though the NY Times is the great defender of press freedom, taking this stand to stop the evil interlopers of AI.

Independent journalism is vital to our democracy. It is also increasingly rare and valuable. For more than 170 years, The Times has given the world deeply reported, expert, independent journalism. Times journalists go where the story is, often at great risk and cost, to inform the public about important and pressing issues. They bear witness to conflict and disasters, provide accountability for the use of power, and illuminate truths that would otherwise go unseen. Their essential work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support, as well as editors who ensure their journalism meets the highest standards of accuracy and fairness. This work has always been important. But within a damaged information ecosystem that is awash in unreliable content, The Times’s journalism provides a service that has grown even more valuable to the public by supplying trustworthy information, news analysis, and commentary

Defendants’ unlawful use of The Times’s work to create artificial intelligence products that compete with it threatens The Times’s ability to provide that service. Defendants’ generative artificial intelligence (“GenAI”) tools rely on large-language models (“LLMs”) that were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more. While Defendants engaged in widescale copying from many sources, they gave Times content particular emphasis when building their LLMs—revealing a preference that recognizes the value of those works. Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.

As the lawsuit makes clear, this isn’t some high and mighty fight for journalism. It’s a negotiating ploy. The Times admits that it has been trying to get OpenAI to cough up some cash for its training:

For months, The Times has attempted to reach a negotiated agreement with Defendants, in accordance with its history of working productively with large technology platforms to permit the use of its content in new digital products (including the news products developed by Google, Meta, and Apple). The Times’s goal during these negotiations was to ensure it received fair value for the use of its content, facilitate the continuation of a healthy news ecosystem, and help develop GenAI technology in a responsible way that benefits society and supports a well-informed public.

I’m guessing that OpenAI’s decision a few weeks back to pay off media giant Axel Springer to avoid one of these lawsuits, and the failure to negotiate a similar deal (at what is likely a much higher price), resulted in the Times moving forward with the lawsuit.

There are five or six whole pages of puffery about how amazing the NY Times thinks the NY Times is, followed by the laughably stupid claim that generative AI “threatens” the kind of journalism the NY Times produces.

Let me let you in on a little secret: if you think that generative AI can do serious journalism better than a massive organization with a huge number of reporters, then, um, you deserve to go out of business. For all the puffery about the amazing work of the NY Times, this seems to suggest that it can easily be replaced by an auto-complete machine.

In the end, though, the crux of this lawsuit is the same as all the others. It’s a false belief that reading something (whether by human or machine) somehow implicates copyright. This is false. If the courts (or the legislature) decide otherwise, it would upset pretty much all of the history of copyright and create some significant real world problems.

Part of the Times complaint is that OpenAI’s GPT LLM was trained in part with Common Crawl data. Common Crawl is an incredibly useful and important resource that apparently is now coming under attack. It has been building an open repository of the web for people to use, not unlike the Internet Archive, but with a focus on making it accessible to researchers and innovators. Common Crawl is a fantastic resource run by some great people (though the lawsuit here attacks them).

But, again, this is the nature of the internet. It’s why things like Google’s cache and the Internet Archive’s Wayback Machine are so important. These are archives of history that are incredibly important, and have historically been protected by fair use, which the Times is now threatening.

(Notably, just recently, the NY Times was able to get all of its articles excluded from Common Crawl. Otherwise I imagine that they would be a defendant in this case as well).

Either way, so much of the lawsuit is claiming that GPT learning from this data is infringement. And, as we’ve noted repeatedly, reading/processing data is not a right limited by copyright. We’ve already seen this in multiple lawsuits, but this rush of plaintiffs is hoping that maybe judges will be wowed by this newfangled “generative AI” technology into ignoring the basics of copyright law and pretending that there are now rights that simply do not exist.

Now, the one element that appears different in the Times’ lawsuit is that it has a bunch of exhibits that purport to prove how GPT regurgitates Times articles. Exhibit J is getting plenty of attention here, as the NY Times demonstrates how it was able to prompt ChatGPT in such a manner that it basically provided them with direct copies of NY Times articles.

In the complaint, they show this:

[Image: excerpt from the complaint]

At first glance that might look damning. But it’s a lot less damning when you look at the actual prompt in Exhibit J and realize what happened, and how generative AI actually works.

What the Times did is prompt GPT-4 by (1) giving it the URL of the story and then (2) “prompting” it by giving it the headline of the article and the first seven and a half paragraphs of the article, and asking it to continue.

Here’s how the Times describes this:

Each example focuses on a single news article. Examples were produced by breaking the article into two parts. The first part of the article is given to GPT-4, and GPT-4 replies by writing its own version of the remainder of the article.

Here’s how it appears in Exhibit J (notably, the prompt was left out of the complaint itself):

[Image: excerpt from Exhibit J]

If you actually understand how these systems work, the output looking very similar to the original NY Times piece is not so surprising. When you prompt a generative AI system like GPT, you’re giving it a bunch of parameters, which act as conditions and limits on its output. From those constraints, it’s trying to generate the most likely next part of the response. But, by providing it paragraphs upon paragraphs of these articles, the NY Times has effectively constrained GPT to the point that the most probable response is… very close to the NY Times’ original story.

In other words, by constraining GPT to effectively “recreate this article,” GPT has a very small data set to work off of, meaning that the highest likelihood outcome is going to sound remarkably like the original. If you were to create a much shorter prompt, or introduce further randomness into the process, you’d get a much more random output. But these kinds of prompts effectively tell GPT not to do anything BUT write the same article.
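To make that concrete, here is a toy stand-in for an LLM – a verbatim-suffix matcher rather than a neural network, so the effect is exaggerated, but it illustrates the same mechanics: the longer the verbatim prefix you condition on, the more the most likely continuation collapses onto the training text.

```python
# One 'memorized' training document (an invented stand-in article).
article = ("madrid pulls back on plans to reopen schools after protests "
           "from parents and teachers who say the region moved too fast")
corpus = article.split()

def continue_from_memory(corpus_tokens, prompt, n_out=8):
    """Find the longest suffix of the prompt occurring verbatim in the
    corpus and copy what followed it there. A crude stand-in for an LLM,
    but it shows why a long verbatim prefix pins down the continuation."""
    prompt_tokens = prompt.split()
    for k in range(len(prompt_tokens), 0, -1):   # longest suffix first
        suffix = prompt_tokens[-k:]
        for i in range(len(corpus_tokens) - k + 1):
            if corpus_tokens[i:i + k] == suffix:
                return " ".join(corpus_tokens[i + k:i + k + n_out])
    return ""

# Feed it the opening of the 'article' and it writes the rest,
# much as Exhibit J does with real NY Times openings.
print(continue_from_memory(corpus, "madrid pulls back on plans to reopen"))
```

With a one-word prompt there are many plausible continuations; with seven and a half paragraphs of verbatim text there is essentially one.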

From there, though, the lawsuit gets dumber.

It shows that you can sorta get around the NY Times’ paywall in the most inefficient and unreliable way possible by asking ChatGPT to quote the first few paragraphs in one-paragraph chunks.

[Image: excerpt from the complaint]

Of course, quoting individual paragraphs from a news article is almost certainly fair use. And, for what it’s worth, the Times itself admits that this process doesn’t actually return the full article, but a paraphrase of it.

And the lawsuit seems to suggest that merely summarizing articles is itself infringing:

[Image: excerpt from the complaint]

That’s… all factual information summarizing the review? And the complaint shows that if you then ask for (again, paragraph-length) quotes, GPT will give you a few quotes from the article.

And, yes, the complaint literally argues that a generative AI tool can violate copyright when it “summarizes” an article.

The issue here is not so much how GPT is trained, but how the NY Times is constraining the output. That is unrelated to the question of whether or not the reading of these articles is fair use. The purpose of these LLMs is not to repeat the content that is scanned, but to figure out the probabilistically most likely next token for a given prompt. When the Times constrains the prompts in such a way that the data set is basically one article and one article only… well… that’s what you get.

Elsewhere, the Times again complains about GPT returning factual information that is not subject to copyright law.

[Image: excerpt from the complaint]

But, I mean, if you were to ask anyone the same question, “What does wirecutter recommend for The Best Kitchen Scale,” they’re likely to return you a similar result, and that’s not infringing. It’s a fact that that scale is the one that it recommends. The Times complains that people who do this prompt will avoid clicking on Wirecutter affiliate links, but… um… it has no right to that affiliate income.

I mean, I’ll admit right here that I often research products and look at Wirecutter (and other!) reviews before eventually shopping independently of that research. In other words, I will frequently buy products after reading the recommendations on Wirecutter, but without clicking on an affiliate link. Is the NY Times really trying to suggest that this violates its copyright? Because that’s crazy.

Meanwhile, it’s not clear if the NY Times is mad that GPT is accurately recommending stuff or if it’s just… mad. Because later in the complaint, the NY Times says it’s bad that sometimes GPT recommends the wrong product or makes up a paragraph.

So… the complaint is both that GPT reproduces things too accurately, AND not accurately enough. Which is it?

Anyway, the larger point is that if the NY Times wins, well… the NY Times might find itself on the receiving end of some lawsuits. The NY Times is somewhat infamous in the news world for using other journalists’ work as a starting point and building off of it (frequently without any credit at all). Sometimes this results in an eventual correction, but often it does not.

If the NY Times successfully argues that reading a third party article to help its reporters “learn” about the news before reporting their own version of it is copyright infringement, it might not like how that is turned around by tons of other news organizations against the NY Times. Because I don’t see how there’s any legitimate distinction between OpenAI scanning NY Times articles and NY Times reporters scanning other articles/books/research without first licensing those works as well.

Or, say, what happens if a source for a NY Times reporter provides them with some copyright-covered work (an article, a book, a photograph, who knows what) that the NY Times does not have a license for? Can the NY Times journalist then produce an article based on that material (along with other research, though much less than OpenAI used in training GPT)?

It seems like (and this happens all too often in the news industry) the NY Times is arguing that it’s okay for its journalists to do this kind of thing because it’s in the business of producing Important Journalism™ whereas anyone else doing the same thing is some damn interloper.

We see this with other copyright disputes and the media industry, or with the ridiculous fight over the hot news doctrine, in which news orgs claimed that they should be the only ones allowed to report on something for a while.

Similarly, I’ll note that even if the NY Times gets some money out of this, don’t expect the actual reporters to see any of it. Remember, this is the same NY Times that once tried to stiff freelance reporters by relicensing their articles to electronic databases without paying them. The Supreme Court didn’t like that. If the NY Times establishes that merely training AI on old articles is a licenseable, copyright-impacting event, will it go back and pay those reporters a piece of whatever change they get? Or nah?

Source: The NY Times Lawsuit Against OpenAI Would Open Up The NY Times To All Sorts Of Lawsuits Should It Win | Techdirt

Google agrees to settle $5 billion lawsuit accusing it of tracking Incognito users

In 2020, Google was hit with a lawsuit that accused it of tracking Chrome users’ activities even when they were using Incognito mode. Now, after a failed attempt to get it dismissed, the company has agreed to settle the complaint that originally sought $5 billion in damages. According to Reuters and The Washington Post, neither side has made the details of the settlement public, but they’ve already agreed to the terms that they’re presenting to the court for approval in February.

When the plaintiffs filed the lawsuit, they said Google used tools like its Analytics product, apps and browser plug-ins to monitor users. They reasoned that by tracking someone on Incognito, the company was falsely making people believe that they could control the information that they were willing to share with it. At the time, a Google spokesperson said that while Incognito mode doesn’t save a user’s activity on their device, websites could still collect their information during the session.
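The spokesperson’s distinction is architectural: incognito mode governs what the browser stores locally, not what a server observes. A toy server (Python standard library, purely illustrative) shows why any site you visit can log the session regardless of browser mode:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser sends these with every request, incognito or not;
        # what incognito suppresses is local history/cookie persistence.
        print(self.client_address[0], self.path,
              self.headers.get("User-Agent"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"logged")

HTTPServer(("", 8000), LoggingHandler).serve_forever()
```

The dispute was therefore about whether branding the mode as private misled users about what server-side tools such as Analytics could still see.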

The lawsuit’s plaintiffs presented internal emails that allegedly showed conversations between Google execs proving that the company monitored Incognito browser usage to sell ads and track web traffic. Their complaint accused Google of violating federal wire-tapping and California privacy laws and was asking up to $5,000 per affected user. They claimed that millions of people who’d been using Incognito since 2016 had likely been affected, which explains the massive damages they were seeking from the company. Google has likely agreed to settle for an amount lower than $5 billion, but it has yet to reveal details about the agreement and has yet to get back to Engadget with an official statement.
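Rough arithmetic on the headline figure, assuming the statutory maximum and an illustrative one million class members (the complaint itself says only ‘millions’):

\[
\$5{,}000 \times 1{,}000{,}000 = \$5{,}000{,}000{,}000 = \$5\ \text{billion}
\]

Millions of users at that statutory rate is how a complaint of this kind reaches damages in the billions.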

Source: Google agrees to settle $5 billion lawsuit accusing it of tracking Incognito users

New York Times Sues OpenAI and Microsoft Over Reading Publicly Available Information

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

In its complaint, The Times said it approached Microsoft and OpenAI in April to raise concerns about the use of its intellectual property and explore “an amicable resolution,” possibly involving a commercial agreement and “technological guardrails” around generative A.I. products. But it said the talks had not produced a resolution.

An OpenAI spokeswoman, Lindsey Held, said in a statement that the company had been “moving forward constructively” in conversations with The Times and that it was “surprised and disappointed” by the lawsuit.

“We respect the rights of content creators and owners and are committed to working with them to ensure they benefit from A.I. technology and new revenue models,” Ms. Held said. “We’re hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers.”

[…]

Source: New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work – The New York Times

Well, if they didn’t want anyone to read it – which is really what an AI is doing, just as much as you or I do – then they should have put the content behind a paywall.

Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It – because no US enforcement of any kind

Half a decade ago we documented how the U.S. wireless industry was caught over-collecting sensitive user location and vast troves of behavioral data, then selling access to that data to pretty much anybody with a couple of nickels to rub together. It resulted in no limit of abuse from everybody from stalkers to law enforcement — and even to people pretending to be law enforcement.

While the FCC purportedly moved to fine wireless companies for this behavior, the agency still hasn’t followed through. Despite the obvious ramifications of this kind of behavior during a post-Roe, authoritarian era.

Nearly a decade later, and it’s still a very obvious problem. The folks over at 404 Media have documented the case of a stalker who managed to game Verizon in order to obtain sensitive data about his target, including her address, location data, and call logs.

Her stalker posed as a police officer (badly) and, as usual, Verizon did virtually nothing to verify his identity:

“Glauner’s alleged scheme was not sophisticated in the slightest: he used a ProtonMail account, not a government email, to make the request, and used the name of a police officer that didn’t actually work for the police department he impersonated, according to court records. Despite those red flags, Verizon still provided the sensitive data to Glauner.”

In this case, the stalker found it relatively trivial to take advantage of Verizon Security Assistance and Court Order Compliance Team (or VSAT CCT), which verifies law enforcement requests for data. You’d think that after a decade of very ugly scandals on this front Verizon would have more meaningful safeguards in place, but you’d apparently be wrong.

Keep in mind: the FCC tried to impose some fairly basic privacy rules for broadband and wireless in 2016, but the telecom industry, in perfect lockstep with Republicans, killed those efforts before they could take effect, claiming they’d be too harmful for the super competitive and innovative (read: not competitive or innovative at all) U.S. broadband industry.

[…]

Source: Verizon Once Again Busted Handing Out Sensitive Wireless Subscriber Information To Any Nitwit Who Asks For It | Techdirt

UK Police to be able to run AI face recognition searches on all driving licence holders

The police will be able to run facial recognition searches on a database containing images of Britain’s 50 million driving licence holders under a law change being quietly introduced by the government.

Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match.

The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

[…]

The intention to allow the police or the National Crime Agency (NCA) to exploit the UK’s driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is “sneaking it under the radar”.

Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish “driver information regulations” to enable the searches, but he will need only to consult police bodies, according to the bill.

Critics claim facial recognition technology poses a threat to the rights of individuals to privacy, freedom of expression, non-discrimination and freedom of assembly and association.

Police are increasingly using live facial recognition, which compares a live camera feed of faces against a database of known identities, at major public events such as protests.
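In outline, that comparison reduces each face to a numeric embedding and scores it against embeddings of known identities. A minimal sketch – the names, random stand-in embeddings and threshold are invented; real systems use a trained face-embedding network:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Watchlist: identity -> face embedding (random stand-ins here).
watchlist = {name: rng.normal(size=128) for name in ("id_001", "id_002")}

def match(face_embedding, threshold=0.6):
    """Return the best watchlist match above the threshold, else None."""
    best = max(watchlist, key=lambda n: cosine(face_embedding, watchlist[n]))
    score = cosine(face_embedding, watchlist[best])
    return (best, score) if score >= threshold else None

# A face from the camera feed: id_002 plus a little sensor noise.
probe = watchlist["id_002"] + rng.normal(scale=0.1, size=128)
print(match(probe))  # -> ('id_002', ~0.99)
```

The threshold is where the accuracy concerns below bite: loosen it and matches rise, including false ones, and studies have found the false matches are not evenly distributed across demographics.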

Prof Peter Fussey, a former independent reviewer of the Met’s use of facial recognition, said there was insufficient oversight of the use of facial recognition systems, with ministers worryingly silent over studies that showed the technology was prone to falsely identifying black and Asian faces.

[…]

The EU had considered making images on its member states’ driving licence records available on the Prüm crime fighting database. The proposal was dropped earlier this year as it was said to represent a disproportionate breach of privacy.

[…]

Carole McCartney, a professor of law and criminal justice at the University of Leicester, said the lack of consultation over the change in law raised questions over the legitimacy of the new powers.

She said: “This is another slide down the ‘slippery slope’ of allowing police access to whatever data they so choose – with little or no safeguards. Where is the public debate? How is this legitimate if the public don’t accept the use of the DVLA and passport databases in this way?”

The government scrapped the role of the commissioner for the retention and use of biometric material and the office of surveillance camera commissioner this summer, leaving ministers without an independent watchdog to scrutinise such legislative changes.

[…]

In 2020, the court of appeal ruled that South Wales police’s use of facial recognition technology had breached privacy rights, data protection laws and equality laws, given the risk the technology could have a race or gender bias.

The force has continued to use the technology. Live facial recognition is to be deployed this year to match people attending Christmas markets against a watchlist.

Katy Watts, a lawyer at the civil rights advocacy group Liberty, said: “This is a shortcut to widespread surveillance by the state and we should all be worried by it.”

Source: Police to be able to run face recognition searches on 50m driving licence holders | Facial recognition | The Guardian

Slovakian PM wants to kill EU anti-corruption policing

Prime Minister Robert Fico’s push to dissolve the body that now oversees high-profile corruption cases poses a risk to the EU’s financial interests and would harm the work of the European Public Prosecutor’s Office, Juraj Novocký, Slovakia’s representative to the EU body, told Euractiv Slovakia.

Fico’s government wants to pass a reform that would eliminate the Special Anti-Corruption Prosecutor’s Office, reduce penalties, including those for corruption, and curtail the rights of whistleblowers.

Novocký points out that the reform would also bring a radical shortening of limitation periods: “Through a thorough analysis, we have found that if the amendment is adopted as proposed, we will have to stop prosecution in at least twenty cases for this reason,” Novocký of the European Public Prosecutor’s Office (EPPO) told Euractiv Slovakia.

“This has a concrete effect on the EPPO’s activities and indirectly on the protection of the financial interests of the EU because, in such cases, there will be no compensation for the damage caused,” Novocký added.

On Monday, EU Chief Prosecutor Laura Kövesi addressed the government’s push for reform in a letter to the European Commission, concluding that it constitutes a serious risk of breaching the rule of law within the meaning of Article 4(2)(c) of the Conditionality Regulation.

[…]

Source: Fico’s corruption reforms may block investigations in 20 EU fraud cases – EURACTIV.com

AI cannot be patent ‘inventor’, UK Supreme Court rules in landmark case – but a company can

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights.

Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his “creativity machine” called DABUS.

His attempt to register the patents was refused by the UK’s Intellectual Property Office (IPO) on the grounds that the inventor must be a human or a company, rather than a machine.

Thaler appealed to the UK’s Supreme Court, which on Wednesday unanimously rejected his appeal, ruling that under UK patent law “an inventor must be a natural person”.

Judge David Kitchin said in the court’s written ruling that the case was “not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable”.

Thaler’s lawyers said in a statement that the ruling “establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines and as a consequence wholly inadequate in supporting any industry that relies on AI in the development of new technologies”.

‘LEGITIMATE QUESTIONS’

A spokesperson for the IPO welcomed the decision “and the clarification it gives as to the law as it stands in relation to the patenting of creations of artificial intelligence machines”.

They added that there are “legitimate questions as to how the patent system and indeed intellectual property more broadly should handle such creations” and the government will keep this area of law under review.

[…]

“The judgment does not preclude a person using an AI to devise an invention – in such a scenario, it would be possible to apply for a patent provided that person is identified as the inventor.”

In a separate case last month, London’s High Court ruled that artificial neural networks can attract patent protection under UK law.

Source: AI cannot be patent ‘inventor’, UK Supreme Court rules in landmark case | Reuters

Somehow it sits strangely that a company can be a ‘natural person’ but an AI cannot.

AI Act: French govt accused of being influenced by lobbyist with conflict of interests by senators in the pockets of copyright giants. Which surprises no one watching the AI Act process.

French senators criticised the government’s stance in the AI Act negotiations, particularly a lack of copyright protection and the influence of a lobbyist with alleged conflicts of interests, former digital state secretary Cédric O.

The EU AI Act is set to become the world’s first regulation of artificial intelligence. Since the emergence of AI models, such as GPT-4, used by the AI system ChatGPT, EU policymakers have been working on regulating these powerful “foundation” models.

“We know that Cédric O and Mistral influenced the French government’s position regarding the AI regulation bill of the European Commission, attempting to weaken it”, said Catherine Morin-Desailly, a centrist senator, during the government’s question time on Wednesday (20 December).

“The press reported on the spectacular enrichment of the former digital minister, Cédric O. He entered the company Mistral, where the interests of American companies and investment funds are prominently represented. This financial operation is causing shock within the Intergovernmental Committee on AI you have established, Madam Prime Minister,” she continued.

The accusations were vehemently denied by the incumbent Digital Minister Jean-Noël Barrot: “It is the High Authority for Transparency in Public Life that ensures the absence of conflicts of interest among former government members.”

Moreover, Barrot denied the allegations that France has been the spokesperson of private interests, arguing that the government “listened to all stakeholders as it is customary and relied solely on the general interest as our guiding principle.”

[…]

Barrot was criticised in a Senate hearing earlier the same day by Pascal Rogard, director of the Society of Dramatic Authors and Composers, who said that “for the first time, France, through the medium of Jean-Noël Barrot […] has neither supported culture, the creation industry, or copyrights.”

Morin-Desailly then said that she questioned the French stance on AI, which, in her view, is aligned with the position of US big tech companies.

Drawing a parallel between big tech’s position in this AI copyright debate and the Directive on Copyright in the Digital Single Market, Rogard said that since the latter came into force he had not “observed any damage to the [big tech]’s business activities.”

[…]

“Trouble was stirred by the renowned Cédric O, who sits on the AI Intergovernmental Committee and still wields a lot of influence, notably with the President of the Republic”, stated Morin-Desailly earlier the same day at the Senate hearing with Rogard. Other sitting Senators joined Morin-Desailly in criticising the French position, and O.

Given O’s influential position in the government, the High Authority for Transparency in Public Life forbade O, for a three-year span, from lobbying the government or owning shares in tech-sector companies.

Yet, according to Capital, O bought shares in Mistral AI through his consulting agency. Capital revealed O invested €176.1, a stake now valued at €23 million thanks to the company’s latest investment round in December.

Moreover, since September, O has sat on the Committee on generative artificial intelligence, advising the government on its position towards AI.

[…]


Source: AI Act: French government accused of being influenced by lobbyist with conflict of interests

The UK Government Should Not Let Copyright Stifle AI Innovation

As Walled Culture has often noted, the process of framing new copyright laws is tilted against the public in multiple ways. And on the rare occasions when a government makes some mild concession to anyone outside the copyright industry, the latter invariably rolls out its highly-effective lobbying machine to fight against such measures. It’s happening again in the world of AI. A post on the Knowledge Rights 21 site points to:

a U-turn by the British Government in February 2023, abandoning its prior commitment to introduce a broad copyright exception for text and data mining that would not have made an artificial distinction between non-commercial and commercial uses. Given that applied research so often bridges these two, treating them differently risks simply chilling innovative knowledge transfer and public institutions working with the private sector.

Unfortunately, and in the face of significant lobbying from the creative industries (something we see also in Washington, Tokyo and Brussels), the UK government moved away from clarifying language to support the development of AI in the UK.

In an attempt to undo some of the damage caused by the UK government’s retrograde move, a broad range of organizations, including Knowledge Rights 21, Creative Commons, and Wikimedia UK, have issued a public statement calling on the UK government to safeguard AI innovation as it draws up its new code of practice on copyright and AI. The statement points out that copyright is a serious threat to the development of AI in the UK, and that:

Whilst questions have arisen in the past which consider copyright implications in relation to new technologies, this is the first time that such debate risks entirely halting the development of a new technology.

The statement’s key point is as follows:

AI relies on analysing large amounts of data. Large-scale machine learning, in particular, must be trained on vast amounts of data in order to function correctly, safely and without bias. Safety is critical, as highlighted in the [recently agreed] Bletchley Declaration. In order to achieve the necessary scale, AI developers need to be able to use the data they have lawful access to, such as data that is made freely available to view on the open web or to which they already have access by agreement.

Any restriction on the use of such data or disproportionate legal requirements will negatively impact on the development of AI, not only inhibiting the development of large-scale AI in the UK but exacerbating further pre-existing issues caused by unequal access to data.

The organizations behind the statement note that restrictions imposed by copyright would create barriers to entry and raise costs for new entrants. There would also be serious knock-on effects:

Text and data mining techniques are necessary to analyse large volumes of content, often using AI, to detect patterns and generate insights, without needing to manually read everything. Such analysis is regularly needed across all areas of our society and economy, from healthcare to marketing, climate research to finance.
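For readers unfamiliar with the term, a minimal sketch of what text and data mining looks like in practice – the directory, file pattern and regular expressions here are invented for illustration:

```python
import re
from collections import Counter
from pathlib import Path

# Count how often (toy) drug names co-occur with adverse-event words
# across a corpus - the kind of pattern extraction done in healthcare
# research without anyone manually reading every document.
ADVERSE = re.compile(r"\b(nausea|headache|rash)\b", re.I)
DRUG = re.compile(r"\b[A-Z][a-z]+mab\b")  # toy pattern: antibody drug names

pairs = Counter()
for path in Path("corpus").glob("*.txt"):
    text = path.read_text(errors="ignore")
    drugs = set(DRUG.findall(text))
    events = set(m.lower() for m in ADVERSE.findall(text))
    pairs.update((d, e) for d in drugs for e in events)

for (drug, event), n in pairs.most_common(5):
    print(f"{drug} ~ {event}: {n} documents")
```

Nothing in that loop involves ‘reading’ the texts in the human sense, and it is exactly this kind of analysis that licensing requirements would price out of reach for new entrants.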

The statement concludes by making a number of recommendations to the UK government in order to ensure that copyright does not stifle the development of AI in the UK. The key ones concern access to the data sets that are vital for training AI and carrying out text and data mining. The organizations ask that the UK’s Code of Practice:

Clarifies that access to broad and varied data sets that are publicly available online remain available for analysis, including text and data mining, without the need for licensing.

Recognises that even without an explicit commercial text and data mining exception, exceptions and limits on copyright law exist that would permit text and data mining for commercial purposes.

Those are pretty minimal demands, but we can be sure that the copyright industry will fight them tooth and nail. For the companies involved, keeping everything involving copyright under their tight control is far more important than nurturing an exciting new technology with potentially huge benefits for everyone.

Source: The UK Government Should Not Let Copyright Stifle AI Innovation | Techdirt

Internet Archive: Digital Lending is Fair Use, Not Copyright Infringement – a library is a library, whether it’s paper or digital

In 2020, publishers Hachette, HarperCollins, John Wiley and Penguin Random House sued the Internet Archive (IA) for copyright infringement, equating its ‘Open Library’ to a pirate site.

IA’s library is a non-profit operation that scans physical books, which can then be lent out to patrons in an ebook format. Patrons can also borrow books that are scanned and digitized in-house, with technical restrictions that prevent copying.

Staying true to the centuries-old library concept, only one patron at a time can rent a digital copy of a physical book for a limited period.
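The ‘controlled’ part is a simple invariant: digital loans outstanding may never exceed the physical copies the library owns. A minimal sketch of that rule (the class and method names are mine, not IA’s actual software):

```python
class ControlledLending:
    """Enforce the CDL invariant: concurrent digital loans of a title
    never exceed the number of physical copies the library owns."""

    def __init__(self):
        self.owned = {}   # title -> physical copies held
        self.lent = {}    # title -> digital loans outstanding

    def add_copies(self, title, n=1):
        self.owned[title] = self.owned.get(title, 0) + n

    def borrow(self, title):
        if self.lent.get(title, 0) >= self.owned.get(title, 0):
            raise RuntimeError(f"all copies of {title!r} are on loan")
        self.lent[title] = self.lent.get(title, 0) + 1

    def return_book(self, title):
        self.lent[title] -= 1

library = ControlledLending()
library.add_copies("Moby-Dick")   # the library owns one print copy
library.borrow("Moby-Dick")       # first patron borrows the ebook
try:
    library.borrow("Moby-Dick")   # second patron must wait their turn
except RuntimeError as e:
    print(e)
```

Everything the publishers object to happens inside that invariant; scanning a book and posting it for unlimited simultaneous download would be a different operation entirely.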

Mass Copyright Infringement or Fair Use?

Not all rightsholders are happy with IA’s scanning and lending activities. The publishers are not against libraries per se, nor do they object to ebook lending, but ‘authorized’ libraries typically obtain an official license or negotiate specific terms. The Internet Archive has no license.

The publishers see IA’s library as a rogue operation that engages in willful mass copyright infringement, directly damaging their bottom line. As such, they want it taken down permanently.

The Internet Archive wholeheartedly disagreed with the copyright infringement allegations; it offers a vital service to the public, the Archive said, and built its legal defense on fair use.

After weighing the arguments from both sides, New York District Court Judge John Koeltl sided with the publishers. In March, the court granted their motion for summary judgment, which effectively means that the library is indeed liable for copyright infringement.

The judgment and associated permanent injunction effectively barred the library from reproducing or distributing digital copies of the ‘covered books’ without permission from rightsholders. These restrictions were subject to an eventual appeal, which was announced shortly thereafter.

Internet Archive Files Appeal Brief

Late last week, IA filed its opening brief at the Second Circuit Court of Appeals, asking it to reverse the lower court’s judgment. The library argues that the court erred by rejecting its fair use defense.

Whether IA has a fair use defense depends on how the four relevant factors are weighed. According to the lower court, these favor the publishers but the library vehemently disagrees. On the contrary, it believes that its service promotes the creation and sharing of knowledge, which is a core purpose of copyright.

“This Court should reverse and hold that IA’s controlled digital lending is fair use. This practice, like traditional library lending, furthers copyright’s goal of promoting public availability of knowledge without harming authors or publishers,” the brief reads.

A fair use analysis has to weigh the interests of both sides. The lower court did so, but IA argues that it reached the wrong conclusions, failing to properly account for the “tremendous public benefits” controlled digital lending offers.

No Competition

One of the key fair use factors at stake is whether IA’s lending program affects (i.e., threatens) the traditional ebook lending market. IA uses expert witnesses to argue that there’s no financial harm and further argues that its service is substantially different from the ebook licensing market.

IA offers access to digital copies of books, which is similar to licensed libraries. However, the non-profit organization argues that its lending program is not a substitute as it offers a fundamentally different service.

“For example, libraries cannot use ebook licenses to build permanent collections. But they can use licensing to easily change the selection of ebooks they offer to adapt to changing interests,” IA writes.

The licensing models make these libraries more flexible. However, they have to rely on the books offered by commercial aggregators and can’t add these digital copies to their archives.

“Controlled digital lending, by contrast, allows libraries to lend only books from their own permanent collections. They can preserve and lend older editions, maintaining an accurate historical record of books as they were printed.

“They can also provide access that does not depend on what Publishers choose to make available. But libraries must own a copy of each book they lend, so they cannot easily swap one book for another when interest or trends change,” IA adds.

Stakes are High

The arguments highlighted here are just a fraction of the 74-page opening brief, which goes into much more detail and ultimately concludes that the district court’s judgment should be reversed.

In a recent blog post, IA founder Brewster Kahle writes that if the lower court’s verdict stands, books can’t be preserved for future generations in digital form, in the same way that paper versions have been archived for centuries.

“This lawsuit is about more than the Internet Archive; it is about the role of all libraries in our digital age. This lawsuit is an attack on a well-established practice used by hundreds of libraries to provide public access to their collections.

“The disastrous lower court decision in this case holds implications far beyond our organization, shaping the future of all libraries in the United States and unfortunately, around the world,” Kahle concludes.

A copy of the Internet Archive’s opening brief, filed at the Second Circuit Court of Appeals, is available here (pdf)

Source: Internet Archive: Digital Lending is Fair Use, Not Copyright Infringement * TorrentFreak

Internet Archive Files Opening Brief In Its Appeal Of Book Publishers’ Wanton Attempt To Destroy It

A few weeks ago, publishing giant Penguin Random House (and, yes, I’m still confused why they didn’t call it Random Penguin House after the merger) announced that it was filing a lawsuit (along with many others) against the state of Iowa for its attempt to ban books in school libraries. In its announcement, Penguin Random House talked up the horrors of trying to limit access to books in schools and libraries:

The First Amendment guarantees the right to read and to be read, and for ideas and viewpoints to be exchanged without unreasonable government interference. By limiting students’ access to books, Iowa violates this core principle of the Constitution.

“Our mission of connecting authors and their stories to readers around the world contributes to the free flow of ideas and perspectives that is a hallmark of American Democracy—and we will always stand by it,” says Nihar Malaviya, CEO, Penguin Random House. “We know that not every book we publish will be for every reader, but we must protect the right for all Americans, including students, parents, caregivers, teachers, and librarians to have equitable access to books, and to continue to decide what they read.” 

That’s a very nice sentiment, and I’m glad that Penguin Random House is stating it, but it rings a little hollow, given that Penguin Random House is among the big publishers suing to shut down the Internet Archive, a huge and incredibly useful digital library that actually has the mission that Penguin Random House’s Nihar Malaviya claims is theirs: connecting authors and their stories to readers around the world, while contributing to the free flow of ideas and perspectives that are important to the world. And, believing in the importance of equitable access to books.

So, then, why is Penguin Random House trying to kill the Internet Archive?

While we knew this was coming, last week, the Internet Archive filed its opening brief before the 2nd Circuit appeals court to try to overturn the tragically terrible district court ruling by Judge John Koeltl. The filing is worth reading:

Publishers claim this public service is actually copyright infringement. They ask this Court to elevate form over substance by drawing an artificial line between physical lending and controlled digital lending. But the two are substantively the same, and both serve copyright’s purposes. Traditionally, libraries own print books and can lend each copy to one person at a time, enabling many people to read the same book in succession. Through interlibrary loans, libraries also share books with other libraries’ patrons. Everyone agrees these practices are not copyright infringement.

Controlled digital lending applies the same principles, while creating new means to support education, research, and cultural participation. Under this approach, a library that owns a print book can scan it and lend the digital copy instead of the physical one. Crucially, a library can loan at any one time only the number of print copies it owns, using technological safeguards to prevent copying, restrict access, and limit the length of loan periods.

Lending within these limits aligns digital lending with traditional library lending and fundamentally distinguishes it from simply scanning books and uploading them for anyone to read or redistribute at will. Controlled digital lending serves libraries’ mission of supporting research and education by preserving and enabling access to a digital record of books precisely as they exist in print. And it serves the public by enabling better and more efficient access to library books, e.g., for rural residents with distant libraries, for elderly people and others with mobility or transportation limitations, and for people with disabilities that make holding or reading print books difficult. At the same time, because controlled digital lending is limited by the same principles inherent in traditional lending, its impact on authors and publishers is no different from what they have experienced for as long as libraries have existed.

The filing makes the case that the Internet Archive’s use of controlled digital lending for eBooks is protected by fair use, leaning heavily on the idea that there is no evidence of harm to the copyright holders:

First, the purpose and character of the use favor fair use because IA’s controlled digital lending is noncommercial, transformative, and justified by copyright’s purposes. IA is a nonprofit charity that offers digital library services for free. Controlled digital lending is transformative because it expands the utility of books by allowing libraries to lend copies they own more efficiently and borrowers to use books in new ways. There is no dispute that libraries can lend the print copy of a book by mail to one person at a time. Controlled digital lending enables libraries to do the same thing via the Internet—still one person at a time. And even if this use were not transformative, it would still be favored under the first factor because it furthers copyright’s ultimate purpose of promoting public access to knowledge—a purpose libraries have served for centuries.

Second, the nature of the copyrighted works is neutral because the works are a mix of fiction and non-fiction and all are published.

Third, the amount of work copied is also neutral because copying the entire book is necessary: borrowing a book from a library requires access to all of it.

Fourth, IA’s lending does not harm Publishers’ markets. Controlled digital lending is not a substitute for Publishers’ ebook licenses because it offers a fundamentally different service. It enables libraries to efficiently lend books they own, while ebook licenses allow libraries to provide readers temporary access through commercial aggregators to whatever selection of books Publishers choose to make available, whether the library owns a copy or not. Two experts analyzed the available data and concluded that IA’s lending does not harm Publishers’ sales or ebook licensing. Publishers’ expert offered no contrary empirical evidence.

Weighing the fair use factors in light of copyright’s purposes, the use here is fair. In concluding otherwise, the district court misunderstood controlled digital lending, conflating it with posting an ebook online for anyone to access at any time. The court failed to grasp the key feature of controlled digital lending: the digital copy is available only to the one person entitled to borrow it at a time, just like lending a print book. This error tainted the district court’s analysis of all the factors, particularly the first and fourth. The court compounded that error by failing to weigh the factors in light of the purposes of copyright.

Not surprisingly, I agree with the Internet Archive’s arguments here, but these kinds of cases are always a challenge. Judges have this weird view of copyright law: they sometimes ignore the actual law, the purpose of the law, and the constitutional underpinnings of the law, and insist that the purpose of copyright law is to award copyright holders as much money and control as possible.

That’s not how copyright is supposed to work, but judges sometimes seem to forget that. Hopefully, the 2nd Circuit does not. The 2nd Circuit, historically, has been pretty good on fair use issues, so hopefully that holds in this case as well.

The full brief is (not surprisingly) quite well done and detailed and worth reading.

And now we’ll get to see whether Penguin Random House really supports “the free flow of ideas” or not…

Source: Internet Archive Files Opening Brief In Its Appeal Of Book Publishers’ Win | Techdirt

Internet Architecture Board hits out at US, EU, UK client-side scanning (spying on everything on your phone and PC all the time) plans – to save (heard it before?) the kids

[…]

Apple brought widespread attention to this so-called client-side scanning in August 2021 when it announced plans to examine photos on iPhones and iPads before they were synced to iCloud, as a safeguard against the distribution of child sexual abuse material (CSAM). Under that plan, if someone’s files were deemed to be CSAM, the user could lose their iCloud account and be reported to the cops.

As the name suggests, client-side scanning involves software on a phone or some other device automatically analyzing files for unlawful photos and other content, and then performing some action – such as flagging or removing the documents or reporting them to the authorities. At issue, primarily, is the loss of privacy from the identification process – how will that work with strong encryption, and do the files need to be shared with an outside service? Then there’s the reporting process – how accurate is it, is there any human intervention, and what happens if your gadget wrongly fingers you to the cops?
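Mechanically, these proposals boil down to a matching loop over a user’s files. A minimal sketch of the pattern (assumptions: the blocklist is hypothetical, and real deployments – Apple’s proposal used a perceptual hash called NeuralHash – match visually similar images rather than exact bytes, unlike the cryptographic hash below):

```python
import hashlib
from pathlib import Path

# Hypothetical operator-supplied database of digests of known illegal material.
BLOCKLIST = {"<hex digest of a known CSAM file>"}

def scan(directory: str) -> list[Path]:
    """Hash every local file and flag matches against the blocklist."""
    flagged = []
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in BLOCKLIST:
            flagged.append(path)  # in a real system: report to operator/authorities
    return flagged
```

Everything in that loop happens on the device itself, before encryption can protect the file anywhere else – which is exactly why critics call it a surveillance tool.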

The iGiant’s plan was pilloried by advocacy organizations and by customers on technical and privacy grounds. Ultimately Apple abandoned the effort and went ahead with offering iCloud encryption – a level of privacy that prompted political pushback at other tech titans.

Proposals for client-side scanning … mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the ’net glued together – thinks that’s a bad idea.

“A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression,” the IAB declared in a statement just before the weekend.

[…]

Specifically, the IAB cites Europe’s planned “Regulation laying down rules to prevent and combat child sexual abuse” (2022/0155(COD)), the UK Online Safety Act of 2023, and the US EARN IT Act, all of which contemplate regulatory regimes that have the potential to require the decryption of encrypted content in support of mandated surveillance.

The administrative body acknowledges the social harm done through the distribution of illegal content on the internet and the need to protect internet users. But it contends indiscriminate surveillance is not the answer.

The UK has already passed its Online Safety Act legislation, which authorizes telecom watchdog Ofcom to demand decryption of communications on grounds of child safety – though government officials have admitted that’s not technically feasible at the moment.

Europe, under fire for concealing who it consulted on client-side scanning, appears to be heading down this path, and the US seems to be following.

For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring.

“The IAB opposes technologies that foster surveillance as they weaken the user’s expectations of private communication which decreases the trust in the internet as the core communication platform of today’s society,” the organization wrote. “Mandatory client-side scanning creates a tool that is straightforward to abuse as a widespread facilitator of surveillance and censorship.”

[…]

Source: Internet Architecture Board hits out at client-side scanning • The Register

As soon as they take away privacy to save the kids, you know they will expand the remit, as governments have always done. The fact is that mass surveillance is not particularly effective, even with AI, except in making people feel watched and thus altering their behaviour. This feeling of always being spied upon is much, much worse for whole generations of children than the tiny number of sexual predators that might actually be caught.

Google Will Stop Telling Law Enforcement Which Users Were Near a Crime, will start saving location data on the mobile device instead of on its servers. But not really, though. And why?

So most of the breathless reporting on Google’s “Updates to Location History and new controls coming soon to Maps” reads a bit like the piece below. However, Google itself, in “Manage your Location History”, says that if you have Location History on, it will also save it to its servers. There is no mention of encryption.

Alphabet Inc.’s Google is changing its Maps tool so that the company no longer has access to users’ individual location histories, cutting off its ability to respond to law enforcement warrants that ask for data on everyone who was in the vicinity of a crime.
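The “geofence warrant” being cut off here is essentially a bulk query over that location store. A rough sketch of what such a warrant asks the data holder to run (the record layout and names are my assumptions):

```python
from datetime import datetime

def geofence_hits(records, lat_min, lat_max, lon_min, lon_max, start, end):
    """records: iterable of (device_id, lat, lon, timestamp) tuples."""
    return {
        device_id
        for device_id, lat, lon, ts in records
        if lat_min <= lat <= lat_max
        and lon_min <= lon <= lon_max
        and start <= ts <= end
    }

# Every device seen near the scene during the window, involved or not:
records = [("device-a", 52.370, 4.890, datetime(2023, 11, 17, 21, 5))]
print(geofence_hits(records, 52.36, 52.38, 4.88, 4.90,
                    datetime(2023, 11, 17, 20, 0), datetime(2023, 11, 17, 22, 0)))
```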

Google is changing its Location History feature on Google Maps, according to a blog post this week. The feature, which Google says is off by default, helps users remember where they’ve been. The company said Thursday that for users who have it enabled, location data will soon be saved directly on users’ devices, blocking Google from being able to see it, and, by extension, blocking law enforcement from being able to demand that information from Google.

“Your location information is personal,” said Marlo McGriff, director of product for Google Maps, in the blog post. “We’re committed to keeping it safe, private and in your control.”

The change comes three months after a Bloomberg Businessweek investigation that found police across the US were increasingly using warrants to obtain location and search data from Google, even for nonviolent cases, and even for people who had nothing to do with the crime.

“It’s well past time,” said Jennifer Lynch, the general counsel at the Electronic Frontier Foundation, a San Francisco-based nonprofit that defends digital civil liberties. “We’ve been calling on Google to make these changes for years, and I think it’s fantastic for Google users, because it means that they can take advantage of features like location history without having to fear that the police will get access to all of that data.”

Google said it would roll out the changes gradually through the next year on its own Android and Apple Inc.’s iOS mobile operating systems, and that users will receive a notification when the update comes to their account. The company won’t be able to respond to new geofence warrants once the update is complete, including for people who choose to save encrypted backups of their location data to the cloud.

“It’s a good win for privacy rights and sets an example,” said Jake Laperruque, deputy director of the security and surveillance project at the Center for Democracy & Technology. The move validates what litigators defending the privacy of location data have long argued in court: that just because a company might hold data as part of its business operations, that doesn’t mean users have agreed the company has a right to share it with a third party.
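The mention of encrypted backups is the load-bearing detail: if the key is generated and held only on the handset, the server stores ciphertext it cannot read, so there is nothing useful to hand over. A minimal sketch of that client-side pattern (my illustration – Google has not published its actual scheme), using the Python cryptography library:

```python
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()  # generated and kept on the device only
cipher = Fernet(device_key)

location_history = b'[{"lat": 52.37, "lon": 4.89, "ts": "2023-12-14T09:00Z"}]'
backup_blob = cipher.encrypt(location_history)  # this opaque blob is what gets uploaded

# Only a device holding device_key can recover the history:
assert cipher.decrypt(backup_blob) == location_history
```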

Lynch, the EFF lawyer, said that while Google deserves credit for the move, it’s long been the only tech company that the EFF and other civil-liberties groups have seen responding to geofence warrants. “It’s great that Google is doing this, but at the same time, nobody else has been storing and collecting data in the same way as Google,” she said. Apple, which also has an app for Maps, has said it’s technically unable to supply the sort of location data police want.

There’s still another kind of warrant that privacy advocates are concerned about: so-called reverse keyword search warrants, where police can ask a technology company to provide data on the people who have searched for a given term. “Search queries can be extremely sensitive, even if you’re just searching for an address,” Lynch said.

Source: Google Will Stop Telling Law Enforcement Which Users Were Near a Crime

The question is – why now? The market for location data is estimated at around $12 billion (source: There’s a Murky Multibillion-Dollar Market for Your Phone’s Location Data). If you look even a tiny little bit, you see the government asking for it all the time, and the fines issued for breaching location data privacy seem tiny compared to the money made by selling it.

Google will also be changing the name of Location History to Timeline – and will still be saving your location to its servers (see the heading “When Location History is on” below):

Manage your Location History

In the coming months, the Location History setting name will change to Timeline. If Location History is turned on for your account, you may find Timeline in your app and account settings.

Location History is a Google Account setting that creates Timeline, a personal map that helps you remember:

  • Places you go
  • Routes to destinations
  • Trips you take

It can also give you personalized experiences across Google based on where you go.

When Location History is on, even when Google apps aren’t in use, your precise device location is regularly saved to:

  • Your devices
  • Google servers

To make Google experiences helpful for everyone, we may use your data to:

  • Show information based on anonymized location data, such as:
    • Popular times
    • Environmental insights
  • Detect and prevent fraud and abuse.
  • Improve and develop Google services, such as ads products.
  • Help businesses determine if people visit their stores because of an ad, if you have Web & App Activity turned on.
    • We share only anonymous estimates, not personal data, with businesses.
    • This activity can include info about your location from your device’s general area and IP address.

Learn more about how Google uses location data.

Things to know about Location History:

  • Location History is off by default. We can only use it if you turn Location History on.
  • You can turn off Location History at any time in your Google Account’s Activity controls.
  • You can review and manage your Location History. You can:
    • Review places you’ve been in Google Maps Timeline.
    • Edit or delete your Location History anytime.

Important: Some of these steps work only on Android 8.0 and up. Learn how to check your Android version.

Turn Location History on or off

You can turn off Location History for your account at any time. If you use a work or school account, your administrator needs to make this setting available for you. If they do, you’ll be able to use Location History as any other user.

  1. Go to the “Location History” section of your Google Account.
  2. Choose whether your account or your devices can report Location History to Google.
    • Your account and all devices: At the top, turn Location History on or off.
    • Only a certain device: Under “This device” or “Devices on this account,” turn the device on or off.

When Location History is on

Google can estimate your location with:

  • Signals like Wi-Fi and mobile networks
  • GPS
  • Sensor information

Your device location may also periodically be used in the background. When Location History is on, even when Google apps aren’t in use, your device’s precise location is regularly saved to:

  • Your devices
  • Google servers

When you’re signed in with your Google Account, it saves the Location History of each device with the setting “Devices on this account” turned on. You can find this setting in the Location History settings on your Google Account.

You can choose which devices provide their location data to Location History. Your settings don’t change for other location services on your device.

When Location History is off

Your device doesn’t save its location to your Location History.

  • You may have previous Location History data in your account. You can manually delete it anytime.
  • Your settings don’t change for other location services on your device.
  • If settings like Web and App Activity are on but you turn off Location History or delete location data from Location History, your Google Account may still save location data as part of your use of other Google sites, apps, and services. This activity can include info about your location from your device’s general area and IP address.

Delete Location History

You can manage and delete your Location History information with Google Maps Timeline. You can choose to delete all of your history, or only parts of it.

Important: When you delete Location History information from Timeline, you won’t be able to see it again.

Automatically delete your Location History

You can choose to automatically delete Location History that’s older than 3 months, 18 months, or 36 months.
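For illustration (my sketch, not Google’s code): the auto-delete setting boils down to a periodic prune of anything older than the chosen window, roughly like this, approximating months as 30 days:

```python
from datetime import datetime, timedelta

def prune(history, months: int, now=None):
    """history: list of (timestamp, record) pairs; months: 3, 18, or 36."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=30 * months)
    return [(ts, rec) for ts, rec in history if ts >= cutoff]
```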

What happens after you delete some or all Location History

If you delete some or all of your Location History, personalized experiences across Google may degrade or be lost. For example, you may lose:

  • Recommendations based on places you visit
  • Real-time information about when best to leave for home or work to beat traffic

Important: If you have other settings like Web & App Activity turned on and you pause Location History or delete location data from Location History, you may still have location data saved in your Google Account as part of your use of other Google sites, apps, and services. For example, location data may be saved as part of activity on Search and Maps when your Web & App Activity setting is on, and included in your photos depending on your camera app settings. Web & App Activity can include info about your location from your device’s general area and IP address.

Learn about use & diagnostics for Location History

After you turn on Location History, your device may send diagnostic information to Google about what works or doesn’t work for Location History. Google processes any information it collects under Google’s privacy policy.


Learn more about other location settings

Source: Manage your Location History


Copyright Troll Porn Company Makes Millions By Shaming Potential Porn Consumers

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, he writes that a Southern California maker of pornographic films named Strike 3 Holdings is also “a copyright troll,” according to U.S. Judge Royce C. Lamberth, who wrote in 2018: “Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM.” He likened its litigation strategy to a “high-tech shakedown.” Lamberth was not speaking off the cuff. Since September 2017, Strike 3 has filed more than 12,440 lawsuits in federal courts alleging that defendants infringed its copyrights by downloading its movies via BitTorrent, an online service on which unauthorized content can be accessed by almost anyone with a computer and internet connection.

That includes 3,311 cases the firm filed this year, more than 550 in federal courts in California. On some days, scores of filings reach federal courthouses — on Nov. 17, to select a date at random, the firm filed 60 lawsuits nationwide… Typically, they are settled for what lawyers say are cash payments in the four or five figures or are dismissed outright…

It’s impossible to pinpoint the profits that can be made from this courthouse strategy. J. Curtis Edmondson, a Portland, Oregon, lawyer who is among the few who pushed back against a Strike 3 case and won, estimates that Strike 3 “pulls in about $15 million to $20 million a year from its lawsuits.” That would make the cases “way more profitable than selling their product….” If only one-third of its more than 12,000 lawsuits produced settlements averaging as little as $5,000 each, the yield would come to $20 million… The volume of Strike 3 cases has increased every year — from 1,932 in 2021 to 2,879 last year and 3,311 this year.
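The back-of-the-envelope math in that estimate checks out:

```python
lawsuits = 12_000        # the article's "more than 12,000" filings since 2017
settle_rate = 1 / 3      # assume only one-third of cases settle
avg_settlement = 5_000   # dollars: the low end of the reported four-to-five figures

print(f"${lawsuits * settle_rate * avg_settlement:,.0f}")  # $20,000,000
```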

What’s really needed is a change in copyright law to bring the statutory damages down to a level that truly reflects the value of a film lost because of unauthorized downloading — not $750 or $150,000 but perhaps a few hundred dollars.

None of the lawsuits go to trial. Instead, ISPs get a subpoena demanding the real-world address and name behind IP addresses “ostensibly used to download content from BitTorrent…” according to the article. Strike 3 will then “proceed by sending a letter implicitly threatening the subscriber with public exposure as a pornography viewer and explicitly with the statutory penalties for infringement written into federal copyright law — up to $150,000 for each example of willful infringement and from $750 to $30,000 otherwise.”

A federal judge in Connecticut wrote last year that “Given the nature of the films at issue, defendants may feel coerced to settle these suits merely to prevent public disclosure of their identifying information, even if they believe they have been misidentified.”

Source: Copyright Troll’ Porn Company ‘Makes Millions By Shaming Porn Consumers’ (yahoo.com)

Artificial intelligence and copyright – WIPO

[…]

Robotic artists have been involved in various types of creative works for a long time. Since the 1970s computers have been producing crude works of art, and these efforts continue today. Most of these computer-generated works of art relied heavily on the creative input of the programmer; the machine was at most an instrument or a tool, very much like a brush or canvas.

[…]

When applied to art, music and literary works, machine learning algorithms are actually learning from input provided by programmers. They learn from these data to generate a new piece of work, making independent decisions throughout the process to determine what the new work looks like. An important feature for this type of artificial intelligence is that while programmers can set parameters, the work is actually generated by the computer program itself – referred to as a neural network – in a process akin to the thought processes of humans.

[…]

Creating works using artificial intelligence could have very important implications for copyright law. Traditionally, the ownership of copyright in computer-generated works was not in question because the program was merely a tool that supported the creative process, very much like a pen and paper. Creative works qualify for copyright protection if they are original, with most definitions of originality requiring a human author. Most jurisdictions, including Spain and Germany, state that only works created by a human can be protected by copyright.

But with the latest types of artificial intelligence, the computer program is no longer a tool; it actually makes many of the decisions involved in the creative process without human intervention.

Commercial impact

One could argue that this distinction is not important, but the manner in which the law tackles new types of machine-driven creativity could have far-reaching commercial implications. Artificial intelligence is already being used to generate works in music, journalism and gaming. These works could in theory be deemed free of copyright because they are not created by a human author. As such, they could be freely used and reused by anyone. That would be very bad news for the companies selling the works.

[…]

If developers doubt whether creations generated through machine learning qualify for copyright protection, what is the incentive to invest in such systems? On the other hand, deploying artificial intelligence to handle time-consuming endeavors could still be justified, given the savings accrued in personnel costs, but it is too early to tell.

[…]

There are two ways in which copyright law can deal with works where human interaction is minimal or non-existent. It can either deny copyright protection for works that have been generated by a computer or it can attribute authorship of such works to the creator of the program.

[…]

Should the law recognize the contribution of the programmer or the user of that program? In the analogue world, this is like asking whether copyright should be conferred on the maker of a pen or the writer. Why, then, could the existing ambiguity prove problematic in the digital world? Take the case of Microsoft Word. Microsoft developed the Word computer program but clearly does not own every piece of work produced using that software. The copyright lies with the user, i.e. the author who used the program to create his or her work. But when it comes to artificial intelligence algorithms that are capable of generating a work, the user’s contribution to the creative process may simply be to press a button so the machine can do its thing.

[…]

Monumental advances in computing and the sheer amount of available computational power may well make the distinction moot; when you give a machine the capacity to learn styles from large datasets of content, it will become ever better at mimicking humans. And given enough computing power, soon we may not be able to distinguish between human-generated and machine-generated content. We are not yet at that stage, but if and when we do get there, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention.

[…]


Source: Artificial intelligence and copyright

It’s interesting to read that in 2017 the training material used was considered irrelevant to the output – as it should be. The books and art that go into AIs are just like the books and art that go into humans. The derived works that AIs and humans make belong to them, not to the content they are based on. And just because an AI – just like a human – can quote the original source material doesn’t change that.

Things That Make No Sense: Epic Lost Its Fight Over Apple’s Closed iOS Platform, But Won It Over Google’s More Open Android Platform

When Epic went after both Apple and Google a few years ago with antitrust claims regarding the need to go through their app stores to get on phones, we noted that it seemed more like negotiation-by-lawsuit. Both Apple and Google have cut some deals with larger companies to lower the 30% cut the companies take on app payments, and it seemed like these lawsuits were just an attempt to get leverage. That was especially true with regards to the complaint against Google, given that it’s much, much easier to route around the Google Play Store and get apps onto an Android phone.

Google allows sideloading. Google allows third party app stores. While it may discourage those things, Android is way more open than iOS, where you really can’t get your app on the phone unless Apple says you can.

Still, it was little surprise that Apple mostly won at a bench trial in 2021. Or that the 9th Circuit upheld the victory earlier this year. The 9th Circuit made it clear that Apple is free to set whatever rules it wants to play in its ecosystem.

Given all that, I had barely paid attention to the latest trial, which was basically the same case against Google. But, rather than a bench trial, this one was a jury trial. And, juries, man, they sure can be stupid sometimes.

The jury sided with Epic against Google.

That leaves things in a very, very weird stance. Apple, whose system is much more closed off, and which denies third parties any ability to get on the phone without Apple’s permission, is… fine and dandy. Whereas Google, which may discourage, but does allow, third party apps and third party app stores… is somehow a monopolist?

It’s hard to see how that state of affairs makes any sense at all.

Google has said it will appeal, but overturning jury rulings is… not easy.

That said, even if the ruling is upheld… it might not be such a bad thing. Epic has said that it’s not asking for money, but rather to have it made clear that Epic can launch its own app stores without restriction from Google, along with the freedom to use its own billing system.

And, uh, yeah. Epic should be able to do that. Having more app stores and more alternatives on app payments would be a good thing for everyone except Google, and that’s good.

So I don’t necessarily have a problem with the overall outcome. I’m just confused how these two rulings can possibly be considered consistent, or how they give any guidance whatsoever to others. I mean, one takeaway is that if you’re creating an ecosystem for 3rd party apps, you’re better off taking the closed Apple route. And, that would be bad.

Source: Things That Make No Sense: Epic Lost Its Fight Over Apple’s Closed iOS Platform, But Won It Over Google’s More Open Android Platform | Techdirt

MEPs exclude audiovisual sector in geo-blocking regulation reassessment – Sabine Verheyen shows whose pocket she is in.

In 2018, the European Parliament voted to ban geo-blocking, meaning blocking access to a network based on someone’s location. Geo-blocking systems block or authorise access to content based on where the user is located.
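The mechanism being regulated here is mundane: look up which country a visitor’s IP address belongs to and gate the catalogue on it. A minimal sketch (the lookup table is a stand-in for a licensed GeoIP database, and the networks and country set below are made up):

```python
import ipaddress

# Stand-in GeoIP table; a real service would use a commercial database.
GEOIP = {
    ipaddress.ip_network("145.94.0.0/16"): "NL",
    ipaddress.ip_network("91.198.174.0/24"): "DE",
}
LICENSED = {"NL", "BE"}  # hypothetical licensing footprint for some content

def country_of(ip: str):
    addr = ipaddress.ip_address(ip)
    for network, country in GEOIP.items():
        if addr in network:
            return country
    return None

def allow_stream(client_ip: str) -> bool:
    return country_of(client_ip) in LICENSED

print(allow_stream("145.94.12.34"))  # True  - request appears Dutch
print(allow_stream("91.198.174.5"))  # False - geo-blocked
```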

On Wednesday, following a 2020 evaluation by the Commission on the regulation, MEPs advocated for reassessing geo-blocking, taking into account increased demand for online shopping in recent years.

Polish MEP Beata Mazurek from the Conservative group, who was the rapporteur for the file, said ahead of the vote in her speech that “the geo-blocking regulation will remove unjustified barriers for consumers and companies working within the single market”.

“We need to do something when it comes to online payments and stop discrimination on what your nationality happens to be or where you happen to live. When internet purchases are being made, barriers need to be removed. We need to have a complete right to access a better selection of goods and services through Europe,” she said.

While the original text of the regulation banned geo-blocking, due to discrimination, for example, as Mazurek pointed out, a new amendment goes against this, saying this would result in revenue loss and higher prices for consumers.

The new legislation approved by the European Parliament requires websites to sell their goods throughout the EU regardless of the country the buyer resides in. It could apply to online cultural content like music streaming and ebooks within two years.

Audiovisual content

According to Mazurek, fighting price discrimination entails making deliveries easier across borders and making movies, series, and sporting events accessible in one’s native language.

“The Commission should carefully assess the options for updating the current rules and provide the support the audiovisual sector needs,” she added.

However, in a last-minute amendment adopted during the plenary vote, MEP Sabine Verheyen, an influential member of the Parliament’s culture committee, completely flipped the wording that applies to the audiovisual sector, such as the streaming of platforms’ films.

According to Verheyen’s amendment, removing geo-blocking in this area “would result in a significant loss of revenue, putting investment in new content at risk, while eroding contractual freedom and reducing cultural diversity in content production, distribution, promotion and exhibition”.

It also emphasises that the inclusion would result “in fewer distribution channels”, and so, ultimately, consumers would have to pay more.

Mazurek said before the vote that while the report deals with audiovisual material, they “would like to see this done in a step-by-step way, bearing in mind the particular circumstances pertaining to the creative sector”.

“We want to look at the position of the interested parties without threatening the way cultural projects are financed. That might be regarded as a revolutionary approach, but we need to look at technological progress and the consumer needs which have changed over the last few years,” the MEP explained.

Yet Wednesday’s vote on this specific amendment means the opposite of what the original regulation intended, with lawmakers now being against ending geo-blocking for audiovisual material.

Grégoire Polad, Director General of the Association of Commercial Television and Video on Demand Services in Europe (ACT), stressed that the European Parliament and the EU Council of Ministers “have now made it abundantly clear that there is no political support for any present or future inclusion of the audiovisual sector in the scope of the Geo-blocking regulation.”


However, the European Consumer Organisation threw its weight against the carve-out for the audiovisual and creative sectors in the regulation, calling on policymakers to make audiovisual content available across borders.

A Commission spokesperson told Euractiv that they are aware of the “ongoing debate” and “will carefully analyse its content, including proposals related to the audiovisual content”, once it is adopted.

“The Commission engaged in a dialogue with the audiovisual sector aimed at identifying industry-led solutions to improve the availability and cross-border access to audiovisual content across the EU,” the spokesperson explained.

This stakeholder dialogue ended in December 2022, and the Commission will consider its conclusions in the upcoming stocktaking exercise on the Geo-blocking Regulation.

Source: MEPs exclude audiovisual sector in geo-blocking regulation reassessment – EURACTIV.com

Strangely enough, this is the one sector that is wholly digital and therefore the one where geo-blocking makes the least sense: digital goods move globally at exactly the same cost, whereas physical goods need different logistics chains, in which the final step to the consumer is only a tiny part. For physical goods, the logistical steps before an order leaves the warehouse mean that geography actually can have a measurable effect on cost.

The movie / TV / digital rights bozos definitely have a big lobby on this one, and it shows the corruption – or outright stupidity – in the EP. Yes, Sabine Verheyen, you must be one or the other.

US Law enforcement can obtain prescription records from pharmacy giants without a warrant

America’s eight largest pharmacy providers shared customers’ prescription records with law enforcement when faced with subpoena requests, The Washington Post reported Tuesday. The news arrives amid patients’ growing privacy concerns in the wake of the Supreme Court’s 2022 overturn of Roe v. Wade.

The new look into the legal workarounds was first detailed in a letter sent by Sen. Ron Wyden (D-OR) and Reps. Pramila Jayapal (D-WA) and Sara Jacobs (D-CA) on December 11 to the secretary of the Department of Health and Human Services.

Pharmacies can hand over detailed, potentially compromising information due to legal fine print. Health Insurance Portability and Accountability Act (HIPAA) regulations restrict patient data sharing between “covered entities” like doctor offices, hospitals, and other medical facilities—but these guidelines are looser for pharmacies. And while search warrants require a judge’s approval to serve, subpoenas do not.

[…]

Given each company’s national network, patient records are often shared interstate between any pharmacy location. This could become legally fraught for medical history access within states that already have—or are working to enact—restrictive medical access laws. In an essay written for The Yale Law Journal last year, cited by WaPo, University of Connecticut associate law professor Carly Zubrzycki argued, “In the context of abortion—and other controversial forms of healthcare, like gender-affirming treatments—this means that cutting-edge legislative protections for medical records fall short.”

[…]

Source: Law enforcements can obtain prescription records from pharmacy giants without a warrant | Popular Science

Italian “Piracy Shield” Instant Fascist Takedown Orders Apply to All ISPs, DNS & VPN Providers & Google

Italy’s Piracy Shield anti-piracy system reportedly launched last week, albeit in limited fashion.

Whether the platform had any impact on pirate IPTV providers offering the big game last Friday is unclear but plans supporting a full-on assault are pressing ahead.

[…]

When lawmakers gave Italy’s new blocking regime the green light during the summer, the text made it clear that blocking instructions would not be limited to regular ISPs. The relevant section (Paragraph 5 Art. 2) is reproduced below:

[Image: italy - All must block]

The document issued by AGCOM acts as a clear reminder of the above and specifically highlights that VPN and DNS providers are no exception.

“[A]ll parties in any capacity involved in the accessibility of illegally disseminated content – and therefore also, by way of example and not limitation – VPN and open DNS service providers, will have to execute the blocks requested by the Authority [AGCOM] including through accreditation to the Piracy Shield platform or otherwise implementing measures that prevent the user from reaching that content,” the notice reads.
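For a DNS provider, “executing the blocks” reduces to refusing to resolve listed names. A minimal sketch (the blocked domain is hypothetical; a real resolver would answer NXDOMAIN or a sinkhole address rather than raise an exception):

```python
import socket

# Hypothetical authority-supplied list of domains ordered blocked.
BLOCKLIST = {"pirate-iptv.example"}

def resolve(hostname: str) -> str:
    blocked = hostname in BLOCKLIST or hostname.endswith(
        tuple("." + domain for domain in BLOCKLIST)
    )
    if blocked:
        raise LookupError("NXDOMAIN (blocked by AGCOM order)")
    return socket.gethostbyname(hostname)  # normal resolution for everything else
```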

Whether the DNS provider requirement will be affected by Cloudflare’s recent win over Sony in Germany is unclear. The decision was grounded in EU law and Cloudflare has already signaled that it will push back against any future blocking demands.

[…]

The relevant section of the new law is in some ways even more broad when it comes to search engines such as Google. Whether they are directly involved in accessibility or not, they’re still required to take action.

[Image: italy - search block]

AGCOM suggests that Google understands its obligations and is also prepared to take things further. The company says it will deindex offending platforms from search and also remove their ability to advertise.

“Since this is a dynamic blocking, the search engine therefore undertakes to perform de-indexing of all websites/telematic addresses that are the subject of subsequent reports that can also be communicated by rights holders accredited to the platform,” AGCOM writes.

[…]

Source: Piracy Shield: IPTV Blocking Orders Apply to All DNS & VPN Providers * TorrentFreak

Wow. This means rights holders can force an ISP, VPN provider, DNS host and Google to shut down shit without explanation or recourse within 30 minutes. That’s pretty totalitarian.

Proposed US surveillance regime makes anyone with a modem a Big Brother spy. Choice is between full-on spying and full-on spying.

Under rules being considered, any telecom service provider or business with custodial access to telecom equipment – a hotel IT technician, an employee at a cafe with Wi-Fi, or a contractor responsible for installing a home broadband router – could be compelled to enable electronic surveillance. And this would apply not only to those involved with data transit and data storage.

This week, the US House of Representatives is expected to conduct a floor vote on two bills that reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), which is set to expire in 2024.

Section 702, as The Register noted last week, permits US authorities to intercept the electronic communications of people outside the US for foreign intelligence purposes – without a warrant – even if that communication involves US citizens and permanent residents.

As the Electronic Frontier Foundation argues, Section 702 has allowed the FBI to conduct invasive, warrantless searches of protesters, political donors, journalists, and even members of Congress.

More than a few people would therefore be perfectly happy if the law lapsed – on the other hand, law enforcement agencies insist they need Section 702 to safeguard national security.

The pending vote is expected to be conducted under “Queen-of-the-Hill Rules,” which in this instance might also be described as “Thunderdome” – two bills enter, one bill leaves, with the survivor advancing to the US Senate for consideration. The prospect that neither would be approved and Section 702 would lapse appears … unlikely.

The two bills are: HR 6570, the Protect Liberty and End Warrantless Surveillance Act; and HR 6611, the FISA Reform and Reauthorization Act (FRRA) of 2023.

The former reauthorizes Section 702, but with strong civil liberties and privacy provisions. The civil rights community has lined up to support it.

As for the latter, Elizabeth Goitein, co-director of the Liberty and National Security Program at legal think tank the Brennan Center for Justice, explained that the FRRA changes the definition of electronic communication service provider (ECSP) in a way that expands the range of businesses required to share data with the US.

“Going forward, it would not just be entities that have direct access to communications, like email and phone service providers, that could be required to turn over communications,” argues a paper prepared by the Brennan Center. “Any business that has access to ‘equipment’ on which communications are stored and transmitted would be fair game.”

According to Goitein, the bill’s sponsors have denied the language is intended to be interpreted so broadly.

A highly redacted FISA Court of Review opinion [PDF], released a few months ago, showed that the government has already pushed the bounds of the definition.

The court document discussed a petition to compel an unidentified entity to conduct surveillance. The petition was denied because the entity did not satisfy the definition of “electronic communication service provider,” and was instead deemed to be a provider of a product or service. That definition may change, it seems.

Goitein is not alone in her concern about the ECSP definition. She noted that a FISA Court amicus – the law firm ZwillGen – has taken the unusual step of speaking out against the expanded definition of an ECSP.

In an assessment published last week, ZwillGen attorneys Marc Zwillinger and Steve Lane raised concerns about the FRRA covering a broad set of businesses and their employees.

“By including any ‘service provider’ – rather than any ‘other communication service provider’ – that has access not just to communications, but also to the ‘equipment that is being or may be used to transmit or store … communications,’ the expanded definition would appear to cover datacenters, colocation providers, business landlords, shared workspaces, or even hotels where guests connect to the internet,” they explained. They added that the addition of the term “custodian” to the service provider definition makes it apply to any third party providing equipment, storage – or even cleaning services.

The Brennan Center paper also raised other concerns – like the exemption for members of Congress from such surveillance. The FRRA bill requires the FBI to get permission from a member of Congress when it wants to conduct a query of their communications. No such courtesy is afforded to the people these members of Congress represent.

Goitein urged Americans to contact their representative and ask for a “no” vote on the FRRA and a “yes” on HR 6570, the Protect Liberty and End Warrantless Surveillance Act. ®

Source: Proposed US surveillance regime would enlist more businesses • The Register

Bad genes: 23andMe leak highlights a possible future of genetic discrimination

23andMe is a terrific concept. In essence, the company takes a sample of your DNA and tells you about your genetic makeup. For some of us, this is the only way to learn about our heritage. Spotty records, diaspora, mistaken family lore and slavery can make tracing one’s roots incredibly difficult by traditional methods.

What 23andMe does is wonderful because your DNA is fixed. Your genes tell a story that supersedes any rumors that you come from a particular country or are descended from so-and-so.

[…]

You can replace your Social Security number, albeit with some hassle, if it is ever compromised. You can cancel your credit card with the click of a button if it is stolen. But your DNA cannot be returned for a new set — you just have what you are given. If bad actors steal or sell your genetic information, there is nothing you can do about it.

This is why 23andMe’s Oct. 6 data leak, although it reads like science fiction, is not an omen of some dark future. It is, rather, an emblem of our dangerous present.

23andMe has a very simple interface with some interesting features. “DNA Relatives” matches you with other members to whom you are related. This could be an effective, thoroughly modern way to connect with long-lost family, or to learn more about your origins.

But the Oct. 6 leak perverted this feature into something alarming. By gaining access to individual accounts through weak and recycled passwords, hackers were able to create an extensive list of people with Ashkenazi heritage. This list was then posted on forums with the names, sex and likely heritage of each member under the title “Ashkenazi DNA Data of Celebrities.”

First and foremost, collecting lists of people based on their ethnic backgrounds is a personal violation with tremendously insidious undertones. If you saw yourself and your extended family on such a list, you would not take it lightly.

[…]

I find it troubling because, in 2018, Time reported that 23andMe had sold a $300 million stake in its business to GlaxoSmithKline, allowing the pharmaceutical giant to use users’ genetic data to develop new drugs. So because you wanted to know if your grandmother was telling the truth about your roots, you spat into a cup and paid 23andMe to give your DNA to a drug company to do with it as they please.

Although 23andMe is in the crosshairs of this particular leak, there are many companies in murky waters. Last year, Consumer Reports found that 23andMe and its competitors had decent privacy policies where DNA was involved, but that these businesses “over-collect personal information about you and overshare some of your data with third parties…CR’s privacy experts say it’s unclear why collecting and then sharing much of this data is necessary to provide you the services they offer.”

[…]

As it stands, your DNA can be weaponized against you by law enforcement, insurance companies, and big pharma. But this will not be limited to you. Your DNA belongs to your whole family.

Pretend that you are going up against one other candidate for a senior role at a giant corporation. If one of these genealogy companies determines that you are at an outsized risk for a debilitating disease like Parkinson’s and your rival is not, do you think that this corporation won’t take that into account?

[…]

Insurance companies are not in the business of losing money either. If they gain access to something like that on your record, you can trust that they will use it to blackball you or jack up your rates.

In short, the world risks becoming like that of the film Gattaca, where the genetic elite enjoy access while those deemed genetically inferior are marginalized.

The train has left the station for a lot of these issues. For the people on the list from the 23andMe leak, there is no putting the genie back in the bottle. If your DNA is on a server for one of these companies, there is a chance that it has already been used as a reference or to help pharmaceutical companies.

[…]

There are things you can do now to avoid further damage. The next time a company asks for something like your phone number or SSN, press them as to why they need it. Make it inconvenient for them to mine you for your Personally Identifiable Information (PII). Your PII has concrete value to these places, and they count on people to be passive, to hand it over without any fuss.

[…]

The time to start worrying about this problem was 20 years ago, but we can still effect positive change today. This 23andMe leak is only the beginning; we must do everything possible to protect our identities and DNA while they still belong to us.

Source: Bad genes: 23andMe leak highlights a possible future of genetic discrimination | The Hill

Scientific American has been warning about this since at least 2013. What have we done? Nothing:

If there’s a gene for hubris, the 23andMe crew has certainly got it. Last Friday the U.S. Food and Drug Administration (FDA) ordered the genetic-testing company immediately to stop selling its flagship product, its $99 “Personal Genome Service” kit. In response, the company cooed that its “relationship with the FDA is extremely important to us” and continued hawking its wares as if nothing had happened. Although the agency is right to sound a warning about 23andMe, it’s doing so for the wrong reasons.

Since late 2007, 23andMe has been known for offering cut-rate genetic testing. Spit in a vial, send it in, and the company will look at thousands of regions in your DNA that are known to vary from human to human—and which are responsible for some of our traits.

[…]

Everything seemed rosy until, in what a veteran Forbes reporter calls “the single dumbest regulatory strategy [he had] seen in 13 years of covering the Food and Drug Administration,” 23andMe changed its strategy. It apparently blew through its FDA deadlines, effectively annulling the clearance process, and abruptly cut off contact with the agency in May. Adding insult to injury the company started an aggressive advertising campaign (“Know more about your health!”)

[…]

But as the FDA frets about the accuracy of 23andMe’s tests, it is missing their true function, and consequently the agency has no clue about the real dangers they pose. The Personal Genome Service isn’t primarily intended to be a medical device. It is a mechanism meant to be a front end for a massive information-gathering operation against an unwitting public.

Sound paranoid? Consider the case of Google. (One of the founders of 23andMe, Anne Wojcicki, is presently married to Sergey Brin, a co-founder of Google.) When it first launched, Google billed itself as a faithful servant of the consumer, a company devoted only to building the best tool to help us satisfy our cravings for information on the web. And Google’s search engine did just that. But as we now know, the fundamental purpose of the company wasn’t to help us search, but to hoard information. Every search query entered into its computers is stored indefinitely. Joined with information gleaned from cookies that Google plants in our browsers, along with personally identifiable data that dribbles from our computer hardware and from our networks, and with the amazing volumes of information that we always seem willing to share with perfect strangers—even corporate ones—that data store has become Google’s real asset.

[…]

23andMe reserves the right to use your personal information—including your genome—to inform you about events and to try to sell you products and services. There is a much more lucrative market waiting in the wings, too. One could easily imagine how insurance companies and pharmaceutical firms might be interested in getting their hands on your genetic information, the better to sell you products (or deny them to you).

[…]

Even though 23andMe currently asks permission to use your genetic information for scientific research, the company has explicitly stated that its database-sifting scientific work “does not constitute research on human subjects,” meaning that it is not subject to the rules and regulations that are supposed to protect experimental subjects’ privacy and welfare.

Those of us who have not volunteered to be a part of the grand experiment have even less protection. Even if 23andMe keeps your genome confidential against hackers, corporate takeovers, and the temptations of filthy lucre forever and ever, there is plenty of evidence that there is no such thing as an “anonymous” genome anymore. It is possible to use the internet to identify the owner of a snippet of genetic information and it is getting easier day by day.

This becomes a particularly acute problem once you realize that every one of your relatives who spits in a 23andMe vial is giving the company a not-inconsiderable bit of your own genetic information along with their own. If you have several close relatives who are already in 23andMe’s database, the company already essentially has all that it needs to know about you.
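The arithmetic behind that claim is straightforward, if you accept a crude independence assumption (real inheritance is correlated, so treat these as ballpark figures): a relative of degree d shares roughly (1/2)^d of your autosomal DNA, and a handful of relatives’ samples jointly cover a large fraction of your genome.

```python
def expected_shared(degree: int) -> float:
    # 1st degree (parent/sibling): ~50%, 2nd: ~25%, 3rd (first cousin): ~12.5%
    return 0.5 ** degree

def joint_coverage(degrees: list[int]) -> float:
    # Rough intuition only: treats relatives' shared segments as
    # independent, which real genetics violates.
    uncovered = 1.0
    for d in degrees:
        uncovered *= 1 - expected_shared(d)
    return 1 - uncovered

# A parent, a sibling and a first cousin in the database:
print(joint_coverage([1, 1, 3]))  # ~0.78 -> roughly 78% of your genome represented
```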

[…]

Source: 23andMe Is Terrifying, but Not for the Reasons the FDA Thinks

Governments, Apple, Google spying on users through push notifications – they all go through Apple and Google servers (unencrypted?)!

In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet’s (GOOGL.O) Google and Apple (AAPL.O). Although details were sparse, the letter lays out yet another path by which governments can track smartphones.

Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible “dings” or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple’s servers.
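The plumbing works the way Reuters describes: an app’s own server never contacts your phone directly; it hands the message to Apple’s APNs or Google’s FCM, which delivers it. A sketch of the (now-deprecated) legacy FCM HTTP call, with placeholder credentials, shows why the platforms see so much: the device token, the sending app’s server key, and the payload all transit Google’s infrastructure.

```python
import requests

FCM_LEGACY_ENDPOINT = "https://fcm.googleapis.com/fcm/send"

# Everything below is visible to Google in transit: which app server sent it
# (the server key), which device should get it (the token), and the content.
response = requests.post(
    FCM_LEGACY_ENDPOINT,
    headers={"Authorization": "key=<app server key>"},  # placeholder credential
    json={
        "to": "<device registration token>",            # placeholder token
        "notification": {"title": "New message", "body": "You have 1 new message"},
    },
)
print(response.status_code)
```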

That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them “in a unique position to facilitate government surveillance of how users are using particular apps,” Wyden said. He asked the Department of Justice to “repeal or modify any policies” that hindered public discussions of push notification spying.

In a statement, Apple said that Wyden’s letter gave them the opening they needed to share more details with the public about how governments monitored push notifications.

“In this case, the federal government prohibited us from sharing any information,” the company said in a statement. “Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”

Google said that it shared Wyden’s “commitment to keeping users informed about these requests.”

The Department of Justice did not return messages seeking comment on the push notification surveillance or whether it had prevented Apple or Google from talking about it.

Wyden’s letter cited a “tip” as the source of the information about the surveillance. His staff did not elaborate on the tip, but a source familiar with the matter confirmed that both foreign and U.S. government agencies have been asking Apple and Google for metadata related to push notifications to, for example, help tie anonymous users of messaging apps to specific Apple or Google accounts.

The source declined to identify the foreign governments involved in making the requests but described them as democracies allied to the United States.

The source said they did not know how long such information had been gathered in that way.

Most users give push notifications little thought, but they have occasionally attracted attention from technologists because of the difficulty of deploying them without sending data to Google or Apple.

Earlier this year French developer David Libeau said users and developers were often unaware of how their apps emitted data to the U.S. tech giants via push notifications, calling them “a privacy nightmare.”

Source: Governments spying on Apple, Google users through push notifications – US senator | Reuters