Both YouTube and Facebook allow politicians to ignore their community standards.

Facebook this week finally put into writing what users—especially politically powerful users—have known for years: its community “standards” do not, in fact, apply across the whole community. Speech from politicians is officially exempt from the platform’s fact checking and decency standards, the company has clarified, with a few exceptions.

Facebook communications VP Nick Clegg, himself a former member of the UK Parliament, outlined the policy in a speech and company blog post Tuesday.

Facebook has had a “newsworthiness exemption” to its content guidelines since 2016. That policy was formalized in late October of that year amid a contentious and chaotic US political season and three weeks before the presidential election that would land Donald Trump the White House.

Facebook at the time was uncertain how to handle posts from the Trump campaign, The Wall Street Journal reported. Sources told the paper that Facebook employees were sharply divided over the candidate’s rhetoric about Muslim immigrants and his stated desire for a Muslim travel ban, which several felt were in violation of the service’s hate speech standards. Eventually, the sources said, CEO Mark Zuckerberg weighed in directly and said it would be inappropriate to intervene. Months later, Facebook finally issued its policy.

“We’re going to begin allowing more items that people find newsworthy, significant, or important to the public interest—even if they might otherwise violate our standards,” Facebook wrote at the time.

Clegg’s update says that Facebook by default “will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” Nor will such speech be subject to fact-checking, as the company does not believe it is appropriate to “referee political debates” or prevent a politician’s speech from reaching its intended audience and “being subject to public debate and scrutiny.”

https://arstechnica.com/tech-policy/2019/09/facebook-confirms-its-standards-dont-apply-to-politicians/

YouTube CEO Susan Wojcicki said today that content by politicians would stay up on the video-sharing website even if it violates the company’s standards, echoing a position staked out by Facebook this week.

“When you have a political officer that is making information that is really important for their constituents to see, or for other global leaders to see, that is content that we would leave up because we think it’s important for other people to see,” Wojcicki told an audience at The Atlantic Festival this morning.

Wojcicki said the news media is likely to cover controversial content regardless of whether it’s taken down, providing context that helps people understand it. YouTube is owned by Google.

A YouTube spokesperson later told POLITICO that politicians are not treated differently from other users and must abide by its community guidelines. The company grants exemptions to some political speech if it considers the content to be educational, documentary, scientific, or artistic in nature.

Social media firms have seen their policies for reviewing and removing content come under fire in recent years, particularly when such content endorses hate-filled views or incites real-world violence. The issue is even more prickly when it involves world leaders like President Donald Trump, who has used bullying or violent language in social media posts.

YouTube CEO: Politicians can break our content rules

But what constitutes a politician? Anyone in or running for office? What about public servants? County sheriffs? And who decides which of these groups are exempt? That’s the problem with exceptions: you end up making more and more of them until almost everyone is an exception.

US immigration uses Google Translate to scan people’s social media for bad posts – Er, don’t do that, says everyone else, including Google

Google recommends that anyone using its translation technology add a disclaimer that translated text may not be accurate.

The US government’s Citizenship and Immigration Services (USCIS) nonetheless has been relying on online translation services offered by Google, Microsoft, and Yahoo to read refugees’ non-English social media posts and judge whether or not they should be allowed into the Land of the Free™.

According to a report from ProPublica, USCIS uses these tools to help evaluate whether refugees should be allowed into the US. In so doing, agency personnel are putting their trust in an untrustworthy algorithm to make entry decisions that may have profound consequences for the health and welfare of those seeking admission to the country.

“The translation of these social media posts can mean life or death for refugees seeking to reunite with their family members,” said Betsy Fisher, director of strategy for the International Refugee Assistance Project (IRAP), in an email to The Register. “It is dangerous to rely on inadequate technology to inform these unreasonable procedures ostensibly used to vet refugees.”

IRAP obtained a USCIS manual through a public records request and shared it with ProPublica. The manual advises USCIS personnel to use free online translation tools and provides a walkthrough for using Google Translate.

Scanning social media posts for content that would disqualify entry into the US follows from a 2017 executive order and memorandum. The impact of social media scrutiny was made clear recently when Ismail Ajjawi, a resident of Lebanon admitted to Harvard’s class of 2023, was denied entry into America by US Customs and Border Protection because of anti-US posts apparently made by friends.

After ten days of pressure from student petitioners and advocacy groups, CBP determined Ajjawi met its requirements for US entry after all.

To demonstrate the inaccuracy of Google Translate, ProPublica asked Mustafa Menai, who teaches Urdu at the University of Pennsylvania, to translate a Twitter post written in Urdu. By Menai’s estimation, an accurate English translation would be, “I have been spanked a lot and have also gathered a lot of love (from my parents).”

Google Translate’s rendering of the post is, “The beating is too big and the love is too windy.”

Source: US immigration uses Google Translate to scan people’s social media for bad posts – Er, don’t do that, says everyone else • The Register

Card-stealing MageCart infection swipes customers’ details and payment cards from fragrancedirect.co.uk

Online merchant fragrancedirect.co.uk has confirmed a miscreant broke into its systems and made off with a raft of customers’ personal data, including payment card details.

The e-retailer, based in Macclesfield, England, wrote to punters this week to inform them of the digital burglary and the subsequent data leakage.

“We recently discovered that some of our user data may have been compromised as a result of unauthorised access to our website by a malicious third party,” the email states.

The online store then launched an investigation and “quickly identified the root cause and have taken the necessary steps to address the issue”, the note continues.

It added that “Fragrance Direct Username and Password”, along with “Name, Address and Phone Number”, and “Credit and Debit Card Details” spilled into the wrong hands.

Source: What’s that smell? Perfume merchant senses the scent of a digital burglary • The Register

Doordash food delivery service’s latest data breach – 4.9m people have their physical addresses floating around the Internet now

Doordash is the latest of the “services you probably use, or at least have an account with” companies to suffer a large data breach. And while your passwords likely haven’t been compromised, it’s possible that your physical address is floating around the Internet somewhere, among other identifying information.

As Doordash wrote yesterday, an unknown individual accessed data they shouldn’t have on May 4. The compromised information included:

“Profile information including names, email addresses, delivery addresses, order history, phone numbers, as well as hashed, salted passwords — a form of rendering the actual password indecipherable to third parties.”
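Hashing and salting, as described in the quote, means a service stores a one-way digest of each password alongside a random per-user salt rather than the password itself. A minimal sketch of the general technique in Python — illustrative only, not Doordash’s actual implementation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a salted digest with PBKDF2-HMAC-SHA256.

    A fresh random salt per user means identical passwords
    produce different digests, defeating precomputed rainbow tables.
    """
    if salt is None:
        salt = os.urandom(16)  # 16 random bytes, unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

Even if such digests leak in a breach, attackers cannot read the passwords directly; they must brute-force each one, which the per-user salt and high iteration count make expensive.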

Approximately 4.9 million Doordash customers were affected by the breach, but only those who joined the site prior to April 5, 2018. If you signed up for Doordash after that, you’re in the clear.

However, the leaked information doesn’t stop at emails, phone numbers, and names. For a subset of those affected, the attacker was able to access the last four digits of their stored credit card, their bank account number, or their driver’s license number.

Doordash is currently reaching out to those whose data might have been compromised; if you haven’t received an email yet, you might be in the clear, but it’s also taking the company a bit of time to send these, so it’s OK to be slightly anxious.

Source: Doordash’s Latest Data Breach: How to Protect Yourself

AI equal with human experts in medical diagnosis with images, study finds

Artificial intelligence is on a par with human experts when it comes to making medical diagnoses based on images, a review has found.

The potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment. Last month the government announced £250m of funding for a new NHS artificial intelligence laboratory.

However, experts have warned the latest findings are based on a small number of studies, since the field is littered with poor-quality research.

One burgeoning application is the use of AI in interpreting medical images – a field that relies on deep learning, a sophisticated form of machine learning in which a series of labelled images are fed into algorithms that pick out features within them and learn how to classify similar images. This approach has shown promise in diagnosis of diseases from cancers to eye conditions.

However questions remain about how such deep learning systems measure up to human skills. Now researchers say they have conducted the first comprehensive review of published studies on the issue, and found humans and machines are on a par.

Prof Alastair Denniston, at the University Hospitals Birmingham NHS foundation trust and a co-author of the study, said the results were encouraging but the study was a reality check for some of the hype about AI.

Dr Xiaoxuan Liu, the lead author of the study and from the same NHS trust, agreed. “There are a lot of headlines about AI outperforming humans, but our message is that it can at best be equivalent,” she said.

Writing in the Lancet Digital Health, Denniston, Liu and colleagues reported how they focused on research papers published since 2012 – a pivotal year for deep learning.

An initial search turned up more than 20,000 relevant studies. However, only 14 studies – all based on human disease – reported good quality data, tested the deep learning system with images from a separate dataset to the one used to train it, and showed the same images to human experts.

The team pooled the most promising results from within each of the 14 studies to reveal that deep learning systems correctly detected a disease state 87% of the time – compared with 86% for healthcare professionals – and correctly gave the all-clear 93% of the time, compared with 91% for human experts.
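The two figures reported here are the pooled sensitivity (correctly detecting a disease state) and specificity (correctly giving the all-clear). Their definitions from a confusion matrix can be sketched as follows; the counts below are made up for illustration and are not the study’s data:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)  # share of diseased cases correctly flagged
    specificity = tn / (tn + fp)  # share of healthy cases correctly cleared
    return sensitivity, specificity

# Hypothetical counts: of 100 diseased images, 87 flagged correctly;
# of 100 healthy images, 93 cleared correctly.
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=93, fp=7)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```

Note that a one- or two-point gap on pooled metrics like these says nothing about performance on any individual study or patient population.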

However, the healthcare professionals in these scenarios were not given the additional patient information they would have in the real world, which could steer their diagnosis.

Prof David Spiegelhalter, the chair of the Winton centre for risk and evidence communication at the University of Cambridge, said the field was awash with poor research.

“This excellent review demonstrates that the massive hype over AI in medicine obscures the lamentable quality of almost all evaluation studies,” he said. “Deep learning can be a powerful and impressive technique, but clinicians and commissioners should be asking the crucial question: what does it actually add to clinical practice?”

Source: AI equal with human experts in medical diagnosis, study finds | Technology | The Guardian

Darknet cybercrime servers hosted in former NATO bunker in Germany busted in operation involving 600 police officers

A cybercrime data center that was shut down by German authorities was housed inside a former NATO bunker in a sleepy riverside town, police revealed on Friday.

More than 600 law enforcement personnel including Germany’s elite federal police unit, the GSG 9, were involved in an anti-cybercrime operation that took place in the town of Traben-Trarbach on the banks of the Mosel river.

Police officers succeeded in penetrating the building, a 5,000 square meter former NATO bunker with iron doors that goes five floors deep underground. The building was located on a 1.3-hectare (3.2 acre) property secured with a fence and surveillance cameras.

“We had to overcome not only real, or analog, protections; we also cracked the digital protections of the data center,” said regional police chief Johannes Kunz.

The target of the operation was a so-called “bulletproof hosting” service provider. Bulletproof hosters provide IT infrastructure that protects online criminal activity from government intervention.

In the raid, police seized 200 servers along with documents, cell phones, and large quantities of cash. Thursday’s operation was the first time German investigators were able to apprehend a bulletproof hoster, according to German media outlets.

Cracking the security codes to access the contents of the servers was another difficult task for the police. On the servers, they found countless websites facilitating the illegal sale of drugs, weapons, counterfeit documents, and stolen data as well as sites distributing child pornography. The servers hosted Wall Street Market, formerly the second largest darknet marketplace for drugs in the world before law enforcement shut the platform down earlier this year.

The police arrested 13 people between the ages of 20 and 59 allegedly tied to the operation. Seven are held in custody. The ringleader is a 59-year-old Dutch man with ties to organized crime in the Netherlands. He established the server in Traben-Trarbach in 2013. While his official residency is listed in Singapore, he had been living in the bunker.

Source: Darknet cybercrime servers hosted in former NATO bunker in Germany | News | DW | 28.09.2019

GNOME is Being Sued Because Shotwell Photo Manager can wirelessly transfer images. The US Patent Office really did grant a patent troll a patent on transferring and labeling images.

The GNOME Foundation is facing a lawsuit from Rothschild Patent Imaging, LLC. Rothschild alleges that Shotwell, a free and open source personal photo manager, infringes its patent.

Neil McGovern, Executive Director for the GNOME Foundation says “We have retained legal counsel and intend to vigorously defend against this baseless suit. Due to the ongoing litigation, we unfortunately cannot make any further comments at this time.”

While Neil cannot make any further comments on this issue, let me shed some light on the matter.

The patent in question deals with wireless image distribution. The patent is ridiculous because it could mean any software that transfers images from one device to another is violating it.

And that’s what this lawsuit is about. If you read the lawsuit, you’ll see why Neil called it baseless:

GNOME Shotwell Lawsuit

Shotwell is not the only one being sued

I did a quick web search for “Rothschild Patent Imaging” and couldn’t find their website. I am guessing that it doesn’t exist. However, I came across a number of “Rothschild Patent Imaging vs XYZ” lawsuits.

I dug a little deeper. As per patent litigation website RPX Insight, there are six active cases and forty-two inactive cases involving Rothschild Patent Imaging.

Rothschild Patent Imaging Lawsuits

A number of companies are being sued because their product mentions grouping photos by date or location, facial recognition, or transferring images from one device to another. Sounds crazy, right?

But it won’t be crazy if it’s someone’s full time job.

Patent Litigation Abuse aka Patent Trolling

Rothschild Patent Imaging is owned by Leigh M Rothschild.

The modus operandi of ‘inventor’ Leigh M Rothschild is to get patents on obvious ideas. And each obvious idea is so broad that he can sue a huge number of organizations. Defendants then have two choices: pay Rothschild to settle the lawsuit, or pay even more to lawyers and fight the case in court.

Rothschild Patent Imaging LLC appears to have been formed to sue companies dealing with grouping and transferring images. In 2017, Rothschild Connected Devices Innovations LLC likewise filed a number of patent infringement lawsuits against companies whose products involved mixing drinks or connected devices.

Ars Technica called Rothschild a patent troll because he was demanding $75,000 from each defendant for settling the lawsuits.

Smaller companies might have been intimidated, but when Rothschild targeted a giant like Garmin, the company hit back. Rothschild backed out of the lawsuit, but Garmin filed a counterclaim and Rothschild was ordered to pay Garmin’s legal expenses.

Unfortunately, patent trolling is a big business, especially in the United States of America. There are companies whose sole business model is suing other companies. Such suits are almost exclusively filed in East Texas, where the law favors patent trolls. The EFF has a dedicated page that lists the victims of patent trolls.

I am so glad that GNOME Foundation has decided to fight this lawsuit vigorously.

Source: GNOME is Being Sued Because of Shotwell Photo Manager