The rise of the internet and the advent of social media have fundamentally changed the information ecosystem, giving the public direct access to more information than ever before. But it’s often nearly impossible to distinguish between accurate information and low-quality or false content. This means that disinformation — false or intentionally misleading information that aims to achieve an economic or political goal — can become rampant, spreading further and faster online than it ever could in another format.
As part of its Truth Decay initiative, RAND is responding to this urgent problem. Researchers identified and characterized the universe of online tools developed by nonprofits and civil society organizations to target online disinformation. The tools in this database are aimed at helping information consumers, researchers, and journalists navigate today’s challenging information environment. Each tool is characterized along a number of dimensions, including the type of tool, the underlying technology, and the delivery format.
When you’re scrolling through Facebook’s app, the social network could be watching you back. Multiple concerned users have reported that their iPhone cameras were turned on in the background while they were looking at their feed.
The issue came to light through several posts on Twitter. Users noted that their cameras were activated behind Facebook’s app as they were watching videos or looking at photos on the social network.
After a user tapped a video to view it full screen and then returned to the normal view, a bug shifted Facebook’s mobile layout slightly to the right. In the open space on the left, the phone’s camera could be seen active in the background.
This was documented in multiple cases, with the earliest incident on Nov. 2.
It’s since been tweeted a couple other times, and CNET has also been able to replicate the issue.
John de Mol has successfully sued FB and forced it to remove fake ads in which he appears to endorse bitcoin and other cryptocurrencies (he doesn’t). Such ads will not be allowed in the future either, and FB must give him the details of the parties who placed them. FB is liable for fines of up to EUR 1.1 million if it doesn’t comply.
Between October 2018 and at least March 2019, a series of fake ads was placed on FB and Instagram showing him endorsing cryptocurrencies. Not only did he not endorse them at all, they were a scam: buyers never received any crypto after purchasing from the sites. The scammers took in at least EUR 1.7 million.
The court did not accept FB’s argument that it is a neutral party merely passing on information. The court argues that FB has a responsibility to guard against breaches of third-party rights. The fact that the ads decreased drastically in frequency after John de Mol contacted FB shows the court that guarding against such breaches is well within FB’s technical capabilities.
The first human vaccine against the often-fatal viral disease Ebola is now an official reality. On Monday, the European Union approved a vaccine developed by the pharmaceutical company Merck, called Ervebo.
The stage for Ervebo’s approval was set this October, when a committee assembled by the European Medicines Agency (EMA) recommended a conditional marketing authorization for the vaccine by the EU. Conditional marketing authorizations are given to new drugs or therapies that address an “unmet medical need” for patients. These drugs are approved on a quicker schedule than the typical new drug and require less clinical trial data to be collected and analyzed for approval.
In Ervebo’s case, though, the data so far seems to be overwhelmingly positive. In April, the World Health Organization revealed the preliminary results of its “ring vaccination” trials with Ervebo during the current Ebola outbreak in the Democratic Republic of Congo. Out of the nearly 100,000 people vaccinated up until that time, less than 3 percent went on to develop Ebola. These results, coupled with earlier trials dating back to the historic 2014-2015 outbreak of Ebola that killed over 10,000 people, secured Ervebo’s approval by the committee.
“Finding a vaccine as soon as possible against this terrible virus has been a priority for the international community ever since Ebola hit West Africa five years ago,” Vytenis Andriukaitis, commissioner in charge of Health and Food Safety at the EU’s European Commission, said in a statement announcing the approval. “Today’s decision is therefore a major step forward in saving lives in Africa and beyond.”
Although the marketing rights for Ervebo are held by Merck, it was originally developed by researchers from the Public Health Agency of Canada, which still maintains non-commercial rights.
The vaccine’s approval, significant as it is, won’t tangibly change things on the ground anytime soon. In October, the WHO said that licensed doses of Ervebo will not be available to the world until the middle of 2020. In the meantime, people in vulnerable areas will still have access to the vaccine through the current experimental program. Although Merck has also submitted Ervebo for approval by the Food and Drug Administration in the U.S., the agency’s final decision likewise isn’t expected until next year.
IT guru Bob Gendler took to Medium last week to share a startling discovery about Apple Mail. If you have the application configured to send and receive encrypted email—messages that should be unreadable for anyone without the right decryption keys—Apple’s digital assistant goes ahead and stores your emails in plain text on your Mac’s drive.
More frustrating, you can have Siri completely disabled on your Mac, and your messages will still appear within a Mac database known as snippets.db. A process known as suggestd will still comb through your emails and dump them into this plaintext database. This issue, according to Gendler, is present on multiple iterations of macOS, including the most recent Catalina and Mojave builds.
“I discovered this database and what’s stored there on July 25th and began extensively testing on multiple computers with Apple Mail set up and fully confirming this on July 29th. Later that week, I confirmed this database exists on 10.12 machines up to 10.15 and behaves the same way, storing encrypted messages unencrypted. If you have iCloud enabled and Siri enabled, I know there is some data sent to Apple to help with improving Siri, but I don’t know if that includes information from this database.”
Consider keeping Siri out of your email
While Apple is currently working on a fix for the issues Gendler raised, there are two easy ways you can ensure that your encrypted emails aren’t stored unencrypted on your Mac. First, you can disable Siri Suggestions for Mail within the “Siri” section of System Preferences.
Second, you can fire up Terminal and enter this command:
Regardless of which option you pick, you’ll want to delete the snippets.db file, as disabling Siri’s collection capabilities doesn’t automatically remove what’s already been collected (obviously). You’ll be able to find this by pulling up your Mac’s drive (Go > Computer) and doing a quick search for “snippets.db.”
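If you want to verify what has been collected before deleting the file, the following is a minimal sketch for inspecting it. It assumes the database lives at ~/Library/Suggestions/snippets.db and is an ordinary SQLite file; the exact path, table names, and columns may differ between macOS versions, so treat them as assumptions to verify rather than a definitive layout.

```python
# Minimal sketch: inspect snippets.db before deleting it.
# Assumptions: the file sits at ~/Library/Suggestions/snippets.db and is a
# regular SQLite database; table names and columns vary by macOS version.
import sqlite3
from pathlib import Path

db_path = Path.home() / "Library" / "Suggestions" / "snippets.db"

if not db_path.exists():
    print(f"No database found at {db_path}")
else:
    conn = sqlite3.connect(str(db_path))
    cur = conn.cursor()

    # List every table so you can see what the Suggestions daemon stored.
    cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
    tables = [row[0] for row in cur.fetchall()]
    print("Tables:", tables)

    # Print a small sample from each table; look for plaintext email content.
    for table in tables:
        cur.execute(f'SELECT * FROM "{table}" LIMIT 3')
        for row in cur.fetchall():
            print(table, row)

    conn.close()
```

Depending on your macOS version, the terminal running this may need Full Disk Access (System Preferences > Security & Privacy > Privacy) before it can read files under ~/Library/Suggestions.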
Apple also told The Verge that you can limit which apps are allowed to have Full Disk Access on your Mac—via System Preferences > Security & Privacy > Privacy tab—to ensure that they can’t access your snippets.db file. You can also turn on FileVault, which will prevent your emails from appearing as plaintext within snippets.db.
A large-scale academic study that analyzed more than 53,000 product pages on more than 11,000 online stores found widespread use of user interface “dark patterns”: practices meant to mislead customers into making purchases based on false or misleading information.
The study — presented last week at the ACM CSCW 2019 conference — found 1,818 instances of dark patterns present on 1,254 of the ∼11K shopping websites (∼11.1%) researchers scanned.
“Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns,” researchers said.
But while the vast majority of UI dark patterns were meant to trick users into subscribing to newsletters or allowing broad data collection, some dark patterns were downright foul, trying to mislead users into making additional purchases, either by sneaking products into shopping carts or tricking users into believing products were about to sell out.
Of these, the research team found 234 instances, deployed across 183 websites.
Below are some examples of UI dark patterns that the research team found currently employed on today’s most popular online stores (a short illustrative detection sketch follows the list).
1. Sneak into basket
Adding additional products to users’ shopping carts without their consent.
Prevalence: 7 instances across 7 websites.
2. Hidden costs
Revealing previously undisclosed charges to users right before they make a purchase.
Prevalence: 5 instances across 5 websites.
3. Hidden subscription
Charging users a recurring fee under the pretense of a one-time fee or a free trial.
Prevalence: 14 instances across 13 websites.
4. Countdown timer
Indicating to users that a deal or discount will expire using a counting-down timer.
Prevalence: 393 instances across 361 websites.
5. Limited-time message
Indicating to users that a deal or sale will expire soon without specifying a deadline, thus creating uncertainty.
Prevalence: 88 instances across 84 websites.
6. Confirmshaming
Using language and emotion (shame) to steer users away from making a certain choice.
Prevalence: 169 instances across 164 websites.
7. Visual interference
Using style and visual presentation to steer users to or away from certain choices.
Prevalence: 25 instances across 24 websites.
8. Trick questions
Using confusing language to steer users into making certain choices.
Prevalence: 9 instances across 9 websites.
9. Pressured selling
Pre-selecting more expensive variations of a product, or pressuring the user to accept the more expensive variations of a product and related products.
Prevalence: 67 instances across 62 websites.
10. Activity messages
Informing the user about the activity on the website (e.g., purchases, views, visits).
Prevalence: 313 instances across 264 websites.
11. Testimonials of uncertain origin
Testimonials on a product page whose origin is unclear.
Prevalence: 12 instances across 12 websites.
12. Low-stock message
Indicating to users that limited quantities of a product are available, increasing its desirability.
Prevalence: 632 instances across 581 websites.
13. High-demand message
Indicating to users that a product is in high demand and likely to sell out soon, increasing its desirability.
Prevalence: 47 instances across 43 websites.
14. Hard to cancel
Making it easy for the user to sign up for a recurring subscription but requiring an email or phone call to customer care in order to cancel.
Prevalence: 31 instances across 31 websites.
15. Forced enrollment
Coercing users to create accounts or share their information to complete their tasks.
Prevalence: 6 instances across 6 websites.
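To illustrate how wording like countdown timers, low-stock notices, and activity messages can be surfaced at scale, here is a minimal text-scanning sketch. It is not the study’s actual methodology; the categories, regular expressions, and example snippets below are assumptions chosen purely to show the idea.

```python
# Minimal sketch: flag text snippets that resemble common dark-pattern wording.
# This is NOT the Princeton/Chicago crawler; the categories and regexes below
# are illustrative assumptions, not the study's actual detection rules.
import re

PATTERNS = {
    "countdown timer":     re.compile(r"\b(ends in|offer expires in)\b.*\d", re.I),
    "low-stock message":   re.compile(r"\bonly\s+\d+\s+left\b", re.I),
    "high-demand message": re.compile(r"\b(selling fast|in high demand)\b", re.I),
    "activity message":    re.compile(r"\b\d+\s+people (are viewing|bought)\b", re.I),
}

def flag_dark_pattern_text(snippets):
    """Return (snippet, category) pairs for snippets matching a known pattern."""
    hits = []
    for text in snippets:
        for category, regex in PATTERNS.items():
            if regex.search(text):
                hits.append((text, category))
    return hits

# Example page text (made up for illustration).
page_snippets = [
    "Hurry! Only 2 left in stock.",
    "Offer expires in 04:59!",
    "14 people are viewing this item right now.",
    "Free shipping on orders over $50.",
]

for text, category in flag_dark_pattern_text(page_snippets):
    print(f"{category}: {text}")
```

The actual study relied on an automated crawler that simulated product-purchase flows and on clustering of the collected text segments, rather than hand-written rules like these; the point is only that much of this wording is detectable with simple text heuristics.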
The research team behind this project, made up of academics from Princeton University and the University of Chicago, expects these UI dark patterns to become even more popular in the coming years.
One reason, they said, is that there are third-party companies that currently offer dark patterns as a turnkey solution, either in the form of store extensions and plugins or on-demand store customization services.
The table below lists the 22 third parties that the research team identified, in the course of their study, as providers of turnkey solutions for dark pattern-like behavior.
Today’s Tesla Model 3’s lithium-ion battery pack has an estimated 168 Wh/kg. And important as this energy-per-weight ratio is for electric cars, it’s more important still for electric aircraft.
Now comes Oxis Energy, of Abingdon, UK, with a battery based on lithium-sulfur chemistry that it says can greatly increase the ratio, and do so in a product that’s safe enough for use even in an electric airplane. Specifically, a plane built by Bye Aerospace, in Englewood, Colo., whose founder, George Bye, described the project in this 2017 article for IEEE Spectrum.
The two companies said in a statement that they were beginning a one-year joint project to demonstrate feasibility. They said the Oxis battery would provide “in excess” of 500 Wh/kg, a number which appears to apply to the individual cells, rather than the battery pack, with all its packaging, power electronics, and other paraphernalia. That per-cell figure may be compared directly to the “record-breaking energy density of 260 watt-hours per kilogram” that Bye cited for the batteries his planes were using in 2017.
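To make the cell-versus-pack distinction concrete, here is a rough back-of-the-envelope sketch. The 30 percent packaging/electronics overhead is an assumed figure for illustration only (real overhead varies widely by design and is not a number from Oxis or Bye Aerospace); it simply shows how a 500 Wh/kg cell figure shrinks at pack level and how it compares with the numbers quoted above.

```python
# Rough sketch: cell-level vs pack-level specific energy.
# The 30% packaging/electronics overhead is an assumed figure for illustration.
def pack_specific_energy(cell_wh_per_kg: float, overhead_fraction: float) -> float:
    """Specific energy of a pack whose non-cell mass is `overhead_fraction`
    of total pack mass (cells make up the remaining 1 - overhead_fraction)."""
    return cell_wh_per_kg * (1.0 - overhead_fraction)

oxis_cell = 500.0      # Wh/kg, claimed cell-level figure
bye_2017_cell = 260.0  # Wh/kg, cell figure cited for Bye's 2017 batteries
tesla_pack = 168.0     # Wh/kg, estimated Model 3 pack-level figure

assumed_overhead = 0.30  # assume 30% of pack mass is packaging and electronics

print(f"Oxis cells at pack level (assumed): "
      f"{pack_specific_energy(oxis_cell, assumed_overhead):.0f} Wh/kg")
print(f"Ratio vs. Model 3 pack estimate:    "
      f"{pack_specific_energy(oxis_cell, assumed_overhead) / tesla_pack:.1f}x")
print(f"Cell-level gain over 2017 figure:   {oxis_cell / bye_2017_cell:.1f}x")
```

Under that assumption the pack-level figure lands around 350 Wh/kg, still roughly double the Model 3 pack estimate, while the cell-level claim is just under twice the 260 Wh/kg figure Bye cited in 2017.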
[…]
One reason why lithium-sulfur batteries have been on the sidelines for so long is their short life, due to degradation of the cathode during the charge-discharge cycle. Oxis expects its batteries will be able to last for 500 such cycles within the next two years. That’s about par for the course for today’s lithium-ion batteries.
Another reason is safety: Lithium-sulfur batteries have been prone to overheating. Oxis says its design incorporates a ceramic lithium sulfide as a “passivation layer,” which blocks the flow of electricity—both to prevent sudden discharge and the more insidious leakage that can cause a lithium-ion battery to slowly lose capacity even while just sitting on a shelf. Oxis also uses a non-flammable electrolyte.
Presumably there is more to Oxis’s secret sauce than these two elements: The company says it has 186 patents, with 87 more pending.
The Wall Street Journal reported Monday that the tech giant partnered with Ascension, a non-profit Catholic health system, on the program code-named “Project Nightingale.” According to the Journal, Google began its initiative with Ascension last year, and it involves everything from diagnoses and lab results to birth dates, patient names, and other personal health data—all of it reportedly handed over to Google without first notifying patients or doctors. The Journal said this amounts to data on millions of Americans spanning 21 states.
“By working in partnership with leading healthcare systems like Ascension, we hope to transform the delivery of healthcare through the power of the cloud, data analytics, machine learning, and modern productivity tools—ultimately improving outcomes, reducing costs, and saving lives,” Tariq Shaukat, president of Google Cloud, said in a statement.
Beyond the alarming reality that a tech company can collect data about people without their knowledge for its own uses, the Journal noted it’s legal under the Health Insurance Portability and Accountability Act (HIPAA). When reached for comment, representatives for both companies pointed Gizmodo to a press release about the relationship—which the Journal stated was published after its report—that states: “All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”
Still, the Journal report raises concerns about whether the data handling is indeed as secure as both companies appear to think it is. Citing a source familiar with the matter as well as related documents, the paper said at least 150 employees at Google have access to a significant portion of the health data Ascension handed over on millions of people.
Google hasn’t exactly proven itself to be infallible when it comes to protecting user data. Remember when Google+ users had their data exposed and Google did nothing to alert them, in order to shield its own ass? Or when a Google contractor leaked more than a thousand Assistant recordings, and the company defended itself by claiming that most of its audio snippets aren’t reviewed by humans? Not exactly the kind of stuff you want to read about a company that may have your medical history on hand.
The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.
“The data-sharing agreement gives Google access to information on millions of NHS patients”
DeepMind announced in February that it was working with the NHS, saying it was building an app called Streams to help hospital staff monitor patients with kidney disease. But the agreement suggests that it has plans for a lot more.
This is the first we’ve heard of DeepMind getting access to historical medical records, says Sam Smith, who runs health data privacy group MedConfidential. “This is not just about kidney function. They’re getting the full data.”
The agreement clearly states that Google cannot use the data in any other part of its business. The data itself will be stored in the UK by a third party contracted by Google, not in DeepMind’s offices. DeepMind is also obliged to delete its copy of the data when the agreement expires at the end of September 2017.
All data needed
Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”