Late Wednesday night, Pornhub announced that PayPal is no longer supporting payments for Pornhub—a decision that will impact thousands of performers using the site as a source of income.
Most visitors to Pornhub likely think of it as a website that simply provides access to an endless supply of free porn, but Pornhub also allows performers to upload, sell, and otherwise monetize videos they make themselves. Performers who used PayPal to get paid for this work now have to switch to a different payment method.
“We are all devastated by PayPal’s decision to stop payouts to over a hundred thousand performers who rely on them for their livelihoods,” the company said on its blog. It then directed models to set up a new payment method, with instructions on how PayPal users can transfer pending payments.
“We sincerely apologize if this causes any delays and we will have staff working around the clock to make sure all payouts are processed as fast as possible on the new payment methods,” the statement said.
A PayPal spokesperson told Motherboard: “Following a review, we have discovered that Pornhub has made certain business payments through PayPal without seeking our permission. We have taken action to stop these transactions from occurring.”
PayPal is one of many payment processors that have discriminated against sex workers for years. Its acceptable use policy states that “certain sexually oriented materials or services” are forbidden—phrasing that’s intentionally vague enough to allow circumstances like this to happen whenever the company wants.
Are you a sex worker who has been impacted by this situation, or by any payment processors discriminating against your work? We’d love to hear from you. Contact Samantha Cole securely on Signal at +6469261726, direct message on Twitter, or by email.
The list of payment platforms, payment apps, and banks that forbid sexual services in their terms of use is very, very long, and includes everything from Venmo to Visa. Many of these terms have been in place for nearly a decade—and payment processors were hostile toward sex work long before harmful legislation like the Fight Online Sex Trafficking Act came into law last year. But those laws only embolden companies to kick sex workers off their platforms, and make the situation even more confusing and frustrating for performers.
Researchers in Sussex have built a device that displays 3D animated objects that can talk and interact with onlookers.
A demonstration of the display showed a butterfly flapping its wings, a countdown spelled out by numbers hanging in the air, and a rotating, multicoloured planet Earth. Beyond interactive digital signs and animations, scientists want to use it to visualise and even feel data.
[…]
it uses a 3D field of ultrasound waves to levitate a polystyrene bead and whip it around at high speed to trace shapes in the air.
The 2mm-wide bead moves so fast, at speeds approaching 20mph, that it traces out the shape of an object in less than one-tenth of a second. At such a speed, the brain doesn’t see the moving bead, only the completed shape it creates. The colours are added by LEDs built into the display that shine light on the bead as it zips around.
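A quick back-of-the-envelope check of those numbers (my arithmetic, not the researchers'): at roughly 20 mph the bead covers almost a metre of path in a tenth of a second, which is plenty to trace a small figure inside the display volume described further down.

```python
# Quick check of the persistence-of-vision claim above (illustrative only).
speed_mps = 20 * 0.44704        # ~20 mph expressed in metres per second (~8.9 m/s)
frame_time_s = 0.1              # a shape is traced in under a tenth of a second
path_length_m = speed_mps * frame_time_s
print(round(path_length_m, 2))  # ~0.89 m of path per "frame" -- ample for figures
                                # inside a roughly 10 cm display volume
```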
Because the images are created in 3D space, they can be viewed from any angle. And by careful control of the ultrasonic field, the scientists can make objects speak, or add sound effects and musical accompaniments to the animated images. Further manipulation of the sound field enables users to interact with the objects and even feel them in their hands.
[…]
The images are created between two horizontal plates that are studded with small ultrasonic transducers. These create an inaudible 3D sound field that contains a tiny pocket of low pressure air that traps the polystyrene bead. Move the pocket around, by tweaking the output of the transducers, and the bead moves with it.
The most basic version of the display creates 3D colour animations, but writing in the journal Nature, the scientists describe how they improved the display to produce sounds and tactile responses to people reaching out to the image.
Speech and other sounds, such as a musical accompaniment, were added by vibrating the polystyrene bead as it hares around. The vibrations can be tuned to produce soundwaves across the entire range of human hearing, creating, for example, crisp and clear speech. Another trick makes the display tactile by manipulating the ultrasonic field to create a virtual “button” in mid-air.
The prototype uses a single bead and can create images inside a 10cm-wide cube of air. But future displays could use more powerful transducers to make larger animations, and employ multiple beads at once. Subramanian said existing computer software can be used to ensure the tiny beads do not crash into one another, although choreographing the illumination of multiple beads mid-air is another problem.
[…]
“The interesting thing about the tactile content is that it’s created using ultrasound waves. Unlike the simple vibrations most people are familiar with through smartphones or games consoles, the ultrasound waves move through the air to create precise patterns against your hands. This allows multimedia experiences where the objects you feel are just as rich and dynamic as the objects you see in the display.”
Julie Williamson, also at Glasgow, said levitating displays are a first step towards truly interactive 3D displays. “I imagine a future where 3D displays can create experiences that are indistinguishable from the physical objects they are simulating,” she said.
The fossil fuels driving climate change make people sick, and so do impacts like extreme heat, wildfires, and more extreme storms, according to research published on Wednesday. In short, the climate crisis is a public health crisis.
A new report from the premier medical journal the Lancet tallies the medical toll of climate change and finds that last year saw record-setting numbers of people exposed to heat waves and a near-record spread of dengue fever globally. The scientists also crunched numbers around wildfires for the first time, finding that 77 percent of countries are facing more wildfire-induced suffering than they were at the start of the decade. But while some of the report’s findings are rage-inducing, it also shows that improving access to healthcare may be among the most beneficial ways we can adapt to climate change.
[…]
Heat waves are among the more obvious weather disasters linked to climate change, and the report outlines just how much they’re already hurting the world. Last year, intense heat waves struck around the world, from the UK to Pakistan to Japan, amid the fourth-warmest year on record.
[…]
The report also found that 2018 marked the second-worst year since accurate record keeping began in 1990 for the spread of dengue fever-carrying mosquitoes. The two types of mosquitoes that transmit dengue have seen their range expand as temperatures have warmed
[…]
wildfire findings, which are new to this year’s report. Scientists found that more than three-quarters of countries around the world are seeing increased prevalence of wildfires and the sickness-inducing smoke that accompanies them.
[…]
there are also the health risks that come from burning fossil fuels themselves. Air pollution has ended up in people’s lungs where it can cause asthma and other respiratory issues, but it’s also shown up in less obvious locations like people’s brains and women’s placentas.
[…]
“We can do better than to dwell on the problem,” Gina McCarthy, the former head of the Environmental Protection Agency and current Harvard public health professor, said on the press call.
The report found, for example, that despite an uptick in heat waves and the heavy downpours that can spur diarrheal diseases, outbreaks have become less common. Ditto for protein-related malnutrition, despite the impact intense heat is having on the nutritional value of staple crops and that ocean heat waves are having on coral reefs and the fisheries that rely on them. At least some of that is attributable to improved access to healthcare, socioeconomic opportunities, and sanitation in some regions.
We often think about sea walls or other hard infrastructure when it comes to climate adaptation. But rural health clinics and sewer systems fall into that same category, as do programs like affordable crop insurance. The report suggests that improving access to financing for health-focused climate projects could pay huge dividends as a result, ensuring that people are insulated from the impacts of climate change and helping lift them out of poverty in the process. Of course, it also calls for cutting carbon pollution ASAP, because even the best-equipped hospital in the world isn’t going to be enough to protect people from the full impacts of climate change.
Google will soon offer checking accounts to consumers, becoming the latest Silicon Valley heavyweight to push into finance. The Wall Street Journal: The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon.com has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout.
The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was “designed by Apple, not a bank.” Major financial companies dropped out of Facebook’s crypto project after a regulatory backlash. Google’s approach seems designed to make allies, rather than enemies, in both camps. The financial institutions’ brands, not Google’s, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks — activities it couldn’t do without a license anyway.
Popular health websites are sharing private, personal medical data with big tech companies, according to an investigation by the Financial Times. The data, including medical diagnoses, symptoms, prescriptions, and menstrual and fertility information, are being sold to companies like Google, Amazon, Facebook, and Oracle and smaller data brokers and advertising technology firms, like Scorecard and OpenX.
The investigation: The FT analyzed 100 health websites, including WebMD, Healthline, health insurance group Bupa, and parenting site Babycentre, and found that 79% of them dropped cookies on visitors, allowing them to be tracked by third-party companies around the internet. This was done without consent, making the practice illegal under European Union regulations. By far the most common destination for the data was Google’s advertising arm DoubleClick, which showed up in 78% of the sites the FT tested.
Responses: The FT piece contains a list of all the comments from the many companies involved. Google, for example, said that it has “strict policies preventing advertisers from using such data to target ads.” Facebook said it was conducting an investigation and would “take action” against websites “in violation of our terms.” And Amazon said: “We do not use the information from publisher websites to inform advertising audience segments.”
A window into a broken industry: This sort of rampant rule-breaking has been a dirty secret in the advertising technology industry, which is worth $200 billion globally, ever since EU countries adopted the General Data Protection Regulation in May 2018. A recent inquiry by the UK’s data regulator found that the sector is rife with illegal practices, as in this case where privacy policies did not adequately outline which data would be shared with third parties or what it would be used for. The onus is now on EU and UK authorities to act to put an end to them.
The social media giant said the number of government demands for user data increased by 16% to 128,617 demands during the first half of this year compared to the second half of last year.
That’s the highest number of government demands it has received in any reporting period since it published its first transparency report in 2013.
The U.S. government led the way with the highest number of requests — 50,741 demands for user data, with some account or user data handed over to authorities in 88% of cases. Facebook said two-thirds of all the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data.
But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.
The report also said the social media giant had detected 67 disruptions of its services in 15 countries, compared to 53 disruptions in nine countries during the second half of last year.
The report also said Facebook pulled 11.6 million pieces of content that violated its policies on child nudity and sexual exploitation of children, up from 5.8 million in the same period a year earlier.
The social media giant also included Instagram in its report for the first time, reporting the removal of 1.68 million pieces of content during the second and third quarters of the year.
Right now, in the Netherlands there is talk about reducing the speed limit from 130 kph to 100 kph in order to comply with emissions goals set by the EU (and supported by NL) years ago. Because NL didn’t put the necessary legislation into effect years ago, this is now coming back to bite NL in the arse and they are playing panic football.
The Dutch institute for the environment shows pretty clearly where emissions are coming from:
As you can see it makes perfect sense to do something about traffic, as it causes 6.1% of emissions. Oh wait, there’s the farming sector: that causes 46% of emissions! Why not tackle that? Well, they tried to at first, but then the farmers did an occupy of the Hague with loads of tractors (twice) and all the politicians chickened out. Because nothing determines policy like a bunch of tractors causing traffic jams. Screw the will of the people anyway.
Note: emissions expressed relative to their values at 100 km/h, for which the value ‘1’ is assigned. Source: EMISIA – ETC/ACM
So reducing the speed from 120 to 100 kph should result (for diesels) in an approximately 15% decrease in particulate matter and a 40% decrease in nitrogen oxides, but an increase in the amount of total hydrocarbons and carbon monoxide.
For gasoline-powered cars it’s a 20% decrease in total hydrocarbons, which means that in NL, we can knock down the 6.1% of the pie generated by cars to around 4%. Yay. We don’t win much.
Now about traffic flow, because that’s what I’m here for. The Dutch claim that lowering the speed limit will decrease the amount of time spent in traffic jams. Here’s an example of two experts saying so in BNN Vara’s article Experts: Door verlaging maximumsnelheid ben je juist sneller thuis (“Experts: by lowering the maximum speed you actually get home faster”).
However, if you look at their conclusion, it comes straight out of one of just two studies that seemingly everyone relies on:
It is confirmed that the lower the speed limit, the higher the occupancy to achieve a given flow. This result has been observed even for relatively high flows and low speed limits. For instance, a stable flow of 1942 veh/h/lane has been measured with the 40 km/h speed limit in force. The corresponding occupancy was 33%, doubling the typical occupancy for this flow in the absence of speed limits. This means that VSL strategies aiming to restrict the mainline flow on a freeway by using low speed limits will need to be applied carefully, avoiding conditions as the ones presented here, where speed limits have a reduced ability to limit flows. On the other hand, VSL strategies trying to get the most from the increased vehicle storage capacity of freeways under low speed limits might be rather promising. Additionally, results show that lower speed limits increase the speed differences across lanes for moderate demands. This, in turn, also increases the lane changing rate. This means that VSL strategies aiming to homogenize traffic and reduce lane changing activity might not be successful when adopting such low speed limits. In contrast, lower speed limits widen the range of flows under uniform lane flow distributions, so that, even for moderate to low demands, the under-utilization of any lane is avoided.
There are a few problems with this study: First, it’s talking about speed limits of 40, 60 and 80 kph. Nothing around the 100 – 130kph mark. Secondly, the data in the graphs actually shows a lower occupancy with a higher speed limit – which is not their conclusion!
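As a sanity check on those quoted figures (my own sketch, not anything from the paper), the basic traffic relation flow = density × speed, combined with an assumed effective vehicle length, reproduces the occupancy numbers in the quote:

```python
# Sanity check of the quoted figures (illustrative; not the paper's method).
# Basic relation: flow (veh/h) = density (veh/km) * speed (km/h).
# Detector occupancy is roughly density * effective vehicle length;
# the ~6.8 m effective length below is an assumption, not a measured value.

def occupancy(flow_veh_per_h, speed_kmh, effective_length_m=6.8):
    density_veh_per_km = flow_veh_per_h / speed_kmh
    return density_veh_per_km * (effective_length_m / 1000.0)

print(f"{occupancy(1942, 40):.0%}")   # ~33%, matching the study's example
print(f"{occupancy(1942, 100):.0%}")  # ~13% for the same flow at ~100 km/h
```

In other words, the “doubled occupancy at low speed limits” observation falls straight out of the flow–density–speed relation; it says nothing by itself about what happens between 100 and 130 kph.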
This paper aims to evaluate optimal speed limits in traffic networks in a way that economized societal costs are incurred. In this study, experimental and field data as well as data from simulations are used to determine how speed is related to the emission of pollutants, fuel consumption, travel time, and the number of accidents. This paper also proposes a simple model to calculate the societal costs of travel and relate them to speed. As a case study, using emission test results on cars manufactured domestically and by simulating the suburban traffic flow by Aimsun software, the total societal costs of the Shiraz-Marvdasht motorway, which is one of the most traversed routes in Iran, have been estimated. The results of the study show that from a societal perspective, the optimal speed would be 73 km/h, and from a road user perspective, it would be 82 km/h (in 2011, the average speed of the passing vehicles on that motorway was 82 km/h). The experiments in this paper were run on three different vehicles with different types of fuel. In a comparative study, the results show that the calculated speed limit is lower than the optimal speed limits in Sweden, Norway, and Australia.
(Emphasis mine)
It’s a compelling study with great results, which also include accidents.
In a multi-lane motorway divided by a median barrier in Sweden, the optimal speed is 110 km/h. The speed limit is 110 km/h and the current average speed is 109 km/h. In Norway, the optimal speed from a societal perspective is 100 km/h and the speed limit is 90 km/h. The current average speed is 95 km/h [2]. In Australia, the optimum speeds on rural freeways (dual carriageway roads with grade-separated intersections) would be 110 km/h [3]. Table 3 compares the results in Elvik [2] and Cameron [3] with those of the present study.
Table 3. Optimal speed in Norway, Sweden, Australia, and Iran. Source for columns 2 and 3: Elvik [2]. Source for column 4: Cameron [3].
| | Norway | Sweden | Australia | Iran |
| --- | --- | --- | --- | --- |
| Optimal speed limit (km/h), societal perspective | 100 | 110 | 110 | 73 |
| Optimal speed limit (km/h), road user perspective | 110 | 120 | – | 82 |
| Current speed limit (km/h) | 90 | 110 | 110 | 110 |
| Current mean speed of travel (km/h) | 95 | 109 | – | 82 |
There is a significant difference between the results in Iran and those in Sweden, Norway, and Australia; this difference results from the difference in the costs between Iran and these three countries. Also, the functions of fuel consumption and pollutant emission are different.
If you look at the first graph, you can be forgiven for thinking that the optimum speed is 95 kph, as Ruud Horman (from the BNN Vara piece) seems to think. However, as the author of this study is very careful to point out, it’s a very constrained study and there are per-country differences – these results are only any good for a very specific highway in a very specific country.
They come out with a whole load of pretty pictures based on the following graph:
x= intensity, y= speed.
There are quite a lot of graphs like this. So, if the speed limit is 120 kph (red dots) and the intensity is 6000 (heavy), then the actual speed is likely to be around 100 kph on the A16. However, if the speed limit is 130 kph with the same intensity – oh wait, it doesn’t get to the same intensity. You seem to have higher intensities more often with a speed limit of 120 kph. But if we have an intensity of around 3000 (which I guess is moderate), then you see that quite often the speed is 125 with a speed limit of 130 and around 100 with a speed limit of 120. However, with that intensity you see that there are slightly more datapoints at around 20–50 kph if your speed limit is 130 kph than if it’s 120 kph.
Oddly enough, they never added data from 100 kph roads, of which there were (and are) plenty. They also never take into account variable speed limits. The 120 kph limit is based on data taken in 2012 and the 130 kph limit is based on data from 2018.
Their conclusion – that raising the speed limit wins you time when the roads are quiet and puts you into a traffic jam when the roads are busy – is spurious and lacks the data to support it.
The conclusion is pretty tough reading, but the graphs are quite clear.
What they are basically saying is: we researched it pretty well and we had a look at the distribution of vehicle types. Basically, if you set a higher speed limit, people will drive faster. There is variability (the bars you see up and down the lines), so sometimes they will drive faster and sometimes they will drive slower, but they generally go faster on average with a higher speed limit.
Now one more argument is that the average commute is only about an hour per day, so going slower would only cost you a few minutes. Going from 130 to 100 kph is a 30% drop in speed. Over an hour-long trip (say 100 km), that works out to roughly a 14-minute difference, assuming you can travel the whole distance at that speed (what they call free-flow conditions). Sure, you’ll never get that, but over large distances you can come close. Anyway, let’s be generous and call it a 10-minute difference. The argument then becomes that this is barely the time for a cup of tea. But it’s 10 minutes EVERY WORKING DAY! Excluding weekends and holidays, you can expect to make that commute around 250 times per year, making your net loss 2,500 minutes (at least), which is over 41 hours – a full working week you now have to spend extra in the car!
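For completeness, here is that arithmetic spelled out; the 100 km trip length, the free-flow assumption, and the 250 working days are the assumptions from the paragraph above.

```python
# Back-of-the-envelope commute arithmetic for the paragraph above.
# Assumptions: ~100 km trip, ideal free-flow speeds, ~250 working days/year.

distance_km = 100
minutes_at_130 = distance_km / 130 * 60                  # ~46 min
minutes_at_100 = distance_km / 100 * 60                  # 60 min
ideal_loss_per_trip = minutes_at_100 - minutes_at_130    # ~14 min in free flow

realistic_loss_per_day = 10                              # conservative figure used above
working_days = 250
yearly_loss_min = realistic_loss_per_day * working_days
print(round(ideal_loss_per_trip, 1), yearly_loss_min, round(yearly_loss_min / 60, 1))
# 13.8  2500  41.7  -> roughly a full working week per year
```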
In short: reducing the speed limit looks like poor populist policy designed to appease the farmers and to look like Something Is Being Done™, without actually achieving anything real except pissing off commuters.
According to a Verizon press release, the new Motorola Razr will clock in at the eye-popping price of $1,500 retail (still less than foldable competitors Samsung Galaxy Fold at $1,980 or more and Huawei Mate X at $2,420). Its 6.2-inch screen is ultrawide and the device packs a 16-megapixel main camera; Verizon added that when folded, the Razr’s “touchscreen Quick View external display lets you respond to notifications, take selfies, play your music, use Google Assistant, and more without having to flip open your phone.”
Slashgear has some more details on the device, including that the main display is a pOLED running at 2142 x 876 resolution, while the Quick View display is a 2.7-inch OLED running at 600 x 800. Replying to text messages and emails via the external display requires using smart replies or dictation, though it will also function as a music controller and preview screen for the camera. It also has a Snapdragon 710 processor, 6GB of memory, and 128GB of storage, running Android 9 Pie.
Downsides noted by Slashgear include no wireless charging and fast charging that caps out at 15W, as well as a 2,510 mAh battery. That’s considerably lower than the 3,000 mAh battery in Samsung’s flagship Galaxy S10 and newer iPhones, most models of which come in closer to or slightly over 3,000 mAh. Additionally, the new Razr follows other manufacturers’ leads by ditching the 3.5mm headphone jack for a USB-C connector, a decision widely reviled by consumers used to simply plugging in whatever headphones they have available at the moment. And despite Verizon’s big talk about their 5G network, the Razr will cap out at current-gen 4G LTE speeds.
The Air Force Research Laboratory demonstrated a new and ultra-responsive approach to turbine engine development with the initial testing of the Responsive Open Source Engine (ROSE) on Nov. 6, 2019, at Wright-Patterson Air Force Base.
The Aerospace Systems Directorate’s ROSE is the first turbine engine designed, assembled, and tested exclusively in-house. The entire effort, from concept initiation to testing, was executed within 13 months. This program responds to the Air Force’s desire for rapid demonstration of new technologies and faster, less expensive prototypes.
“We decided the best way to make a low-cost, expendable engine was to separate the development costs from procurement costs,” said Frank Lieghley, Aerospace Systems Directorate Turbine Engine Division senior aerospace engineer and project manager. He explained that because the design and development were conducted in-house, the Air Force owns the intellectual property behind it. Therefore, once the engine is tested and qualified, the Air Force can forego the typical and often slow development process, instead opening the production opportunity to lower-cost manufacturers better able to economically produce the smaller production runs needed for new Air Force platforms.
The applications for this class of engine are many and varied, but the development and advancement of platforms that could make use of it have typically been stymied because the engines have been too expensive. Through this effort, AFRL hopes to lower the engine cost to roughly one fourth of the cheapest current alternative, an almost unheard-of price for such technology, thus enabling a new class of air vehicles that can capitalize on the less expensive engine.
[…]
by working closely with other AFRL organizations, including the Materials and Manufacturing Directorate and the Air Force Institute of Technology, the team leveraged internal expertise that helped advance the project. Additionally, by starting from scratch and performing all the work themselves, the AFRL team developed new tools and models that will be available for use in future iterations and new engine design projects.
[…]
“There’s not an Air Force engine fielded today whose technology can’t be traced back to Turbine Engine Division in-house work,” he said. “We’ll eventually hand this off to a manufacturer, but this one is all AFRL on the inside.”
The rise of the internet and the advent of social media have fundamentally changed the information ecosystem, giving the public direct access to more information than ever before. But it’s often nearly impossible to distinguish between accurate information and low-quality or false content. This means that disinformation — false or intentionally misleading information that aims to achieve an economic or political goal — can become rampant, spreading further and faster online than it ever could in another format.
As part of its Truth Decay initiative, RAND is responding to this urgent problem. Researchers identified and characterized the universe of online tools developed by nonprofits and civil society organizations to target online disinformation. The tools in this database are aimed at helping information consumers, researchers, and journalists navigate today’s challenging information environment. Researchers identified and characterized each tool on a number of dimensions, including the type of tool, the underlying technology, and the delivery format.
When you’re scrolling through Facebook’s app, the social network could be watching you back, concerned users have found. Multiple people have found and reported that their iPhone cameras were turned on in the background while they were looking at their feed.
The issue came to light through several posts on Twitter. Users noted that their cameras were activated behind Facebook’s app as they were watching videos or looking at photos on the social network.
After people clicked on a video to view it full screen, returning it to normal would create a bug in which Facebook’s mobile layout was slightly shifted to the right. With the open space on the left, you could see the phone’s camera activated in the background.
This was documented in multiple cases, with the earliest incident on Nov. 2.
It’s since been tweeted a couple of other times, and CNET has also been able to replicate the issue.
Dutch media mogul John de Mol has successfully sued FB and forced them to remove fake ads in which he appears to endorse bitcoin and other cryptocurrencies (he doesn’t). Such ads will not be allowed in the future either, and FB must give him the details of the parties who placed the adverts on FB. FB is liable for fines of up to EUR 1.1 million if they don’t comply.
Between October 2018 and at least March 2019, a series of fake ads were placed on FB and Instagram that had him endorsing the crypto. He didn’t endorse them at all, and not only that, they were a scam: the buyers never received any crypto after purchasing from the sites. The scammers took in at least EUR 1.7 million.
The court did not accept FB’s argument that it is a neutral party merely passing on information. The court argues that FB has a responsibility to guard against breaches of third-party rights. The fact that the ads decreased drastically in frequency after John de Mol contacted FB shows the court that it is well within FB’s technical capabilities to guard against these breaches.
The first human vaccine against the often-fatal viral disease Ebola is now an official reality. On Monday, the European Union approved a vaccine developed by the pharmaceutical company Merck, called Ervebo.
The stage for Ervebo’s approval was set this October, when a committee assembled by the European Medicines Agency (EMA) recommended a conditional marketing authorization for the vaccine by the EU. Conditional marketing authorizations are given to new drugs or therapies that address an “unmet medical need” for patients. These drugs are approved on a quicker schedule than the typical new drug and require less clinical trial data to be collected and analyzed for approval.
In Ervebo’s case, though, the data so far seems to be overwhelmingly positive. In April, the World Health Organization revealed the preliminary results of its “ring vaccination” trials with Ervebo during the current Ebola outbreak in the Democratic Republic of Congo. Out of the nearly 100,000 people vaccinated up until that time, less than 3 percent went on to develop Ebola. These results, coupled with earlier trials dating back to the historic 2014-2015 outbreak of Ebola that killed over 10,000 people, secured Ervebo’s approval by the committee.
“Finding a vaccine as soon as possible against this terrible virus has been a priority for the international community ever since Ebola hit West Africa five years ago,” Vytenis Andriukaitis, commissioner in charge of Health and Food Safety at the EU’s European Commission, said in a statement announcing the approval. “Today’s decision is therefore a major step forward in saving lives in Africa and beyond.”
Although the marketing rights for Ervebo are held by Merck, it was originally developed by researchers from the Public Health Agency of Canada, which still maintains non-commercial rights.
The vaccine’s approval, significant as it is, won’t tangibly change things on the ground anytime soon. In October, the WHO said that licensed doses of Ervebo will not be available to the world until the middle of 2020. In the meantime, people in vulnerable areas will still have access to the vaccine through the current experimental program. Although Merck has also submitted Ervebo for approval by the Food and Drug Administration in the U.S., the agency’s final decision isn’t expected until next year as well.
IT guru Bob Gendler took to Medium last week to share a startling discovery about Apple Mail. If you have the application configured to send and receive encrypted email—messages that should be unreadable for anyone without the right decryption keys—Apple’s digital assistant goes ahead and stores your emails in plain text on your Mac’s drive.
More frustrating, you can have Siri completely disabled on your Mac, and your messages will still appear within a Mac database known as snippets.db. A process known as suggestd will still comb through your emails and dump them into this plaintext database. This issue, according to Gendler, is present on multiple iterations of macOS, including the most recent Catalina and Mojave builds.
“I discovered this database and what’s stored there on July 25th and began extensively testing on multiple computers with Apple Mail set up and fully confirming this on July 29th. Later that week, I confirmed this database exists on 10.12 machines up to 10.15 and behaves the same way, storing encrypted messages unencrypted. If you have iCloud enabled and Siri enabled, I know there is some data sent to Apple to help with improving Siri, but I don’t know if that includes information from this database.”
Consider keeping Siri out of your email
While Apple is currently working on a fix for the issues Gendler raised, there are two easy ways you can ensure that your encrypted emails aren’t stored unencrypted on your Mac. First, you can disable Siri Suggestions for Mail within the “Siri” section of System Preferences.
Second, you can fire up Terminal and enter this command:
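(The exact command isn’t reproduced in this excerpt. The version usually cited alongside the original article is a `defaults` write that tells Siri Suggestions to ignore Mail; treat the key name and bundle identifier below as unverified assumptions and check Gendler’s post or the original article before running anything.)

```
# Unverified sketch of the commonly cited command -- confirm against the source.
defaults write com.apple.suggestions.plist SiriCanLearnFromAppBlacklist -array com.apple.mail
```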
Regardless of which option you pick, you’ll want to delete the snippets.db file, as disabling Siri’s collection capabilities doesn’t automatically remove what’s already been collected (obviously). You’ll be able to find this by pulling up your Mac’s drive (Go > Computer) and doing a quick search for “snippets.db.”
Apple also told The Verge that you can limit which apps are allowed to have Full Disk Access on your Mac—via System Preferences > Security & Privacy > Privacy tab—to ensure that they can’t access your snippets.db file. You can also turn on FileVault, which will prevent your emails from appearing as plaintext within snippets.db.
A large-scale academic study that analyzed more than 53,000 product pages on more than 11,000 online stores found widespread use of user interface “dark patterns” – practices meant to mislead customers into making purchases based on false or misleading information.
The study — presented last week at the ACM CSCW 2019 conference — found 1,818 instances of dark patterns present on 1,254 of the ∼11K shopping websites (∼11.1%) researchers scanned.
“Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns,” researchers said.
But while the vast majority of UI dark patterns were meant to trick users into subscribing to newsletters or allowing broad data collection, some dark patterns were downright foul, trying to mislead users into making additional purchases, either by sneaking products into shopping carts or tricking users into believing products were about to sell out.
Of these, the research team found 234 instances, deployed across 183 websites.
Below are some of the examples of UI dark patterns that the research team found currently employed on today’s most popular online stores.
1. Sneak into basket
Adding additional products to users’ shopping carts without their consent.
Prevalence: 7 instances across 7 websites.
2. Hidden costs
Revealing previously undisclosed charges to users right before they make a purchase.
Prevalence: 5 instances across 5 websites.
3. Hidden subscription
Charging users a recurring fee under the pretense of a one-time fee or a free trial.
Prevalence: 14 instances across 13 websites.
4. Countdown timer
Indicating to users that a deal or discount will expire using a counting-down timer.
Prevalence: 393 instances across 361 websites.
5. Limited-time message
Indicating to users that a deal or sale will expire soon without specifying a deadline, thus creating uncertainty.
Prevalence: 88 instances across 84 websites.
6. Confirmshaming
Using language and emotion (shame) to steer users away from making a certain choice.
Prevalence: 169 instances across 164 websites.
7. Visual interference
Using style and visual presentation to steer users to or away from certain choices.
Prevalence: 25 instances across 24 websites.
8. Trick questions
Using confusing language to steer users into making certain choices.
Prevalence: 9 instances across 9 websites.
9. Pressured selling
Pre-selecting more expensive variations of a product, or pressuring the user to accept the more expensive variations of a product and related products.
Prevalence: 67 instances across 62 websites.
10. Activity messages
Informing the user about the activity on the website (e.g., purchases, views, visits).
Prevalence: 313 instances across 264 websites.
11. Testimonials of uncertain origin
Testimonials on a product page whose origin is unclear.
Prevalence: 12 instances across 12 websites.
12. Low-stock message
Indicating to users that limited quantities of a product are available, increasing its desirability.
Prevalence: 632 instances across 581 websites.
13. High-demand message
Indicating to users that a product is in high demand and likely to sell out soon, increasing its desirability.
Prevalence: 47 instances across 43 websites.
14. Hard to cancel
Making it easy for the user to sign up for a recurring subscription, but requiring an email or a call to customer care to cancel.
Prevalence: 31 instances across 31 websites.
15. Forced enrollment
Coercing users to create accounts or share their information to complete their tasks.
Prevalence: 6 instances across 6 websites.
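The researchers used a purpose-built crawler plus clustering and manual review to find these. As a rough illustration of how some of the purely text-based patterns above (low-stock, high-demand, limited-time messages) could be flagged automatically, here is a small sketch; the regexes are my own assumptions, not the study’s actual classifier.

```python
import re

# Illustrative only: crude regex signals for a few text-based dark patterns
# listed above. The actual study used a crawler plus clustering and manual
# review, not these hand-written rules.
SIGNALS = {
    "low_stock":    re.compile(r"only\s+\d+\s+left(\s+in\s+stock)?", re.I),
    "high_demand":  re.compile(r"\d+\s+people\s+(are\s+)?(viewing|looking at)\s+this", re.I),
    "limited_time": re.compile(r"(hurry|ends soon|limited time|sale ends)", re.I),
}

def flag_dark_patterns(page_text):
    """Return the names of the signals whose pattern matches the page text."""
    return [name for name, rx in SIGNALS.items() if rx.search(page_text)]

print(flag_dark_patterns("Hurry! Only 2 left in stock. 14 people are viewing this item."))
# ['low_stock', 'high_demand', 'limited_time']
```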
The research team behind this project, made up of academics from Princeton University and the University of Chicago, expects these UI dark patterns to become even more popular in the coming years.
One reason, they said, is that there are third-party companies that currently offer dark patterns as a turnkey solution, either in the form of store extensions and plugins or on-demand store customization services.
The table below lists the 22 third parties that the research team identified in the course of their study as providers of turnkey solutions for dark pattern-like behavior.
The lithium-ion battery pack in today’s Tesla Model 3 has an estimated 168 Wh/kg. And important as this energy-per-weight ratio is for electric cars, it’s more important still for electric aircraft.
Now comes Oxis Energy, of Abingdon, UK, with a battery based on lithium-sulfur chemistry that it says can greatly increase the ratio, and do so in a product that’s safe enough for use even in an electric airplane. Specifically, a plane built by Bye Aerospace, in Englewood, Colo., whose founder, George Bye, described the project in this 2017 article for IEEE Spectrum.
The two companies said in a statement that they were beginning a one-year joint project to demonstrate feasibility. They said the Oxis battery would provide “in excess” of 500 Wh/kg, a number which appears to apply to the individual cells, rather than the battery pack, with all its packaging, power electronics, and other paraphernalia. That per-cell figure may be compared directly to the “record-breaking energy density of 260 watt-hours per kilogram” that Bye cited for the batteries his planes were using in 2017.
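To see why the cell-versus-pack distinction matters, here’s a rough sketch; the cell-to-pack factor is a made-up but plausible overhead assumption, not a figure from Oxis or Bye Aerospace.

```python
# Rough illustration of cell-level vs pack-level specific energy.
# The 0.7 cell-to-pack factor is an assumption for illustration only:
# packaging, power electronics, cooling and structure all eat into it.

cell_wh_per_kg = 500           # Oxis's claimed per-cell figure
cell_to_pack_factor = 0.7      # hypothetical overhead factor
pack_wh_per_kg = cell_wh_per_kg * cell_to_pack_factor
print(pack_wh_per_kg)          # 350 Wh/kg at pack level under this assumption,
                               # vs ~168 Wh/kg quoted above for the Model 3 pack
```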
[…]
One reason why lithium-sulfur batteries have been on the sidelines for so long is their short life, due to degradation of the cathode during the charge-discharge cycle. Oxis expects its batteries will be able to last for 500 such cycles within the next two years. That’s about par for the course for today’s lithium-ion batteries.
Another reason is safety: Lithium-sulfur batteries have been prone to overheating. Oxis says its design incorporates a ceramic lithium sulfide as a “passivation layer,” which blocks the flow of electricity—both to prevent sudden discharge and the more insidious leakage that can cause a lithium-ion battery to slowly lose capacity even while just sitting on a shelf. Oxis also uses a non-flammable electrolyte.
Presumably there is more to Oxis’s secret sauce than these two elements: The company says it has 186 patents, with 87 more pending.
The Wall Street Journal reported Monday that the tech giant partnered with Ascension, a Catholic non-profit health system, on the program code-named “Project Nightingale.” According to the Journal, Google began its initiative with Ascension last year, and it involves diagnoses, lab results, birth dates, patient names, and other personal health data—all of it reportedly handed over to Google without first notifying patients or doctors. The Journal said this amounts to data on millions of Americans spanning 21 states.
“By working in partnership with leading healthcare systems like Ascension, we hope to transform the delivery of healthcare through the power of the cloud, data analytics, machine learning, and modern productivity tools—ultimately improving outcomes, reducing costs, and saving lives,” Tariq Shaukat, president of Google Cloud, said in a statement.
Beyond the alarming reality that a tech company can collect data about people without their knowledge for its own uses, the Journal noted it’s legal under the Health Insurance Portability and Accountability Act (HIPAA). When reached for comment, representatives for both companies pointed Gizmodo to a press release about the relationship—which the Journal stated was published after its report—that states: “All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”
Still, the Journal report raises concerns about whether the data handling is indeed as secure as both companies appear to think it is. Citing a source familiar with the matter as well as related documents, the paper said at least 150 employees at Google have access to a significant portion of the health data Ascension handed over on millions of people.
Google hasn’t exactly proven itself to be infallible when it comes to protecting user data. Remember when Google+ users had their data exposed and Google did nothing to alert them in order to shield its own ass? Or when a Google contractor leaked more than a thousand Assistant recordings, and the company defended itself by claiming that most of its audio snippets aren’t reviewed by humans? Not exactly the kind of stuff you want to read about a company that may have your medical history on hand.
The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.
DeepMind announced in February that it was working with the NHS, saying it was building an app called Streams to help hospital staff monitor patients with kidney disease. But the agreement suggests that it has plans for a lot more.
This is the first we’ve heard of DeepMind getting access to historical medical records, says Sam Smith, who runs health data privacy group MedConfidential. “This is not just about kidney function. They’re getting the full data.”
The agreement clearly states that Google cannot use the data in any other part of its business. The data itself will be stored in the UK by a third party contracted by Google, not in DeepMind’s offices. DeepMind is also obliged to delete its copy of the data when the agreement expires at the end of September 2017.
All data needed
Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”
While one key official has sought to blame a single individual for the system going dark, insiders warn that organizational chaos, excessive secrecy and some unusual self-regulation is as much to blame.
Combined with those problems, a battle between European organizations over the satellite system, and a delayed independent report into the July cock-up, means things aren’t looking good for Europe’s answer to America’s GPS system. A much needed shake-up may be on its way.
In mid-July, the agency in charge of the network of 26 satellites, the European Global Navigation Satellite Systems Agency (EGSA), warned of a “service degradation” but assured everyone that it would quickly be resolved.
It wasn’t resolved, however, and six days later the system was not only still down but getting increasingly inaccurate, with satellites reporting that they were in completely different positions in orbit than they were supposed to be – a big problem for a system whose entire purpose is to provide state-of-the-art positional accuracy to within 20 centimeters.
Billions of organizations, individuals, phones, apps and so on from across the globe simply stopped listening to Galileo. It’s hard to imagine a bigger mess, aside from the satellites crashing down to Earth.
But despite the outage and widespread criticism over the failure of those behind Galileo to explain what was going on and why, there has been almost no information from the various space agencies and organizations involved in the project.
Ubiquiti Networks is fending off customer complaints after emitting a firmware update that caused its UniFi wireless routers to quietly phone HQ with telemetry.
It all kicked off when the US-based manufacturer confirmed that a software update released this month programmed the devices to establish secure connections back to Ubiquiti servers and report information on Wi-Fi router performance and crashes.
Ubiquiti told customers all of the information is being handled securely, and has been cleared to comply with GDPR, Europe’s data privacy rules. Punters are upset they weren’t warned of the change.
“We have started to gather crashes and other critical events strictly for the purpose of improving our products,” the hardware maker said. “Any data collected is completely anonymized, GDPR compliant, transmitted using end-to-end encryption and encrypted at rest. The collection of this data does not and should not ever impact performance of devices.”
The assurance was of little consolation to UniFi owners who bristled at the idea of any of their data being collected, particularly without any notification or permission. Enterprise customers in particular were less than thrilled to learn diagnostic data was being exfiltrated off their networks.
“Undisclosed backdooring of my network is completely unacceptable and will result in no longer recommending, using, or selling of Ubiquiti gear,” remarked one netizen using the alias Private_.
“I realize that UBNT is too big to care about the few tens of $K per year that I generate for them, but I want to formally and clearly disclose my privacy policy/EULA, so that we understand each other. This is a stealth network intrusion and I don’t/won’t accept it.”
Security researchers have discovered a vulnerability in Ring doorbells that exposed the passwords for the Wi-Fi networks to which they were connected.
Bitdefender said the Amazon-owned doorbell was sending owners’ Wi-Fi passwords in cleartext as the doorbell joins the local network, allowing nearby hackers to intercept the Wi-Fi password and gain access to the network to launch larger attacks or conduct surveillance.
“When first configuring the device, the smartphone app must send the wireless network credentials. This takes place in an unsecure manner, through an unprotected access point,” said Bitdefender. “Once this network is up, the app connects to it automatically, queries the device, then sends the credentials to the local network.”
But all of this is carried out over an unencrypted connection, exposing the Wi-Fi password that is sent over the air.
Amazon fixed the vulnerability in all Ring devices in September, but the flaw was only disclosed today.
The US Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency’s Office of Procurement Operations reviewed by Quartz.
That’s about 40 million more than the agency’s 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.
A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world’s second-largest, behind only India’s countrywide biometric ID network in size. The traveler data kept by DHS is shared with other US agencies, state and local law enforcement, as well as foreign governments.
The first two stages of the HART system are being developed by US defense contractor Northrop Grumman, which won the $95 million contract in February 2018. DHS wasn’t immediately available to comment on its plans for its database.
[…]
Last month’s DHS presentation describes IDENT as an “operational biometric system for rapid identification and verification of subjects using fingerprints, iris, and face modalities.” The new HART database, it says, “builds upon the foundational functionality within IDENT,” to include voice data, DNA profiles, “scars, marks, and tattoos,” and the as-yet undefined “other biometric modalities as required.” EFF researchers caution some of the data will be “highly subjective,” such as information gleaned during “officer encounters” and analysis of people’s “relationship patterns.”
EFF worries that such tracking “will chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate,” since such specific data points could be used to identify “political affiliations, religious activities, and familial and friendly relationships.”
[…]
EFF researchers said in a 2018 blog post that facial-recognition software, like what the DHS is using, is “frequently…inaccurate and unreliable.” DHS’s own tests found the systems “falsely rejected as many as 1 in 25 travelers,” according to EFF, which calls out potential foreign partners in countries such as the UK, where false-positives can reportedly reach as high as 98%. Women and people of color are misidentified at rates significantly higher than whites and men, and darker skin tones increase one’s chances of being improperly flagged.
“DHS is also partnering with airlines and other third parties to collect face images from travelers entering and leaving the US,” the EFF said. “When combined with data from other government agencies, these troubling collection practices will allow DHS to build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like airports, but anywhere there are cameras.”
New research from a duo of environmental engineers at Drexel University is suggesting the decades-old claim that house plants improve indoor air quality is entirely wrong. Evaluating 30 years of studies, the research concludes it would take hundreds of plants in a small space to even come close to the air purifying effects of simply opening a couple of windows.
Back in 1989 an incredibly influential NASA study discovered a number of common indoor plants could effectively remove volatile organic compounds (VOCs) from the air. The experiment, ostensibly conducted to investigate whether plants could assist in purifying the air on space stations, gave birth to the idea of plants in home and office environments helping clear the air.
Since then, a number of experimental studies have seemed to verify NASA’s findings that plants do remove VOCs from indoor environments. Michael Waring, a professor of architectural and environmental engineering at Drexel University, and one of his PhD students, Bryan Cummings, were skeptical of this common consensus. The problem they saw was that the vast majority of these experiments were not conducted in real-world environments.
“Typical for these studies a potted plant was placed in a sealed chamber (often with a volume of a cubic meter or smaller), into which a single VOC was injected, and its decay was tracked over the course of many hours or days,” the duo writes in their study.
To better understand exactly how well potted plants can remove VOCs from indoor environments, the researchers reviewed the data from a dozen published experiments. They evaluated the efficacy of a plant’s ability to remove VOCs from the air using a metric called CADR, or clean air delivery rate.
“The CADR is the standard metric used for scientific study of the impacts of air purifiers on indoor environments,” says Waring, “but many of the researchers conducting these studies were not looking at them from an environmental engineering perspective and did not understand how building air exchange rates interplay with the plants to affect indoor air quality.”
Once the researchers had calculated the rate at which plants dissipated VOCs in each study, they quickly discovered that the effect of plants on air quality in real-world scenarios was essentially irrelevant. Air handling systems in big buildings were found to be significantly more effective at dissipating VOCs in indoor environments. In fact, it would take as many as 1,000 plants per square meter (10.7 sq ft) of floor space to match the air-cleaning effect of the standard outdoor-to-indoor air exchange that already exists in most large buildings.
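To make the CADR comparison concrete, here’s a small sketch of the arithmetic; the per-plant CADR, room size, and air-change rate below are illustrative assumptions, not numbers taken from the Drexel paper.

```python
# Illustrative CADR comparison: potted plants vs ordinary building ventilation.
# All numbers are assumptions for illustration, not values from the study.

room_floor_m2 = 20
ceiling_height_m = 2.5
room_volume_m3 = room_floor_m2 * ceiling_height_m                    # 50 m^3

air_changes_per_hour = 1.0                                           # modest ventilation
ventilation_cadr_m3_per_h = room_volume_m3 * air_changes_per_hour    # 50 m^3/h

plant_cadr_m3_per_h = 0.05                                           # hypothetical per-plant CADR
plants_needed = ventilation_cadr_m3_per_h / plant_cadr_m3_per_h
print(plants_needed)   # 1000.0 plants to match even this modest air exchange
```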
In modern cities, we’re constantly surveilled through CCTV cameras in both public and private spaces, and by companies trying to sell us shit based on everything we do. We are always being watched.
But what if a simple T-shirt could make you invisible to commercial AIs trying to spot humans?
A team of researchers from Northeastern University, IBM, and MIT developed a T-shirt design that hides the wearer from image recognition systems by confusing the algorithms trying to spot people, effectively making the wearer invisible to them.
[…]
A T-shirt is a low-barrier way to move around the world unnoticed by AI watchers. Previously, researchers have tried to create adversarial fashion using patches attached to stiff cardboard, so that the design doesn’t distort on soft fabric while the wearer moves. If the design is warped or part of it isn’t visible, it becomes ineffective.
No one’s going to start carrying cardboard patches around, and most of us probably won’t put Juggalo paint on our faces (at least not until everyone’s doing it), so the researchers came up with an approach to account for the ways that moving cloth distorts an image when generating an adversarial design to print on a shirt. As a result, the new shirt allows the wearer to move naturally while (mostly) hiding the person.
It would be easy to dismiss this sort of thing as too far-fetched to become reality. But as more cities around the country push back against facial recognition in their communities, it’s not hard to imagine some kind of hypebeast Supreme x MIT collab featuring adversarial tees to fool people-detectors in the future. Security professional Kate Rose’s shirts that fool Automatic License Plate Readers, for example, are for sale and walking amongst us already.
The mind-bending calculations required to predict how three heavenly bodies orbit each other have baffled physicists since the time of Sir Isaac Newton. Now artificial intelligence (A.I.) has shown that it can solve the problem in a fraction of the time required by previous approaches.
Newton was the first to formulate the problem in the 17th century, but finding a simple way to solve it has proved incredibly difficult. The gravitational interactions between three celestial objects like planets, stars and moons result in a chaotic system — one that is complex and highly sensitive to the starting positions of each body.
[…]
The algorithm they built provided accurate solutions up to 100 million times faster than the most advanced software program, known as Brutus.
[…]
Neural networks must be trained by being fed data before they can make predictions. So the researchers had to generate 9,900 simplified three-body scenarios using Brutus, the current leader when it comes to solving three-body problems.
They then tested how well the neural net could predict the evolution of 5,000 unseen scenarios, and found its results closely matched those of Brutus. However, the A.I.-based program solved the problems in an average of just a fraction of a second, compared with nearly 2 minutes.
The reason programs like Brutus are so slow is that they solve the problem by brute force, said Foley, carrying out calculations for each tiny step of the celestial bodies’ trajectories. The neural net, on the other hand, simply looks at the movements those calculations produce and deduces a pattern that can help predict how future scenarios will play out.
That presents a problem for scaling the system up, though, Foley said. The current algorithm is a proof of concept and learned from simplified scenarios, but training on more complex ones, or even increasing the number of bodies involved to four or five, first requires you to generate the data on Brutus, which can be extremely time-consuming and expensive.
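For readers curious what the surrogate-model idea looks like in practice, here is a minimal sketch (not the authors’ network or data): train a small neural net on trajectories produced by a slow “ground truth” integrator, then use the cheap forward pass for new scenarios. The toy simulator below stands in for Brutus purely so the example runs.

```python
# Minimal sketch of the surrogate-model idea (not the paper's architecture).
# Learn a mapping from (initial conditions, time) -> positions using data
# generated by a slow reference integrator; a toy function stands in for
# Brutus here so the example is self-contained.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_simulator(x0, t):
    """Stand-in for Brutus: a real pipeline would run an N-body integrator."""
    return np.sin(x0.sum() + t), np.cos(x0.sum() - t)

# Build a training set of (initial conditions, time) -> "positions".
X, y = [], []
for _ in range(5000):
    x0 = rng.uniform(-1, 1, size=4)      # simplified initial conditions
    t = rng.uniform(0, 10)
    X.append(np.append(x0, t))
    y.append(toy_simulator(x0, t))
X, y = np.array(X), np.array(y)

# A small fully connected network acts as the fast surrogate.
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
net.fit(X[:4000], y[:4000])

# Prediction is now a single forward pass -- the source of the speed-up
# over re-running the integrator for every new scenario.
print("held-out R^2:", net.score(X[4000:], y[4000:]))
```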