King’s College London breached the General Data Protection Regulation (GDPR) when it shared a list of student activists with the police and barred the activists from campus during a visit by the Queen, an independent report (PDF) has found.
Some 13 students and one member of staff were unable to access any of the campus sites as their cards had been deactivated to prevent access to the Bush House site, which was opened by the Queen on March 19.
In a foreword to the report, Professor Evelyn Welch, acting principal at KCL, said the university accepts the findings and recommendations in full and is putting in place a plan to address all the issues raised.
“One of the findings of the report is that we have breached our own policies regarding protection of personal information and the GDPR,” she wrote. “Following the event, we informed the Information Commissioner’s Office that we were undertaking this review. We have now shared the report with them and await their response.
“The report also contains recommendations about our security arrangements which we will follow as we bring our operations in house and a new Head of Security joins us.”
Welch said that while some have interpreted the actions taken on the day as racial profiling, “this was not the case and I want to reiterate that discrimination on any grounds is unacceptable and is damaging to our community.”
The report’s author, Laura Gibbs, concluded that the security team had “overstepped the boundaries” when it compiled the list of activists and shared it with the Met Police.
She said the barring from campus of individuals “against whom there was neither evidence of criminal activity nor any internal disciplinary findings” was disproportionate and “against King’s stated values.”
One student was blocked from entering a KCL building for an exam in south London, and was only able to enter after on-site security staff reinstated their card.
An industry group of internet service providers has branded Firefox browser maker Mozilla an “internet villain” for supporting a DNS security standard.
The U.K.’s Internet Services Providers’ Association (ISPA), the trade group for U.K. internet service providers, nominated the browser maker for its proposed effort to roll out the security feature, which they say will allow users to “bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.”
Mozilla said late last year it was planning to test DNS-over-HTTPS with a small number of users.
Whenever you visit a website — even if it’s HTTPS enabled — the DNS query that converts the web address into an IP address that computers can read is usually sent unencrypted. Mozilla implements the security standard at the app level, making Firefox the first major browser to support DNS-over-HTTPS. Encrypting the DNS query also protects it against man-in-the-middle attacks, in which attackers hijack the request and point victims to a malicious page instead.
DNS-over-HTTPS also improves performance, making DNS queries — and the overall browsing experience — faster.
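To make the mechanics concrete: under RFC 8484, the client serializes an ordinary wire-format DNS query and carries it inside an HTTPS request. Below is a minimal sketch that builds such a query and the corresponding GET URL. Cloudflare’s public DoH endpoint is used purely as an illustration, and nothing is actually sent over the network.

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 DNS query message (qtype 1 = A record)."""
    header = struct.pack(">HHHHHH",
                         0,       # ID 0 (RFC 8484 suggests a fixed ID for cacheability)
                         0x0100,  # flags: standard query, recursion desired
                         1,       # one question
                         0, 0, 0) # no answer/authority/additional records
    # Encode the name as length-prefixed labels, ending with the root label.
    name = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = name + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(resolver: str, hostname: str) -> str:
    """Encode the query for an RFC 8484 GET request (base64url, padding stripped)."""
    query = build_dns_query(hostname)
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

url = doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
print(url)
```

Because the whole exchange rides inside an ordinary TLS connection, an on-path observer (such as an ISP running a filtering resolver) sees only that the client contacted the DoH server, not which hostname was looked up.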
But the ISPA doesn’t think DNS-over-HTTPS is compatible with the U.K.’s current website blocking regime.
Under U.K. law, websites can be blocked for facilitating copyright or trademark infringement, or if they are deemed to contain terrorist material or child abuse imagery. Critics claim that encrypting DNS queries will make it more difficult for internet providers to filter their subscribers’ internet access.
The ISPA isn’t alone. U.K. spy agency GCHQ and the Internet Watch Foundation, which maintains the U.K.’s internet blocklist, have criticized the move to roll out encrypted DNS features in the browser.
The ISPA’s nomination quickly drew ire from the security community. Amid a backlash on social media, the ISPA doubled down on its position: “Bringing in DNS-over-HTTPS by default would be harmful for online safety, cybersecurity and consumer choice,” it said, while adding that it encourages “further debate.”
One internet provider, Andrews & Arnold, donated £2,940 — around $3,670 — to Mozilla in support of the nonprofit. “The amount was chosen because that is what our fee for ISPA membership would have been, were we a member,” said a tweet from the company.
Mozilla spokesperson Justin O’Kelly told TechCrunch: “We’re surprised and disappointed that an industry association for ISPs decided to misrepresent an improvement to decades old internet infrastructure.”
“Despite claims to the contrary, a more private DNS would not prevent the use of content filtering or parental controls in the UK. DNS-over-HTTPS (DoH) would offer real security benefits to UK citizens. Our goal is to build a more secure internet, and we continue to have a serious, constructive conversation with credible stakeholders in the UK about how to do that,” he said.
“We have no current plans to enable DNS-over-HTTPS by default in the U.K. However, we are currently exploring potential DNS-over-HTTPS partners in Europe to bring this important security feature to other Europeans more broadly,” he added.
Mozilla isn’t the first to roll out DNS-over-HTTPS. Last year Cloudflare released a mobile version of its 1.1.1.1 privacy-focused DNS service to include DNS-over-HTTPS. Months earlier, Google-owned Jigsaw released its censorship-busting app Intra, which aimed to prevent DNS manipulation.
Mozilla has yet to set a date for the full release of DNS-over-HTTPS in Firefox.
Before Google, Facebook and Amazon, tech dominance was known by a single name: Microsoft.
And no product was more dominant than Microsoft’s web browser, Internet Explorer. The company’s browser was the gateway to the internet for about 95 percent of users in the early 2000s, which helped land Microsoft at the center of a major government effort to break up the company.
Almost two decades later, Google’s Chrome now reigns as the biggest browser on the block, and the company is facing challenges similar to Microsoft’s from competitors, as well as government scrutiny.
But Google faces a new wrinkle — a growing realization among consumers that their every digital move is tracked.
“I think Cambridge Analytica acted as a catalyst to get people aware that their data could be used in ways they didn’t expect,” said Peter Dolanjski, the product lead for Mozilla’s Firefox web browser, referring to the scandal in which a political consulting firm obtained data on millions of Facebook users and their friends.
[…]
Web browsers, being the primary way the vast majority of people experience the internet, are a crucial choke point in the digital ecosystem. While the browsers are free to users, the companies that operate them can have an outsized impact on how the internet works — especially if they gain a dominant market position. For a company like Google, which makes most of its money from online advertising, that has meant being able to liberally collect user data. For a nonprofit like Mozilla, more users means the chance to convince developers and other tech companies to adopt their privacy-focused standards.
[…]
Chrome, with more than 60 percent market share worldwide, is yet another source of complaints about Google’s power, after its search engine and advertisement businesses. Last year, Chrome changed the system for logging in to the browser, a move that one researcher said could allow Google to collect data much more easily.
Mozilla trails Google in corporate size and influence, but it is pressing other browsers on privacy and playing up its status as a nonprofit. Last month, Mozilla changed Firefox’s initial settings for new users so that third-party tracking “cookies,” such as those used for ad purposes, are blocked — meaning the default is no tracking.
[…]
A technology columnist at The Washington Post wrote in a scathing review last month that he was switching from Chrome to Firefox, calling Google’s product “a lot like surveillance software.” In a week of desktop web surfing, the columnist, Geoffrey Fowler, wrote that he discovered 11,189 requests for tracker cookies that were blocked by Firefox but would have been allowed by Chrome.
[…]
The browser fight has become heated enough to worry the advertising and media industries. Advertisers have become used to filling up websites with sometimes dozens of “cookies” and other forms of online tracking, and they fear a wider backlash against personalized, data-driven ads.
[…]
For now, there are few signs that Google’s browser dominance will end anytime soon, but the tech industry is riddled with examples of companies that appeared to be invincible just before their fall, including with web browsers.
Google and other tech companies have been under fire recently for a variety of issues, including failing to protect user data, failing to disclose how data is collected and used and failing to police the content posted to their services.
[…]
In May, I wrote up something weird I spotted on Google’s account management page. I noticed that Google uses Gmail to store a list of everything you’ve purchased, if you used Gmail or your Gmail address in any part of the transaction.
If you have a confirmation for a prescription you picked up at a pharmacy that went into your Gmail account, Google logs it. If you have a receipt from Macy’s, Google keeps it. If you bought food for delivery and the receipt went to your Gmail, Google stores that, too.
You get the idea, and you can see your own purchase history by going to Google’s Purchases page.
Google says it does this so you can use Google Assistant to track packages or reorder things, even though that’s not an option for some purchases that aren’t mailed or wouldn’t be reordered, like something you bought at a store.
At the time of my original story, Google said users can delete everything by tapping into a purchase and removing the associated Gmail message. It seemed to work if you did this for each purchase, one by one. This isn’t easy: for years’ worth of purchases, it would take hours or even days.
So, since Google doesn’t let you bulk-delete the purchases list, I decided to delete everything in my Gmail inbox. That meant removing every last message I’ve sent or received since I opened my Gmail account more than a decade ago.
Despite Google’s assurances, it didn’t work.
Like a horror movie villain that just won’t die
On Friday, three weeks after I deleted every Gmail, I checked my purchases list.
I still see receipts for things I bought years ago. Prescriptions, food deliveries, books I bought on Amazon, music I purchased from iTunes, a subscription to Xbox Live I bought from Microsoft — it’s all there.
A list of my purchases Google pulled in from Gmail. (Image: Todd Haselton | CNBC)
Google continues to show me purchases I’ve made recently, too.
More than 10 million users have been duped into installing a fake Samsung app named “Updates for Samsung” that promises firmware updates but, in reality, redirects users to an ad-filled website and charges for firmware downloads.
The app takes advantage of the difficulty in getting firmware and operating system updates for Samsung phones, hence the high number of users who have installed it.
“It would be wrong to judge people for mistakenly going to the official application store for the firmware updates after buying a new Android device,” said Aleksejs Kuprins, the CSIS Security Group malware analyst who discovered the app. “Vendors frequently bundle their Android OS builds with an intimidating number of software, and it can easily get confusing.”
“A user can feel a bit lost about the [system] update procedure. Hence can make a mistake of going to the official application store to look for system update.”
The “Updates for Samsung” app promises to solve this problem for non-technical users by providing a centralized location where Samsung phone owners can get their firmware and OS updates.
But according to Kuprins, this is a ruse. The app, which has no affiliation to Samsung, only loads the updato[.]com domain in a WebView (Android browser) component.
Rummaging through the app’s reviews, one can see hundreds of users complaining that the site is an ad-infested hellhole where most of them can’t find what they’re looking for — and that’s only when the app works and doesn’t crash.
The site does offer both free and paid (legitimate) Samsung firmware updates, but after digging through the app’s source code, Kuprins said the website limits the speed of free downloads to 56 KBps, and some free firmware downloads eventually end up timing out.
“During our tests, we too have observed that the downloads don’t finish, even when using a reliable network,” Kuprins said.
But by stalling all free downloads, the app pushes users to purchase a $34.99 premium package to be able to download any files.
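The 56 KBps cap alone makes a full firmware download impractical. A quick back-of-the-envelope estimate, assuming a hypothetical 4 GB firmware image (actual sizes vary by device; the size is my assumption, not the article’s):

```python
# Rough estimate of how long a throttled download would take,
# assuming a hypothetical 4 GB firmware image.
firmware_kb = 4 * 1024 * 1024   # 4 GB expressed in KB
speed_kbps = 56                 # the cap Kuprins reported, in KB per second

seconds = firmware_kb / speed_kbps
hours = seconds / 3600
print(f"{hours:.1f} hours")     # roughly 20.8 hours at a constant 56 KBps
```

At those speeds, almost any connection hiccup over a near-daylong transfer would explain the timeouts users reported.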
Almost a third (30%) of the world’s top virtual private network (VPN) providers are secretly owned by six Chinese companies, according to a study by privacy and security research firm VPNpro.
The study shows that the top 97 VPNs are run by just 23 parent companies, many of which are based in countries with lax privacy laws.
Six of these companies are based in China and collectively offer 29 VPN services, but in many cases, information on the parent company is hidden to consumers.
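The headline “almost a third” figure follows directly from the study’s two counts:

```python
# Quick check of the headline figure: 29 Chinese-owned services
# among the top 97 VPNs in the study.
chinese_owned = 29
top_vpns = 97

share = chinese_owned / top_vpns
print(f"{share:.1%}")  # 29.9%, i.e. "almost a third"
```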
Researchers at VPNpro have pieced together ownership information through company listings, geolocation data, the CVs of employees and other documentation.
In some instances, ownership of different VPNs is split amongst a number of subsidiaries. For example, Chinese company Innovative Connecting owns three separate businesses that produce VPN apps: Autumn Breeze 2018, Lemon Cove and All Connected. In total, Innovative Connecting produces 10 seemingly unconnected VPN products, the study shows.
Although the ownership of a number of VPN services by one company is not unusual, VPNpro is concerned that so many are based in countries with lax or non-existent privacy laws.
For example, seven of the top VPN services are owned by Gaditek, based in Pakistan. This means the Pakistani government can legally access any data without a warrant and data can also be freely handed over to foreign institutions, according to VPNpro.
The ability to access the data held by VPN providers, the researchers said, could enable governments or other organisations to identify users and their activity online. This potentially puts human rights activists, privacy advocates, investigative journalists and whistleblowers in jeopardy.
This lack of privacy, the study notes, extends to ordinary consumers, who are also coming under greater government surveillance.
“We’re not accusing any of these companies of doing anything underhand. However, we are concerned that so many VPN providers are not fully transparent about who owns them and where they are based,” said Laura Kornelija Inamedinova, research analyst at VPNpro.
Amazon wants the U.S. Federal Communications Commission (FCC) to give it the go-ahead to launch 3,236 satellites that would be used to establish a globe-spanning internet network. Seeking Alpha reported that Amazon expects “to offer service to tens of millions of underserved customers around the world” via the network, which the company is developing under the code-name Project Kuiper.
News of Project Kuiper broke in April, when Amazon uncharacteristically confirmed its work on the project to GeekWire. The company often declines to comment on reports concerning its plans; it seems the development of thousands of internet-providing satellites is the exception. At that point, however, the company had yet to seek FCC approval for the project. That filing is what Seeking Alpha reported today.
So what does this plan to offer space internet with a weird name actually involve? Amazon explained in April:
“Project Kuiper is a new initiative to launch a constellation of low Earth orbit satellites that will provide low-latency, high-speed broadband connectivity to unserved and underserved communities around the world. This is a long-term project that envisions serving tens of millions of people who lack basic access to broadband internet. We look forward to partnering on this initiative with companies that share this common vision.”
Expanding internet access has become something of an obsession among tech companies. Google offers fiber internet services as well as its own cellular network, Facebook scrapped plans to offer internet access via drones in June 2018, and Amazon isn’t the only company hoping to use low Earth orbit satellites to allow previously unconnected people to finally join the rest of the world online. It’s a bit of a trend.
Slack is one of many Silicon Valley unicorns going public this year, but it’s the only one that has admitted it is at risk for nation-state attacks. In the S-1 forms filed with the Securities and Exchange Commission, Uber, Lyft, Pinterest and Snapchat addressed threats that could lower the price of their stock — including malware, phishing, disgruntled employees and denial-of-service attacks — but only Slack explicitly highlighted “nation-states” as a potential threat.
According to Slack’s S-1 form, the company faces threats from “sophisticated organized crime, nation-state, and nation-state supported actors.” The company acknowledges that its security measures “may not be sufficient to protect Slack and our internal systems and networks against certain attacks,” and correctly assesses that it is “virtually impossible” for the company to completely eliminate the risk of a nation-state attack.
But it is possible for Slack to minimize that risk. Or it would be, if Slack gave all its users the ability to decide which information Slack should keep and which information it should delete.
Right now, Slack stores everything you do on its platform by default — your username and password, every message you’ve sent, every lunch you’ve planned and every confidential decision you’ve made. That data is not end-to-end encrypted, which means Slack can read it, law enforcement can request it, and hackers — including the nation-state actors highlighted in Slack’s S-1 — can break in and steal it.
Slack is widely marketed for and used in business settings, so the company’s servers hold a treasure trove of valuable, proprietary information. Slack’s paying enterprise customers do have a way to mitigate their security risk — they can change their settings to set shorter retention periods and automatically delete old messages — but it’s not just big companies that are at risk.
Slack’s users include community organizers, political organizations, journalists and unions. At the Electronic Frontier Foundation, where I work, we collaborate with activists, reporters and others on their digital privacy and security, and we’ve noticed these users increasingly gravitating toward Slack’s free product.
And that’s what makes the company’s warning to investors particularly alarming: Free customer accounts don’t allow for any changes to data retention. Instead, Slack retains all of your messages but makes only the most recent 10,000 visible to you. Everything beyond that 10,000-message limit remains on Slack’s servers. So while those messages might seem out of sight and out of mind, they are all still indefinitely available to Slack, law enforcement and third-party hackers.
There’s an interesting and troubling attack happening to some people involved in the OpenPGP community that makes their certificates unusable and can essentially break the OpenPGP implementation of anyone who tries to import one of the certificates.
The attack is quite simple and doesn’t exploit any technical vulnerabilities in the OpenPGP software, but instead takes advantage of one of the inherent properties of the keyserver network that’s used to distribute certificates. Keyservers are designed to allow people to discover the public certificates of other people with whom they want to communicate over a secure channel. One of the properties of the network is that anyone who has looked at a certificate and verified that it belongs to another specific person can add a signature, or attestation, to the certificate. That signature basically serves as the public stamp of approval from one user to another.
In general, people add signatures to someone’s certificate in order to give other users more confidence that the certificate is actually owned and controlled by the person who claims to own it. However, the OpenPGP specification doesn’t have any upper limit on the number of signatures that a certificate can have, so any user or group of users can add signatures to a given certificate ad infinitum. That wouldn’t necessarily be a problem, except for the fact that GnuPG, one of the more popular packages that implements the OpenPGP specification, doesn’t handle certificates with extremely large numbers of signatures very well. In fact, GnuPG will essentially stop working when it attempts to import one of those certificates.
Last week, two people involved in the OpenPGP community discovered that their public certificates had been spammed with tens of thousands of signatures (one has nearly 150,000) in an apparent effort to render them useless. The attack targeted Robert J. Hansen and Daniel Kahn Gillmor, but the root problem may end up affecting many other people, too.
“This attack exploited a defect in the OpenPGP protocol itself in order to ‘poison’ rjh and dkg’s OpenPGP certificates. Anyone who attempts to import a poisoned certificate into a vulnerable OpenPGP installation will very likely break their installation in hard-to-debug ways. Poisoned certificates are already on the SKS keyserver network. There is no reason to believe the attacker will stop at just poisoning two certificates. Further, given the ease of the attack and the highly publicized success of the attack, it is prudent to believe other certificates will soon be poisoned,” Hansen wrote in a post explaining the incident.
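The core of the problem, an append-only structure with no size cap, can be sketched in a few lines. This is a toy model, not GnuPG’s or SKS’s actual code; the names are invented, and only the 150,000 figure comes from the reported attack.

```python
# Toy model of an OpenPGP certificate on a keyserver network.
# Two properties combine to enable the flooding attack: anyone may
# attach a signature, and the spec imposes no upper bound on how many.

class ToyCertificate:
    def __init__(self, owner: str):
        self.owner = owner
        self.signatures: list[str] = []

    def attest(self, signer: str) -> None:
        # No authorization check and no cap: any party can append, forever.
        self.signatures.append(signer)

cert = ToyCertificate("victim@example.org")
for i in range(150_000):  # roughly the count seen on one poisoned cert
    cert.attest(f"flooder-{i}@example.org")

# An importer that must parse and check every attached signature now has
# 150,000 packets to chew through, which is what stalls vulnerable clients.
print(len(cert.signatures))
```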
A former patient at the University of Chicago Medical Center is suing UChicago, the medical center, and Google, accusing them of violating the privacy rights of patients at UChicago Medicine through the sharing of patient records containing identifiable information.
The class action lawsuit, filed by Matt Dinerstein in the Northern District of Illinois on Wednesday, claims that UChicago violated federal law protecting patient privacy in its partnership with Google to share records of patients from 2009 to 2016. It also claims that Google will be able to use the patient data to develop highly lucrative health-care technologies.
The suit charges that the University breached contracts between UChicago and its patients by allegedly falsely claiming to patients that it would be protecting their medical records. It also charges UChicago for violating an Illinois law dictating that companies cannot engage in deceptive practices with clients.
UChicago spokesperson Jeremy Manier said in a statement e-mailed to The Maroon, “The claims in this lawsuit are without merit. The University of Chicago Medical Center has complied with the laws and regulations applicable to patient privacy.”
“The Medical Center entered into a research partnership with Google as part of the Medical Center’s continuing efforts to improve the lives of its patients,” the statement continues. “That research partnership was appropriate and legal and the claims asserted in this case are baseless and a disservice to the Medical Center’s fundamental mission of improving the lives of its patients. The University and the Medical Center will vigorously defend this action in court.”
A Google spokesperson said in a statement e-mailed to The Maroon, “We believe our healthcare research could help save lives in the future, which is why we take privacy seriously and follow all relevant rules and regulations in our handling of health data.”
UChicago announced in 2017 that it would begin sharing electronic medical records with Google in a partnership to develop machine-learning techniques that could improve the quality of health services. At the time, UChicago said that Google would ensure that “patient data is kept private and secure,” and would be “strictly following HIPAA privacy rule.”
HIPAA, the Health Insurance Portability and Accountability Act, is a federal law mandating that shared patient information must be “de-identified”—stripped of any identifying information such as addresses and photos—to protect patients’ privacy.
The complaint accuses UChicago of making insufficient efforts to scrub patient-identifying data before handing over documents.
Though UChicago and Google claim to have de-identified patients, UChicago’s inclusion of timestamps indicating when patients checked in and out of the medical center makes the records identifiable and thereby violates HIPAA, the suit alleges. It cites an article published last year by Google and researchers from collaborating universities that says, “All EHRs [medical records] were de-identified, except that dates of service were maintained in the UCM [UChicago Medicine] dataset.”
Google’s potential capability to “re-identify” patients with its advanced data mining technologies indicates that “these records were not sufficiently anonymized and put the patients’ privacy at grave risk,” the complaint claims. It notes Google’s possession of geolocation information that can “pinpoint and match exactly when certain people entered and exited the University’s hospital.”
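The re-identification technique the complaint gestures at is essentially a join on timestamps. The sketch below is a hypothetical illustration, not UChicago’s actual data or Google’s actual methods: all records, users and times are invented.

```python
# Hypothetical illustration: records that keep exact service timestamps
# can be re-linked to individuals by joining them against any other
# timestamped data source, such as a location history.

ehr_records = [  # "de-identified": names removed, dates of service kept
    {"patient": "REDACTED", "checked_in": "2016-03-14 09:12"},
    {"patient": "REDACTED", "checked_in": "2016-03-14 11:47"},
]

location_log = [  # hypothetical geolocation data held by a third party
    {"user": "user-123", "arrived": "2016-03-14 09:12"},
    {"user": "user-456", "arrived": "2016-03-14 11:47"},
]

# A plain dictionary join on the timestamp is enough to undo the redaction.
arrivals = {entry["arrived"]: entry["user"] for entry in location_log}
relinked = [
    {**record, "reidentified_as": arrivals[record["checked_in"]]}
    for record in ehr_records
    if record["checked_in"] in arrivals
]
print([r["reidentified_as"] for r in relinked])  # ['user-123', 'user-456']
```

This is why HIPAA’s Safe Harbor de-identification standard treats specific dates as identifying information in their own right.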
UChicago is not the only university to share health records with Google; other universities with similar partnerships include Stanford University and the University of California, San Francisco, according to the article published by Google and collaborating researchers. Wednesday’s lawsuit rests on the fact that UChicago’s records, as obtained by Google, include the timestamps.
The suit also argues that Google’s acquisition of a British startup called DeepMind in 2014 has allowed Google to possess robust machine-learning technologies that would allow Google to connect medical records to Google users’ data.
DeepMind and Google obtained health records from the British Royal Free Hospital in 2015. The project was accused by a British watchdog organization of failing to comply with data protection law, the suit claims.
Last week, the LightSail 2 officially made its first contact with Earth. The solar-powered spacecraft will be sailing around Earth’s orbit for the next year, all part of a mission to prove that solar sailing is a viable mode of space exploration.
If successful, the hope is that solar sailing could be used in other spacecraft going forward, something that could allow us to explore further in space at a lower cost than is currently possible.
It’s a pretty cool idea and one that could ultimately have an impact on how we explore space in the future. And you can track it in real time from your computer whenever you want.
Now that the LightSail 2 is communicating with Earth, the folks from The Planetary Society that put the vessel in space are making some of its stats available through an online dashboard that’s free for anyone to look at.
Image: Planetary Society
With it, you can see things like how long the LightSail 2 has been on its mission, whether or not its sail is stowed, and what the internal temperature of the spacecraft is right now. You can also see where the vessel is right now and what path it’s expected to take, in case you want to try and snag a look as it passes overhead.
If you’re a space fan, it’s a pretty neat thing to check out, especially for that fly-by potential once the sail is deployed. And if that’s not enough, you can also track the LightSail 2’s progress in narrative form on The Planetary Society’s blog.
The current unrest concerns a proposed change to Hong Kong’s extradition laws that would allow island fugitives to be transferred to Taiwan, Macau, and mainland China. The proposal sparked mass outrage, as many Hongkongers saw it as little more than a new way for the People’s Republic of China to erode the legal sovereignty of Hong Kong.
[…]
So tens of thousands of Hongkongers took to the streets to protest what they saw as creeping tyranny from a powerful threat. But they did it in a very particular way.
In Hong Kong, most people use a contactless smart card called an “Octopus card” to pay for everything from transit and parking to retail purchases. It’s pretty handy: Just wave your tentacular card over the sensor and make your way to the platform.
But no one used their Octopus card to get around Hong Kong during the protests. The risk was that a government could view the central database of Octopus transactions to unmask these democratic ne’er-do-wells. Traveling downtown during the height of the protests? You could get put on a list, even if you just happened to be in the area.
So the savvy subversives turned to cash instead. Normally, the lines for the single-ticket machines that accept cash are populated only by a few confused tourists, while locals whiz through the turnstiles with their fintech wizardry.
What could protestors do in a cashless world? Maybe they would have to grit their teeth and hope for the best. But relying on the benevolence or incompetence of a motivated entity like China is not a great plan. Or perhaps public transit would be off-limits altogether. This could limit the protests to fit people within walking or biking distance, or people who have access to a private car—a rarity in expensive dense cities.
If some of our eggheads had their way, the protestors would have had no choice. A chorus of commentators calls for an end to cash, whether because it frustrates central bank schemes, fuels black and grey markets, or is simply inefficient. We have plenty of newfangled payment options, they say. Why should modern first-world economies hew to such primordial human institutions?
The answer is that there is simply no substitute for the privacy that cash, including digitized versions like cryptocurrencies, provides. Even if all of the alleged downsides that critics bemoan were true, cash would still be worth defending and celebrating for its core privacy-preserving functions. As Jerry Brito of Coin Center points out, cash protects our autonomy and indeed our human dignity.
Of course, Western offerings like Apple Pay and Venmo also maintain user databases that can be mined. Users may feel protected by the legal limits that countries like the United States place on what consumer data the government can extract from private business. But as research by Van Valkenburgh points out, US anti-money laundering laws afford less Fourth Amendment protection than you might expect. Besides, we still need to trust government and businesses to do the right thing. As the Edward Snowden revelations proved, this trust can be misplaced.
Hong Kong is about as first world as you can get. Yet even in such a developed economy, power’s jealous hold is but an ill-worded reform away. We should not allow today’s relative freedom to obscure the threat that a cashless world poses to our sovereignty. Not only can “it happen here,” for some of your fellow citizens, it might already have.
Spotted by the always excellent Windows Latest, Microsoft has told tens of millions of Windows 10 users that the latest KB4501375 update may break the platform’s Remote Access Connection Manager (RASMAN). And this can have serious repercussions.
The big one is VPNs. RASMAN handles how Windows 10 connects to the internet and it is a core background task for VPN services to function normally. Given the astonishing growth in VPN usage for everything from online privacy and important work tasks to unlocking Netflix and YouTube libraries, this has the potential to impact heavily on how you use your computer.
Interestingly, in detailing the issue Microsoft states that it only affects Windows 10 1903 – the latest version of the platform. The problem is Windows 10 1903 accounts for a conservative total of at least 50M users.
Why conservative? Because Microsoft states Windows 10 has been installed on 800M computers worldwide, but that figure is four months old. Meanwhile, the ever-reliable AdDuplex reports Windows 10 1903 accounted for 6.3% of all Windows 10 computers in June (50.4M), but that percentage was achieved in just over a month and its report is 10 days old. Microsoft has listed a complex workaround, but no timeframe has been announced for an actual fix.
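The 50.4M figure follows directly from the two numbers in the article; a quick sanity check (both inputs are the article’s figures, not mine):

```python
# Back-of-the-envelope check of the "at least 50M" estimate.
total_windows10 = 800_000_000   # Microsoft's installed-base figure (four months old)
share_1903 = 0.063              # AdDuplex's June estimate for Windows 10 1903

users_1903 = total_windows10 * share_1903
print(f"{users_1903 / 1e6:.1f}M")  # 50.4M
```

Since both inputs are stale and Windows 10 1903 adoption was still climbing, the true number of affected machines was likely higher.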
In the meantime, Microsoft is stepping up its attempts to push Windows 7 users to Windows 10. Those users must be looking at Windows 10 right now and thinking they will resist to the very end.
The remarkable ability of plants to respond to their environment has led some scientists to believe it’s a sign of conscious awareness. A new opinion paper argues against this position, saying plants “neither possess nor require consciousness.”
To explain these apparent behaviors, a subset of scientists known as plant neurobiologists has argued that plants possess a form of consciousness. Most notably, evolutionary ecologist Monica Gagliano has performed experiments that allegedly hint at capacities such as habituation (learning from experience) and classical conditioning (like Pavlov’s salivating dogs). In these experiments, plants apparently “learned” to stop curling their leaves after being dropped repeatedly or to spread their leaves in anticipation of a light source. Armed with this experimental evidence, Gagliano and others have claimed, quite controversially, that because plants can learn and exhibit other forms of intelligence, they must be conscious.
Nonsense, argues a new paper published today in Trends in Plant Science. The lead author of the new paper, biologist Lincoln Taiz from the University of California at Santa Cruz, isn’t denying plant intelligence, but makes a strong case against their being conscious.
Retail Industry Leaders Association (RILA), [is] a trade group representing the likes of Walmart, Target, Dollar General, Coca Cola and other world-swallowing corporations
[…]
RILA, as it turns out, is feeling just as freaked out by the dominance of a handful of tech giants as the rest of us, and in a letter today to the Federal Trade Commission—which, along with the Justice Department, has called dibs on potential antitrust investigations into tech firms including Amazon, Google, Facebook, and Apple—it fired its first shot in the ongoing war to break up Amazon, Google, and the rest.
While activists and, increasingly, politicians have taken up the cause of curbing the unimaginable power these companies have amassed and exerted with little oversight, this letter is tantamount to 200 of the biggest U.S. companies declaring open season on their ecommerce competitors. Importantly, RILA also represents a handful of ostensible Big Tech allies, with T-Mobile listed as a member, and Accenture and IBM executives sitting on RILA’s board.
The first major complaint RILA lodges is with search, which allows these companies—namely, Google and Amazon—to dictate what information buyers get before they even make a purchase (emphasis ours throughout):
While classical antitrust analysis assumes that customer behavior is driven by prices, the reality is that consumers can only make price-driven decisions if they have accurate, trustworthy, and timely access to information about prices […] It should thus be quite concerning to the Commission that Amazon and Google control the majority of all internet product search, and can very easily affect whether and how price information actually reaches consumers.
This isn’t a theoretical complaint either. Amazon already uses design flags like “Amazon’s choice” to differentiate certain products, many of which were found to be unreliable. Researchers from Harvard and the University of Oklahoma have also suggested that “Amazon is more likely to target successful product spaces” and “less likely to enter product spaces that require greater seller efforts to grow,” suggesting it uses data harvested via its role as a platform to inform its decisions as a seller of a growing number of private-label products.
Of course, it wouldn’t be an antitrust argument without some mention of data privacy, which is another RILA area of complaint:
[B]ecause nearly two-thirds of consumers search directly on Amazon when looking for a consumer product, it has a massive amount of data on consumer shopping needs and behaviors. According to its Privacy Notice, Amazon can and has shared consumer data with many unaffiliated companies, including the largest wireless carriers. Moreover, Amazon does not offer the consumer a choice to opt-out of this data sharing. As a result, consumers are asked to make tradeoffs that they could not anticipate or understand—provide their personal data to Amazon […] or not be allowed to shop on the most widely used platform in the world.
Lastly, RILA hits on something that, at least in the day-to-day reporting of growing anti-monopoly sentiment against tech platforms, tends to get lost: quality. RILA even earmarks this as an issue that is “frequently overlooked in favor of a focus on price.” Given that a huge swath of internet services are “free” or near-free (in exchange for your valuable data, an eyeful of ads, or both, of course), the antitrust argument that monopoly power online hurts consumers can be hard to prove in a monetary sense. Still, RILA argues:
It is worth observing how the quality of [Google, Facebook, and Amazon] have degraded as these companies shifted from fierce competitors to dominant monopolists. Google search used to be elegant and free from advertising […] Facebook co-founder Chris Hughes recently observed that Facebook’s initial innovations—including its “simple, beautiful interface”—were forged by the pressure of competition [but] has given way to advertising and interfaces that make it difficult for users to avoid content they do not wish to see.
Obviously, RILA’s place in this fight is self-serving: If anyone was hit hardest by ecommerce, it was traditional retail. Still, where reforming antitrust law for the digital age is concerned, RILA is largely right, even if it feels somewhat icky to be agreeing with Walmart about anything.
3 July 2019: The CMA has launched a market study into online platforms and the digital advertising market in the UK. We are assessing three broad potential sources of harm to consumers in connection with the market for digital advertising:
to what extent online platforms have market power in user-facing markets, and what impact this has on consumers
whether consumers are able and willing to control how data about them is used and collected by online platforms
whether competition in the digital advertising market may be distorted by any market power held by platforms
We are inviting comments by 30 July 2019 on the issues raised in the statement of scope, including from interested parties such as online platforms, advertisers, publishers, intermediaries within the ad tech stack, representative professional bodies, government and consumer groups.
Next time you use Amazon Alexa to message a friend or order a pizza, know that the record could be stored indefinitely, even if you ask to delete it.
In May, Delaware Senator Chris Coons sent Amazon CEO Jeff Bezos a letter asking why Amazon keeps transcripts of voices captured by Echo devices, citing privacy concerns over the practice. He was prompted by reports that Amazon stores the text.
“Unfortunately, recent reporting suggests that Amazon’s customers may not have as much control over their privacy as Amazon had indicated,” Coons wrote in the letter. “While I am encouraged that Amazon allows users to delete audio recordings linked to their accounts, I am very concerned by reports that suggest that text transcriptions of these audio records are preserved indefinitely on Amazon’s servers, and users are not given the option to delete these text transcripts.”
CNET first reported that Amazon’s vice president of public policy, Brian Huseman, responded to the senator on June 28, informing him that Amazon keeps the transcripts until users manually delete the information. The letter states that Amazon works “to ensure those transcripts do not remain in any of Alexa’s other storage systems.”
However, there are some Alexa-captured conversations that Amazon retains, regardless of customers’ requests to delete the recordings and transcripts, according to the letter.
As an example of records that Amazon may choose to keep despite deletion requests, Huseman mentioned instances when customers use Alexa to subscribe to Amazon’s music or delivery service, request a rideshare, order pizza, buy media, set alarms, schedule calendar events, or message friends. Huseman writes that it keeps these recordings because “customers would not want or expect deletion of the voice recording to delete the underlying data or prevent Alexa from performing the requested task.”
The letter says Amazon generally stores recordings and transcripts so users can understand what Alexa “thought it heard” and to train its machine learning systems to better understand the variations of speech “based on region, dialect, context, environment, and the individual speaker, including their age.” Such transcripts are not anonymized, according to the letter, though Huseman told Coons in his letter, “When a customer deletes a voice recording, we delete the transcripts associated with the customer’s account of both the customer’s request and Alexa’s response.”
Amazon declined to provide a comment to Gizmodo beyond what was included in Huseman’s letter.
In his public response to the letter, Coons expressed concern that it shed light on the ways Amazon is keeping some recordings.
“Amazon’s response leaves open the possibility that transcripts of user voice interactions with Alexa are not deleted from all of Amazon’s servers, even after a user has deleted a recording of his or her voice,” Coons said. “What’s more, the extent to which this data is shared with third parties, and how those third parties use and control that information, is still unclear.”
Facebook resolves day-long outages across Instagram, WhatsApp, and Messenger
Facebook had problems loading images, videos, and other data across its apps today, leaving some people unable to load photos in the Facebook News Feed, view stories on Instagram, or send messages in WhatsApp. Facebook said earlier today it was aware of the issues and was “working to get things back to normal as quickly as possible.” It blamed the outage on an error that was triggered during a “routine maintenance operation.”
As of 7:49PM ET, Facebook posted a message to its official Twitter account saying the “issue has since been resolved and we should be back at 100 percent for everyone. We’re sorry for any inconvenience.” Instagram similarly said its issues were more or less resolved, too.
Earlier today, some people and businesses experienced trouble uploading or sending images, videos and other files on our apps. The issue has since been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. — Facebook Business (@FBBusiness) July 3, 2019
We’re back! The issue has been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. pic.twitter.com/yKKtHfCYMA— Instagram (@instagram) July 3, 2019
The issues started around 8AM ET and began slowly clearing up after a couple hours, according to DownDetector, which monitors website and app issues. The errors aren’t affecting all images; many pictures on Facebook and Instagram still load, but others are appearing blank. DownDetector has also received reports of people being unable to load messages in Facebook Messenger.
The outage persisted through mid-day, with Facebook releasing a second statement, where it apologized “for any inconvenience.” Facebook’s platform status website still lists a “partial outage,” with a note saying that the company is “working on a fix that will go out shortly.”
Apps and websites are always going to experience occasional disruptions due to the complexity of the services they’re offering. But even when they’re brief, they can become a real problem due to the huge number of users many of these services have. A Facebook outage affects a suite of popular apps, and those apps collectively have billions of users who rely on them. That’s a big deal when those services have become critical for business and communications, and every hour they’re offline or acting strange can mean real inconveniences or lost money.
We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible. #facebookdown — Facebook (@facebook) July 3, 2019
The issue caused some images and features to break across all of Facebook’s apps
Well, folks, Facebook and its “family of apps” has experienced yet another crash. A nice respite moving into the long holiday weekend if you ask me.
Problems that appear to have started early Wednesday morning were still being reported as of the afternoon, with Instagram, Facebook, WhatsApp, Oculus, and Messenger all experiencing issues.
According to DownDetector, issues first started cropping up on Facebook at around 8am ET.
“We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible,” Facebook tweeted just after noon on Wednesday. A similar statement was shared from Instagram’s Twitter account.
Oculus, Facebook’s VR property, separately tweeted that it was experiencing “issues around downloading software.”
Facebook’s crash was still well underway as of 1pm ET on Wednesday, primarily affecting images. Where users typically saw uploaded images, such as their profile pictures or in their photo albums, they instead saw a string of terms describing Facebook’s interpretation of the image.
TechCrunch’s Zack Whittaker noted on Twitter that all of those image tags you may have seen were Facebook’s machine learning at work.
This week’s crash is just the latest in what has become a semi-regular occurrence of outages. The first occurred back in March in an incident that Facebook later blamed on “a server configuration change.” Facebook and its subsidiaries went down again about a month later, though the March incident was much worse, with millions of reports on DownDetector.
Two weeks ago, Instagram was bricked and experienced ongoing issues with refreshing feeds, loading profiles, and liking images. While the feed refresh issue was quickly patched, it was hours before the company confirmed that Instagram had been fully restored.
We’ve reached out to Facebook for more information about the issues and will update this post if we hear back.
Code crash? Russian hackers? Nope. Good ol’ broken fiber cables borked Google Cloud’s networking today
Fiber-optic cables linking Google Cloud servers in its us-east1 region physically broke today, slowing down or effectively cutting off connectivity with the outside world.
For at least the past nine hours, and counting, netizens and applications have struggled to connect to systems and services hosted in the region, located on America’s East Coast. Developers and system admins have been forced to migrate workloads to other regions, or redirect traffic, in order to keep apps and websites ticking over amid mitigations deployed by the Silicon Valley giant.
By 0900 PDT, Google revealed the extent of the blunder: its cloud platform had “lost multiple independent fiber links within us-east1 zone.” The fiber provider, we’re told, “has been notified and are currently investigating the issue. In order to restore service, we have reduced our network usage and prioritised customer workloads.”
By that, we understand, Google means it redirected traffic destined for its Google.com services hosted in the data center region to other locations, allowing the remaining connectivity to carry customer packets.
By midday, Pacific Time, Google updated its status pages to note: “Mitigation work is currently underway by our engineering team to address the issue with Google Cloud Networking and Load Balancing in us-east1. The rate of errors is decreasing, however some users may still notice elevated latency.”
However, at time of writing, the physically damaged cabling is not yet fully repaired, and us-east1 networking is thus still knackered. In fact, repairs could take as much as 24 hours to complete.
The latest update, posted 1600 PDT, reads as follows:
The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours.
In the meantime, we are electively rerouting traffic to ensure that customers’ services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period.
Customers using Google Cloud’s Load Balancing service will automatically fail over to other regions, if configured, minimizing impact on their workloads, it is claimed. They can also migrate to, say, us-east4, though they may have to rejig their code and scripts to reference the new region.
The Register asked Google for more details about the damaged fiber, such as how it happened. A spokesperson told us exactly what was already on the aforequoted status pages.
Meanwhile, a Google Cloud subscriber wrote a little ditty about the outage to the tune of Pink Floyd’s Another Brick in the Wall. It starts: “We don’t need no cloud computing…” ®
This major Cloudflare internet routing blunder took A WEEK to fix. Why so long? It was IPv6 – and no one really noticed
Last week, an internet routing screw-up propagated by Verizon for three hours sparked havoc online, leading to significant press attention and industry calls for greater network security.
A few weeks before that, another packet routing blunder, this time pushed by China Telecom, lasted two hours, caused significant disruption in Europe and prompted some to wonder whether Beijing’s spies were abusing the internet’s trust-based structure to carry out surveillance.
In both cases, internet engineers were shocked at how long it took to fix traffic routing errors that normally only last minutes or even seconds. Well, that was nothing compared to what happened this week.
Cloudflare’s director of network engineering Jerome Fleury has revealed that the routing for a big block of IP addresses was wrongly announced for an ENTIRE WEEK and, just as amazingly, the company that caused it didn’t notice until the major blunder was pointed out by another engineer at Cloudflare. (This cock-up is completely separate to today’s Cloudflare outage.)
How is it even possible for network routes to remain completely wrong for several days? Because, folks, it was on IPv6.
“So Airtel AS9498 announced the entire IPv6 block 2400::/12 for a week and no-one notices until Tom Strickx finds out and they confirm it was a typo of /127,” Fleury tweeted over the weekend, complete with a graphic showing the massive routing error.
That /12 represents 83 decillion IP addresses, or four quadrillion /64 networks. The /127 would be 2. Just 2 IP addresses. Slight difference. And while this demonstrates the expansiveness of IPv6’s address space, and perhaps even its robustness seeing as nothing seems to have actually broken during the routing screw-up, it also hints at just how sparse IPv6 is right now.
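Python’s standard `ipaddress` module makes the scale of that typo easy to verify (a quick sketch using the two prefixes discussed above):

```python
import ipaddress

fat_block = ipaddress.ip_network("2400::/12")    # what Airtel announced
intended = ipaddress.ip_network("2400::/127")    # what it meant to announce

print(fat_block.num_addresses)  # 2**116, roughly 8.3e34 ("83 decillion")
print(intended.num_addresses)   # 2

# /64 networks inside a /12: one per combination of the remaining
# 64 - 12 = 52 prefix bits.
print(2 ** (64 - 12))           # 4,503,599,627,370,496 (~four quadrillion)
```

A single dropped “7” in the prefix length, in other words, inflated the announcement by a factor of about 4 × 10³⁴.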
To be fair to Airtel, it often takes someone else to notice a network route error – typically caused by simple typos like failing to add a “7” – because the organization that messes up the tables tends not to see or feel the impact directly.
But if ever there was a symbol of how miserably the transition from IPv4 to IPv6 is going, it’s in the fact that a fat IPv6 routing error went completely unnoticed for a week while an IPv4 error will usually result in phone calls, emails, and outcry on social media within minutes.
And sure, IPv4 space is much, much more dense than IPv6 so obviously people will spot errors much faster. But no one at all noticed the advertisement of a /12 for days? That may not bode well for the future, even though, yes, this particular /127 typo had no direct impact.
I got 502 problems, and Cloudflare sure is one: Outage interrupts your El Reg-reading pleasure for almost half an hour
Updated Cloudflare, the outfit noted for the slogan “helping build a better internet”, had another wobble today as “network performance issues” rendered websites around the globe inaccessible.
The US tech biz updated its status page at 1352 UTC to indicate that it was aware of issues, but things began tottering quite a bit earlier. Since Cloudflare handles services used by a good portion of the world’s websites, such as El Reg, including content delivery, DNS and DDoS protection, when it sneezes, a chunk of the internet has to go and have a bit of a lie down. That means netizens were unable to access many top sites globally.
A stumble last week was attributed to the antics of Verizon by CTO John Graham-Cumming. As for today’s shenanigans? We contacted the company, but they’ve yet to give us an explanation.
While Cloudflare implemented a fix by 1415 UTC and declared things resolved by 1457 UTC, a good portion of internet users noticed things had gone very south for many, many sites.
The company’s CEO took to Twitter to proffer an explanation for why things had fallen over, fingering a colossal spike in CPU usage as the cause while gently nudging the more wild conspiracy theories away from the whole DDoS thing.
However, the outage was a salutary reminder of the fragility of the internet, as even Firefox fans found their beloved browser unable to resolve URLs.
Ever keen to share in the ups and downs of life, Cloudflare’s own site also reported the dread 502 error.
As with the last incident, users who endured the less-than-an-hour of disconnection would do well to remember that the internet is a brittle thing. And Cloudflare would do well to remember that its customers will be pondering if maybe they depend on its services just a little too much.
Updated to add at 1702 BST
Following publication of this article, Cloudflare released a blog post stating the “CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels.”
Naturally, it then added…
“We are incredibly sorry that this incident occurred. Internal teams are meeting as I write performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again.” ®
Cloudflare gave everyone a 30-minute break from a chunk of the internet yesterday: Here’s how they did it
Internet services outfit Cloudflare took careful aim and unloaded both barrels at its feet yesterday, taking out a large chunk of the internet as it did so.
In an impressive act of openness, the company posted a distressingly detailed post-mortem on the cockwomblery that led to the outage. The Register also spoke to a weary John Graham-Cumming, CTO of the embattled company, to understand how it all went down.
This time it wasn’t Verizon wot dunnit; Cloudflare engineered this outage all by itself.
In a nutshell, what happened was that Cloudflare deployed some rules to its Web Application Firewall (WAF). The gang deploys these rules to servers in a test mode – the rule gets fired but doesn’t take any action – in order to measure what happens when real customer traffic runs through it.
We’d contend that an isolated test environment into which one could direct traffic would make sense, but Graham-Cumming told us: “We do this stuff all the time. We have a sequence of ways in which we deploy stuff. In this case, it didn’t happen.”
In a frank admission that should send all DevOps enthusiasts scurrying to look at their pipelines, Graham-Cumming told us: “We’re really working on understanding how the automated test suite which runs internally didn’t pick up the fact that this was going to blow up our service.”
The CTO elaborated: “We push something out, it gets approved by a human, and then it goes through a testing procedure, and then it gets pushed out to the world. And somehow in that testing procedure, we didn’t spot that this was going to blow things up.”
“And that didn’t happen in this instance. This should have been caught easily.”
Alas, two things went wrong. Firstly, one of the rules (designed to block nefarious inline JavaScript) contained a regular expression that would send CPU usage sky high. Secondly, the new rules were accidentally deployed globally in one go.
The result? “One of these rules caused the CPU spike to 100 per cent, on all of our machines.” And because Cloudflare’s products are distributed over all its servers, every service was starved of CPU while the offending regular expression did its thing.
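Cloudflare’s actual rule isn’t reproduced here, but the failure mode (a regular expression whose nested quantifiers trigger catastrophic backtracking) is easy to demonstrate. A minimal sketch using a classic textbook pattern, not the real WAF rule:

```python
import re
import time

# Hypothetical pattern, NOT Cloudflare's actual rule: the nested '+'
# quantifiers force the engine to try roughly 2**n ways of splitting a
# run of n 'a's before it can conclude the match fails.
PATTERN = re.compile(r"(a+)+$")

def failing_match_time(n):
    """Time how long the engine takes to *fail* on n 'a's plus a 'b'."""
    text = "a" * n + "b"
    start = time.perf_counter()
    result = PATTERN.match(text)  # always None; the cost is the backtracking
    return result, time.perf_counter() - start

r_small, t_small = failing_match_time(10)  # ~2**10 attempts: instant
r_big, t_big = failing_match_time(20)      # ~2**20 attempts: visibly slower
```

Each extra character roughly doubles the work, so a modestly longer input can pin a CPU core at 100 per cent, which is why one bad rule, deployed everywhere at once, starved every service of CPU.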
In order to create what it calls “the world’s lightest gaming mouse,” the engineers at peripheral maker Glorious PC Gaming Race took a mouse and put holes all in it. The result is the Model O, a very good gaming mouse that weighs only 67 grams and may trigger trypophobia.
“You’ll barely feel the holes,” reads the copy on the Model O’s product page, answering the question I imagine most people have when looking at the honeycombed plastic shell. I’ve used the ultra-light accessory for a couple weeks now, and the product page is correct. It feels slightly bumpy under the palm.
Only when I look directly at the Model O do I feel mildly disturbed by the pattern of holes covering the top and its underside. The effect is less jarring when the RGB lighting is cycling. While I’m actively using the mouse, my giant hands cover it completely. Glorious PC Gaming Race says the holes allow for better airflow, keeping hands cool, but my massive paws negate that benefit. I worry about dirt getting in the holes, but that’s nothing I can’t avoid by not being a total slob. Perhaps it’s time.
The Model O slides over my mouse pad effortlessly thanks to its ridiculously low weight and the rounded plastic feet, which Glorious PC Gaming Race calls “G-Skates.” I particularly enjoy the mouse’s cable, a proprietary braided affair that feels like a normal thin wire wrapped in a shoelace. It doesn’t tangle, which is an issue with many mice and one of the main reasons I prefer a stationary trackball.
Beneath the unique design and proprietary bits, the Model O is a very nice six-button gaming mouse. It’s got a Pixart sensor that can be adjusted as sensitive as 12,000 DPI (dots per inch), with more sensible presets of 400, 800, 1,600, and 3,200 cyclable via a button on the bottom of the unit (software is required to go higher). It’s fast and responsive.
Glorious PC Gaming Race Model O Specs
Sensor: Pixart PMW-3360 Sensor
Switch Type (Main): Omron Mechanical Rated For 20 Million Clicks
Number of Buttons: 6
Max Tracking Speed: 250+ IPS
Weight: 67 grams (Matte) and 68 grams (Glossy)
Acceleration: 50G
Max DPI: 12,000
Polling Rate: 1000 Hz (1 ms)
Lift off Distance: ~0.7mm
Price: $50 Matte, $60 Glossy.
Note that the Model O comes in four styles: black or white matte finish and black or white glossy. The glossy versions cost $10 more than the $50 matte versions and weigh 68 grams instead of 67. In other words, the glossy versions are not the “world’s lightest gaming mouse” and should be exiled.
The Glorious PC Gaming Race Model O is the lightest gaming mouse I’ve used. I’m not sure I’m the type of hardcore mouse user that would benefit from the reduced weight. In fact, many of the gaming mice I’ve evaluated over the past several years have come packaged with weights to make them heavier. If you prefer a more lightweight pointing device and don’t mind all the holes, the Model O could be for you. And if not, you can probably fill it with clay or something to weigh it down.
YouTube, under fire since inception for building a business on other people’s copyrights and in recent years for its vacillating policies on irredeemable content, recently decided it no longer wants to host instructional hacking videos.
The written policy first appears in the Internet Wayback Machine’s archive of web history in an April 5, 2019 snapshot. It forbids: “Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”
Lack of clarity about the permissibility of cybersecurity-related content has been an issue for years. In years past, hacking videos could be removed if enough viewers submitted reports objecting to them or if moderators found the videos violated other articulated policies.
Now that there’s a written rule, there’s renewed concern about how the policy is being applied.
Kody Kinzie, a security researcher and educator who posts hacking videos to YouTube’s Null Byte channel, on Tuesday said a video created for the US July 4th holiday to demonstrate launching fireworks over Wi-Fi couldn’t be uploaded because of the rule.
“I’m worried for everyone that teaches about infosec and tries to fill in the gaps for people who are learning,” he said via Twitter. “It is hard, often boring, and expensive to learn cybersecurity.”
In an email to The Register, Kinzie clarified that YouTube had problems with three previous videos, which got flagged and are either in the process of review or have already been appealed and restored. They involved Wi-Fi hacking. One of the Wi-Fi hacking videos got a strike on Tuesday and that disabled uploading for the account, preventing the fireworks video from going up.
The Register asked Google’s YouTube for comment but we’ve not heard back.
Security professionals find the policy questionable. “Very simply, hacking is not a derogatory term and shouldn’t be used in a policy about what content is acceptable,” said Tim Erlin, VP of product management and strategy at cybersecurity biz Tripwire, in an email to The Register.
“Google’s intention here might be laudable, but the result is likely to stifle valuable information sharing in the information security community.”
Spotify has changed the way artists can upload music, now prohibiting individual musicians from putting their songs on the streaming service directly.
The new move requires a third party to be involved in the business of uploads.
The company announced the change on Monday, saying it will close the beta program and stop accepting direct uploads by the end of July.
“The most impactful way we can improve the experience of delivering music to Spotify for as many artists and labels as possible is to lean into the great work our distribution partners are already doing to serve the artist community,” Spotify said in a statement on its blog. “Over the past year, we’ve vastly improved our work with distribution partners to ensure metadata quality, protect artists from infringement, provide their users with instant access to Spotify for Artists, and more.”
“The best way for us to serve artists and labels is to focus our resources on developing tools in areas where Spotify can uniquely benefit them — like Spotify for Artists (which more than 300,000 creators use to gain new insight into their audience) and our playlist submission tool (which more than 36,000 artists have used to get playlisted for the very first time since it launched a year ago). We have a lot more planned here in the coming months,” the post continued.
The direct upload function began last September, allowing independent artists to utilize the streaming site without distribution methods.
Smaller artists will now need to return to sites like Bandcamp, SoundCloud and others to upload their material.
Many people, especially artists, were upset about the decision. You can see what they had to say on Twitter below.
spotify discontinuing their direct upload beta while removing any song uploaded through it shows again how spotify does not give a single fuck about artists
for me the biggest takeaway from Spotify closing its direct upload beta is that the company isn’t actually as globally influential as it thought, with respect to convincing artists that uploading *only* to Spotify was anywhere near enough to sustain their careers + satisfy fans.
@Spotify sucks. Y’all making artist go through third party sites to upload their music and pay on top of that. As if the third party sites aren’t going to charge as well
Spotify turning around and leaving distributors to do their job, by pulling the plug on their beta upload tool is music to my ears, but we saw it coming
Pre-saving an upcoming release from your favorite artists on Spotify could be causing you to share more personal data than you realize.
A recent report from Billboard revealed that Spotify users who pre-save a track grant the band's label data-use permissions far broader than those Spotify typically requires.
When a user pre-saves a track, it adds it to the user’s library the moment it comes out. In order to do this, Spotify users have to click through and approve certain permissions.
These permissions give the label more access to your account than Spotify normally grants: the label can track your listening habits, change which artists you follow and potentially control your streaming remotely.
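Under the hood, a pre-save campaign is a standard Spotify OAuth authorization flow; the breadth of access comes from the scopes the label's app requests. As a minimal sketch (the client ID and redirect URI below are hypothetical, and which scopes a given campaign actually requests will vary), here is how an authorization URL with unusually broad scopes might be built:

```python
from urllib.parse import urlencode

# Scopes a pre-save campaign might request. These scope names exist in the
# Spotify Web API; whether a particular campaign asks for all of them is an
# assumption for illustration.
BROAD_SCOPES = [
    "user-library-modify",         # save the track to your library on release
    "user-read-recently-played",   # track your listening habits
    "user-follow-modify",          # change which artists you follow
    "user-modify-playback-state",  # control playback remotely
]

def authorize_url(client_id: str, redirect_uri: str, scopes: list) -> str:
    """Build the Spotify OAuth authorization URL the user is sent to."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return "https://accounts.spotify.com/authorize?" + urlencode(params)

# A simple "save this track" flow would only need user-library-modify;
# everything beyond that is extra access the user is handing over.
url = authorize_url("example-client-id", "https://label.example/callback", BROAD_SCOPES)
```

The consent screen does list these scopes, but most users click through without reading which permissions they are granting.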
The Dutch data protection authority has reprimanded ING Bank over plans to use payment data for advertising, and has told other banks to examine their direct-marketing policies. ING recently changed its privacy statement to say that the bank will use payment data for direct-marketing offers; as an example, it cited being able to make specific product offers after child-support payments come in. Many ING customers noticed the change and angrily emailed and called the authority.
This is the second time ING has tried this: in 2014 it made a similar attempt, and at that time also planned to share the payment data with third parties.
In the meantime, the Dutch government is trying to find a way to prohibit cash payments of over EUR 3,000 and, insidiously, in the same law to allow banks and the government to share client banking data more easily.
In new research published Tuesday and shared with TechCrunch, security researchers Dardaman and Wheeler found three security flaws which, when chained together, could be abused to open a front door fitted with a smart lock.
Smart home technology has come under increasing scrutiny in the past year. Although convenient to some, security experts have long warned that adding an internet connection to a device increases the attack surface, making the devices less secure than their traditional counterparts. The smart home hubs that control a home’s smart devices, like water meters and even the front door lock, can be abused to allow landlords entry to a tenant’s home whenever they like.
[…]
The researchers found they could extract the hub's private SSH key for "root" (the user account with the highest level of access) from the memory card on the device. Anyone with the private key could access a device without needing a password, said Wheeler.
They later discovered that the private SSH key was hardcoded in every hub sold to customers — putting at risk every home with the same hub installed.
Using that private key, the researchers downloaded a file from the device containing scrambled passwords used to access the hub. They found that the smart hub uses a “pass-the-hash” authentication system, which doesn’t require knowing the user’s plaintext password, only the scrambled version. By taking the scrambled password and passing it to the smart hub, the researchers could trick the device into thinking they were the homeowner.
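The core of a pass-the-hash flaw is that the server compares the stored scrambled password directly against whatever the client submits, so a leaked hash is as good as the plaintext password. A minimal sketch of the pattern (the hash scheme and credential names here are assumptions, not details from the report):

```python
import hashlib

# Hypothetical stored credential file: username -> hash of password.
# The hub stored scrambled passwords in a file the researchers could
# download using the extracted SSH key.
stored_hashes = {"homeowner": hashlib.md5(b"hunter2").hexdigest()}

def vulnerable_login(username: str, submitted_hash: str) -> bool:
    """Pass-the-hash flaw: the device compares hashes directly, so an
    attacker never needs to crack or know the plaintext password."""
    return stored_hashes.get(username) == submitted_hash

# An attacker who exfiltrated the hash file simply replays the hash:
leaked_hash = stored_hashes["homeowner"]
granted = vulnerable_login("homeowner", leaked_hash)  # access granted
```

A sound design would instead require the client to prove knowledge of the password itself, for example via a salted challenge-response, so that a stolen hash file cannot be replayed.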
Superhuman is one of the most talked-about new apps in Silicon Valley. Why? The product, a $30-per-month email app for power users chasing greater productivity, is a good alternative to many popular but stale email apps; nearly everyone who has used it says so. Even better is the company's publicity strategy: the service is invite-only, and posting on social media is the quickest way to get in the door. So it gets some local buzz, a $33 million investment, bigger blog write-ups and then, last month, a New York Times article to top it all off.
After a peak, a roller coaster hits a downward slope.
Superhuman was criticized sharply on Tuesday when a blog post by Mike Davidson, previously the VP of design at Twitter, spread widely across social media. The post goes into detail about how one of Superhuman's powerful features was actually just a run-of-the-mill privacy-violating tracking pixel, with no option to turn it off and no notification for the recipient on the other end. If you use Superhuman, you can see when someone opened your email, how many times they did it, what device they were using and what location they're in.
Here’s Davidson:
It is disappointing then that one of the most hyped new email clients, Superhuman, has decided to embed hidden tracking pixels inside of the emails its customers send out. Superhuman calls this feature “Read Receipts” and turns it on by default for its customers, without the consent of its recipients.
Tracking pixels are not new. If you get an email newsletter, for instance, it’s probably got a tracking pixel feeding this kind of data back to advertisers, senders, and a whole host of other trackers interested in collecting everything they can about you.
Let me put it this way: I send an email to your mother. She opens it. Now I know a ton of information about her, including her whereabouts, without her ever being informed of or consenting to this tracking. What does this kind of behavior mean for nosy advertisers? What about abusive spouses? A stalker? Pushy salespeople? Intrusive co-workers and bosses?
They’ve identified a feature that provides value to some of their customers (i.e. seeing if someone has opened your email yet) and they’ve trampled the privacy of every single person they send email to in order to achieve that. Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out. Just as troublingly, Superhuman teaches its user to surveil by default. I imagine many users sign up for this, see the feature, and say to themselves “Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.”
Tracking emails is a tried-and-true tactic used by a ton of companies. That doesn’t make it ethical or irreversible. There has been plenty of criticism of the strategy — and there is a technical workaround that we’ll talk about momentarily — but since the tech has been, until now, mainly visible to businesses, the conversation has paled in comparison to some of the other big privacy issues arising in recent years.
Superhuman is a consumer app. It's targeted at power users, yes, but the potential audience is big and the buzz is real. Combined with growing public distaste for privacy violations committed in the name of building a more powerful app, that meant Twitter was awash this week, and especially on Tuesday, with criticism of Superhuman: why does it need to take so much information without an option to decline or a notification?
We emailed Superhuman but did not get a response.
A tracking pixel works by embedding a small and hidden image in an email. The image is able to report back information including when the email is opened and where the reader is located. It’s hidden for a reason: The spy is not trying to ask permission.
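The mechanism is simple enough to sketch in a few lines. Assuming a hypothetical tracking domain and parameter names (none of this is Superhuman's actual implementation), a sender embeds something like the following in the HTML body of an email:

```python
import uuid

def tracking_pixel(base_url: str, recipient: str) -> str:
    """Return an invisible 1x1 <img> tag with a per-recipient token.
    When the mail client fetches the image, the sender's server can log
    the request time (when the email was opened), the IP address (rough
    location) and the User-Agent header (device)."""
    token = uuid.uuid4().hex  # unique per email, so each open is attributable
    return (f'<img src="{base_url}/open?r={recipient}&t={token}" '
            'width="1" height="1" style="display:none" alt="">')

# The pixel rides along invisibly inside an otherwise ordinary message:
html_body = "<p>Quick question about the proposal.</p>" + \
    tracking_pixel("https://track.example.com", "alice")
```

Every time the recipient's mail client loads that image, the request itself is the "read receipt"; no consent step is involved anywhere.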
If you’re willing to put in a little work, you can spot who among your contacts is using Superhuman by following these instructions.
The workaround is to disable images by default in email. The setting varies across email apps but is typically found somewhere in the preferences.
Apps like Gmail have tried for years to scrub tracking pixels. Marketers and other users sending these tracking tools out have been battling, sometimes successfully, to continue to track Gmail’s billion users without their permission.
In that case, disabling images by default is the only sure-fire way to go. When you do allow images in an email, know that you may be instantly giving up a small fortune of information to the sender — and whoever they’re working with — without even realizing it.