The remarkable ability of plants to respond to their environment has led some scientists to believe it’s a sign of conscious awareness. A new opinion paper argues against this position, saying plants “neither possess nor require consciousness.”
To explain these apparent behaviors, a subset of scientists known as plant neurobiologists has argued that plants possess a form of consciousness. Most notably, evolutionary ecologist Monica Gagliano has performed experiments that allegedly hint at capacities such as habituation (learning from experience) and classical conditioning (like Pavlov’s salivating dogs). In these experiments, plants apparently “learned” to stop curling their leaves after being dropped repeatedly or to spread their leaves in anticipation of a light source. Armed with this experimental evidence, Gagliano and others have claimed, quite controversially, that because plants can learn and exhibit other forms of intelligence, they must be conscious.
Nonsense, argues a new paper published today in Trends in Plant Science. The lead author of the new paper, biologist Lincoln Taiz from the University of California at Santa Cruz, isn’t denying plant intelligence, but makes a strong case against their being conscious.
The Retail Industry Leaders Association (RILA) is a trade group representing the likes of Walmart, Target, Dollar General, Coca-Cola, and other world-swallowing corporations
[…]
RILA, as it turns out, is feeling just as freaked out by the dominance of a handful of tech giants as the rest of us, and in a letter today to the Federal Trade Commission—which, along with the Justice Department, has called dibs on potential antitrust investigations into tech firms including Amazon, Google, Facebook, and Apple—it fired its first shot in the ongoing war to break up Amazon, Google, and the rest.
While activists and, increasingly, politicians have taken up the cause of curbing the unimaginable power these companies have amassed and exerted with little oversight, this letter is tantamount to 200 of the biggest U.S. companies declaring open season on their ecommerce competitors. Importantly, RILA also represents a handful of ostensible Big Tech allies, with T-Mobile listed as a member, and Accenture and IBM executives sitting on RILA’s board.
The first major complaint RILA lodges is with search, which allows these companies—namely, Google and Amazon—to dictate what information buyers get before they even make a purchase (emphasis ours throughout):
While classical antitrust analysis assumes that customer behavior is driven by prices, the reality is that consumers can only make price-driven decisions if they have accurate, trustworthy, and timely access to information about prices […] It should thus be quite concerning to the Commission that Amazon and Google control the majority of all internet product search, and can very easily affect whether and how price information actually reaches consumers.
This isn’t a theoretical complaint either. Amazon already uses design flags like “Amazon’s Choice” to differentiate certain products, many of which were found to be unreliable. Researchers from Harvard and the University of Oklahoma have also suggested that “Amazon is more likely to target successful product spaces” and “less likely to enter product spaces that require greater seller efforts to grow,” suggesting it uses data harvested via its role as a platform to inform its decisions as a seller of a growing number of private-label products.
Of course, it wouldn’t be an antitrust argument without some mention of data privacy, which is another RILA area of complaint:
[B]ecause nearly two-thirds of consumers search directly on Amazon when looking for a consumer product, it has a massive amount of data on consumer shopping needs and behaviors. According to its Privacy Notice, Amazon can and has shared consumer data with many unaffiliated companies, including the largest wireless carriers. Moreover, Amazon does not offer the consumer a choice to opt-out of this data sharing. As a result, consumers are asked to make tradeoffs that they could not anticipate or understand—provide their personal data to Amazon […] or not be allowed to shop on the most widely used platform in the world.
Lastly, RILA hits on something that, at least in the day-to-day reporting of growing anti-monopoly sentiment against tech platforms, tends to get lost: quality. RILA even earmarks this as an issue that is “frequently overlooked in favor of a focus on price.” Given that a huge swath of internet services are “free” or near-free (in exchange for your valuable data, an eyeful of ads, or both, of course), the antitrust argument that monopoly power online hurts consumers can be hard to prove in a monetary sense. Still, RILA argues:
It is worth observing how the quality of [Google, Facebook, and Amazon] have degraded as these companies shifted from fierce competitors to dominant monopolists. Google search used to be elegant and free from advertising […] Facebook co-founder Chris Hughes recently observed that Facebook’s initial innovations—including its “simple, beautiful interface”—were forged by the pressure of competition [but] has given way to advertising and interfaces that make it difficult for users to avoid content they do not wish to see.
Obviously, RILA’s place in this fight is self-serving: If anyone was hit hardest by ecommerce, it was traditional retail. Still, where reforming antitrust law for the digital age is concerned, RILA is largely right, even if it feels somewhat icky to be agreeing with Walmart about anything.
3 July 2019: The CMA has launched a market study into online platforms and the digital advertising market in the UK. We are assessing three broad potential sources of harm to consumers in connection with the market for digital advertising:
to what extent online platforms have market power in user-facing markets, and what impact this has on consumers
whether consumers are able and willing to control how data about them is used and collected by online platforms
whether competition in the digital advertising market may be distorted by any market power held by platforms
We are inviting comments by 30 July 2019 on the issues raised in the statement of scope, including from interested parties such as online platforms, advertisers, publishers, intermediaries within the ad tech stack, representative professional bodies, government and consumer groups.
Next time you use Amazon Alexa to message a friend or order a pizza, know that the record could be stored indefinitely, even if you ask to delete it.
In May, Delaware Senator Chris Coons sent Amazon CEO Jeff Bezos a letter asking why Amazon keeps transcripts of voices captured by Echo devices, citing privacy concerns over the practice. He was prompted by reports that Amazon stores these text transcripts indefinitely.
“Unfortunately, recent reporting suggests that Amazon’s customers may not have as much control over their privacy as Amazon had indicated,” Coons wrote in the letter. “While I am encouraged that Amazon allows users to delete audio recordings linked to their accounts, I am very concerned by reports that suggest that text transcriptions of these audio records are preserved indefinitely on Amazon’s servers, and users are not given the option to delete these text transcripts.”
CNET first reported that Amazon’s vice president of public policy, Brian Huseman, responded to the senator on June 28, informing him that Amazon keeps the transcripts until users manually delete the information. The letter states that Amazon works “to ensure those transcripts do not remain in any of Alexa’s other storage systems.”
However, there are some Alexa-captured conversations that Amazon retains, regardless of customers’ requests to delete the recordings and transcripts, according to the letter.
As an example of records that Amazon may choose to keep despite deletion requests, Huseman mentioned instances when customers use Alexa to subscribe to Amazon’s music or delivery service, request a rideshare, order pizza, buy media, set alarms, schedule calendar events, or message friends. Huseman writes that it keeps these recordings because “customers would not want or expect deletion of the voice recording to delete the underlying data or prevent Alexa from performing the requested task.”
The letter says Amazon generally stores recordings and transcripts so users can understand what Alexa “thought it heard” and to train its machine learning systems to better understand the variations of speech “based on region, dialect, context, environment, and the individual speaker, including their age.” Such transcripts are not anonymized, according to the letter, though Huseman told Coons in his letter, “When a customer deletes a voice recording, we delete the transcripts associated with the customer’s account of both the customer’s request and Alexa’s response.”
Amazon declined to provide a comment to Gizmodo beyond what was included in Huseman’s letter.
In his public response to the letter, Coons expressed concern that it shed light on the ways Amazon is keeping some recordings.
“Amazon’s response leaves open the possibility that transcripts of user voice interactions with Alexa are not deleted from all of Amazon’s servers, even after a user has deleted a recording of his or her voice,” Coons said. “What’s more, the extent to which this data is shared with third parties, and how those third parties use and control that information, is still unclear.”
Facebook resolves day-long outages across Instagram, WhatsApp, and Messenger
Facebook had problems loading images, videos, and other data across its apps today, leaving some people unable to load photos in the Facebook News Feed, view stories on Instagram, or send messages in WhatsApp. Facebook said earlier today it was aware of the issues and was “working to get things back to normal as quickly as possible.” It blamed the outage on an error that was triggered during a “routine maintenance operation.”
As of 7:49PM ET, Facebook posted a message to its official Twitter account saying the “issue has since been resolved and we should be back at 100 percent for everyone. We’re sorry for any inconvenience.” Instagram similarly said its issues were more or less resolved.
Earlier today, some people and businesses experienced trouble uploading or sending images, videos and other files on our apps. The issue has since been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. — Facebook Business (@FBBusiness) July 3, 2019
We’re back! The issue has been resolved and we should be back at 100% for everyone. We’re sorry for any inconvenience. pic.twitter.com/yKKtHfCYMA — Instagram (@instagram) July 3, 2019
The issues started around 8AM ET and began slowly clearing up after a couple of hours, according to DownDetector, which monitors website and app issues. The errors didn’t affect all images; many pictures on Facebook and Instagram still loaded, while others appeared blank. DownDetector also received reports of people being unable to load messages in Facebook Messenger.
The outage persisted through midday, with Facebook releasing a second statement in which it apologized “for any inconvenience.” Facebook’s platform status website still listed a “partial outage,” with a note saying that the company was “working on a fix that will go out shortly.”
Apps and websites are always going to experience occasional disruptions due to the complexity of the services they’re offering. But even when they’re brief, they can become a real problem due to the huge number of users many of these services have. A Facebook outage affects a suite of popular apps, and those apps collectively have billions of users who rely on them. That’s a big deal when those services have become critical for business and communications, and every hour they’re offline or acting strange can mean real inconveniences or lost money.
We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible. #facebookdown — Facebook (@facebook) July 3, 2019
The issue caused some images and features to break across all of Facebook’s apps
Well, folks, Facebook and its “family of apps” have experienced yet another crash. A nice respite moving into the long holiday weekend, if you ask me.
Problems that appear to have started early Wednesday morning were still being reported as of the afternoon, with Instagram, Facebook, WhatsApp, Oculus, and Messenger all experiencing issues. According to DownDetector, issues first started cropping up on Facebook at around 8am ET.
“We’re aware that some people are having trouble uploading or sending images, videos and other files on our apps. We’re sorry for the trouble and are working to get things back to normal as quickly as possible,” Facebook tweeted just after noon on Wednesday. A similar statement was shared from Instagram’s Twitter account.
Oculus, Facebook’s VR property, separately tweeted that it was experiencing “issues around downloading software.”
Facebook’s crash was still well underway as of 1pm ET on Wednesday, primarily affecting images. Where users typically saw uploaded images, such as their profile pictures or photo albums, they instead saw a string of terms describing Facebook’s interpretation of the image.
TechCrunch’s Zack Whittaker noted on Twitter that those image tags were Facebook’s machine learning at work.
This week’s crash is just the latest in what has become a semi-regular occurrence of outages. The first occurred back in March, in an incident that Facebook later blamed on “a server configuration change.” Facebook and its subsidiaries went down again about a month later, though the March incident was much worse, drawing millions of reports on DownDetector.
Two weeks ago, Instagram was bricked and experienced ongoing issues with refreshing feeds, loading profiles, and liking images. While the feed refresh issue was quickly patched, it was hours before the company confirmed that Instagram had been fully restored.
We’ve reached out to Facebook for more information about the issues and will update this post if we hear back.
Code crash? Russian hackers? Nope. Good ol’ broken fiber cables borked Google Cloud’s networking today
Fiber-optic cables linking Google Cloud servers in its us-east1 region physically broke today, slowing down or effectively cutting off connectivity with the outside world.
For at least the past nine hours, and counting, netizens and applications have struggled to connect to systems and services hosted in the region, located on America’s East Coast. Developers and system admins have been forced to migrate workloads to other regions, or redirect traffic, in order to keep apps and websites ticking over amid mitigations deployed by the Silicon Valley giant.
By 0900 PDT, Google revealed the extent of the blunder: its cloud platform had “lost multiple independent fiber links within us-east1 zone.” The fiber provider, we’re told, “has been notified and are currently investigating the issue. In order to restore service, we have reduced our network usage and prioritised customer workloads.”
By that, we understand, Google means it redirected traffic destined for its Google.com services hosted in the data center region to other locations, allowing the remaining connectivity to carry customer packets.
By midday, Pacific Time, Google updated its status pages to note: “Mitigation work is currently underway by our engineering team to address the issue with Google Cloud Networking and Load Balancing in us-east1. The rate of errors is decreasing, however some users may still notice elevated latency.”
However, at time of writing, the physically damaged cabling is not yet fully repaired, and us-east1 networking is thus still knackered. In fact, repairs could take as much as 24 hours to complete.
The latest update, posted at 1600 PDT, reads as follows:
The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours.
In the meantime, we are electively rerouting traffic to ensure that customers’ services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period.
Customers using Google Cloud’s Load Balancing service will automatically fail over to other regions, if configured, minimizing impact on their workloads, it is claimed. They can also migrate to, say, us-east4, though they may have to rejig their code and scripts to reference the new region.
The Register asked Google for more details about the damaged fiber, such as how it happened. A spokesperson told us exactly what was already on the aforequoted status pages.
Meanwhile, a Google Cloud subscriber wrote a little ditty about the outage to the tune of Pink Floyd’s Another Brick in the Wall. It starts: “We don’t need no cloud computing…” ®
This major Cloudflare internet routing blunder took A WEEK to fix. Why so long? It was IPv6 – and no one really noticed
Last week, an internet routing screw-up propagated by Verizon for three hours sparked havoc online, leading to significant press attention and industry calls for greater network security.
A few weeks before that, another packet routing blunder, this time pushed by China Telecom, lasted two hours, caused significant disruption in Europe, and prompted some to wonder whether Beijing’s spies were abusing the internet’s trust-based structure to carry out surveillance.
In both cases, internet engineers were shocked at how long it took to fix traffic routing errors that normally only last minutes or even seconds. Well, that was nothing compared to what happened this week.
Cloudflare’s director of network engineering Jerome Fleury has revealed that the routing for a big block of IP addresses was wrongly announced for an ENTIRE WEEK and, just as amazingly, the company that caused it didn’t notice until the major blunder was pointed out by another engineer at Cloudflare. (This cock-up is completely separate to today’s Cloudflare outage.)
How is it even possible for network routes to remain completely wrong for several days? Because, folks, it was on IPv6.
“So Airtel AS9498 announced the entire IPv6 block 2400::/12 for a week and no-one notices until Tom Strickx finds out and they confirm it was a typo of /127,” Fleury tweeted over the weekend, complete with a graphic showing the massive routing error.
That /12 represents 83 decillion IP addresses, or four quadrillion /64 networks. The /127 would be 2. Just 2 IP addresses. Slight difference. And while this demonstrates the expansiveness of IPv6’s address space, and perhaps even its robustness, seeing as nothing seems to have actually broken during the routing screw-up, it also hints at just how sparse IPv6 is right now.
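For anyone who wants to check those figures, the arithmetic falls straight out of IPv6’s 128-bit address length. A quick back-of-the-envelope sketch in Python:

```python
# IPv6 addresses are 128 bits, so a /N prefix leaves 128 - N host bits.
announced = 12   # what Airtel actually announced: 2400::/12
intended = 127   # what it meant to announce: a /127

print(f"/{announced}  -> {2 ** (128 - announced):.1e} addresses")    # ~8.3e34, i.e. 83 decillion
print(f"/{announced}  -> {2 ** (64 - announced):.1e} /64 networks")  # ~4.5e15, i.e. ~4 quadrillion
print(f"/{intended} -> {2 ** (128 - intended)} addresses")           # exactly 2
```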
To be fair to Airtel, it often takes someone else to notice a network route error – typically caused by simple typos like failing to add a “7” – because the organization that messes up the tables tends not to see or feel the impact directly.
But if ever there was a symbol of how miserably the transition from IPv4 to IPv6 is going, it’s in the fact that a fat IPv6 routing error went completely unnoticed for a week, while an IPv4 error will usually result in phone calls, emails, and outcry on social media within minutes.
And sure, IPv4 space is much, much denser than IPv6, so obviously people will spot errors much faster. But no one at all noticed the advertisement of a /12 for days? That may not bode well for the future, even though, yes, this particular /127 typo had no direct impact.
I got 502 problems, and Cloudflare sure is one: Outage interrupts your El Reg-reading pleasure for almost half an hour
Updated Cloudflare, the outfit noted for the slogan “helping build a better internet”, had another wobble today as “network performance issues” rendered websites around the globe inaccessible.
The US tech biz updated its status page at 1352 UTC to indicate that it was aware of issues, but things began tottering quite a bit earlier. Since Cloudflare handles services (including content delivery, DNS, and DDoS protection) for a good portion of the world’s websites, such as El Reg, when it sneezes, a chunk of the internet has to go and have a bit of a lie down. That means netizens were unable to access many top sites globally.
A stumble last week was attributed by CTO John Graham-Cumming to the antics of Verizon. As for today’s shenanigans? We contacted the company, but they’ve yet to give us an explanation.
While Cloudflare implemented a fix by 1415 UTC and declared things resolved by 1457 UTC, a good portion of internet users noticed things had gone very south for many, many sites.
The company’s CEO took to Twitter to proffer an explanation for why things had fallen over, fingering a colossal spike in CPU usage as the cause while gently nudging the wilder conspiracy theories away from the whole DDoS thing.
However, the outage was a salutary reminder of the fragility of the internet, as even Firefox fans found their beloved browser unable to resolve URLs.
Ever keen to share in the ups and downs of life, Cloudflare’s own site also reported the dread 502 error.
As with the last incident, users who endured the less-than-an-hour disconnection would do well to remember that the internet is a brittle thing. And Cloudflare would do well to remember that its customers will be pondering whether they depend on its services just a little too much.
Updated to add at 1702 BST
Following publication of this article, Cloudflare released a blog post stating the “CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels.”
Naturally, it then added:
“We are incredibly sorry that this incident occurred. Internal teams are meeting as I write performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again.” ®
Cloudflare gave everyone a 30-minute break from a chunk of the internet yesterday: Here’s how they did it
Internet services outfit Cloudflare took careful aim and unloaded both barrels at its feet yesterday, taking out a large chunk of the internet as it did so.
In an impressive act of openness, the company posted a distressingly detailed post-mortem on the cockwomblery that led to the outage. The Register also spoke to a weary John Graham-Cumming, CTO of the embattled company, to understand how it all went down.
This time it wasn’t Verizon wot dunnit; Cloudflare engineered this outage all by itself.
In a nutshell, what happened was that Cloudflare deployed some rules to its Web Application Firewall (WAF). The gang deploys these rules to servers in a test mode – the rule gets fired but doesn’t take any action – in order to measure what happens when real customer traffic runs through it.
We’d contend that an isolated test environment into which one could direct traffic would make sense, but Graham-Cumming told us: “We do this stuff all the time. We have a sequence of ways in which we deploy stuff. In this case, it didn’t happen.”
In a frank admission that should send all DevOps enthusiasts scurrying to look at their pipelines, Graham-Cumming told us: “We’re really working on understanding how the automated test suite which runs internally didn’t pick up the fact that this was going to blow up our service.”
The CTO elaborated: “We push something out, it gets approved by a human, and then it goes through a testing procedure, and then it gets pushed out to the world. And somehow in that testing procedure, we didn’t spot that this was going to blow things up.”
“And that didn’t happen in this instance. This should have been caught easily.”
Alas, two things went wrong. Firstly, one of the rules (designed to block nefarious inline JavaScript) contained a regular expression that would send CPU usage sky high. Secondly, the new rules were accidentally deployed globally in one go.
The result? “One of these rules caused the CPU spike to 100 per cent, on all of our machines.” And because Cloudflare’s products are distributed over all its servers, every service was starved of CPU while the offending regular expression did its thing.
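The article doesn’t spell out how a single regular expression can eat every CPU cycle, but the classic mechanism is catastrophic backtracking. The deliberately pathological pattern below (an illustration of the failure mode, not Cloudflare’s actual rule) shows the effect in miniature: nested quantifiers force a backtracking engine to try exponentially many ways of splitting the input before it concedes a failed match.

```python
import re
import time

# Nested quantifiers: the engine explores exponentially many decompositions
# of the 'a' run before concluding the match fails.
pathological = re.compile(r'^(a+)+$')

for n in (20, 22, 24):
    subject = 'a' * n + 'b'   # the trailing 'b' guarantees the match fails
    start = time.perf_counter()
    pathological.match(subject)
    elapsed = time.perf_counter() - start
    print(f'n={n}: {elapsed:.2f}s')  # runtime roughly doubles with each extra 'a'
```

Regex engines with linear-time guarantees, such as RE2, rule out this failure mode by construction, at the cost of dropping features like backreferences.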
In order to create what it calls “the world’s lightest gaming mouse,” the engineers at peripheral maker Glorious PC Gaming Race took a mouse and put holes all in it. The result is the Model O, a very good gaming mouse that weighs only 67 grams and may trigger trypophobia.
“You’ll barely feel the holes,” reads the copy on the Model O’s product page, answering the question I imagine most people have when looking at the honeycombed plastic shell. I’ve used the ultra-light accessory for a couple weeks now, and the product page is correct. It feels slightly bumpy under the palm.
Only when I look directly at the Model O do I feel mildly disturbed by the pattern of holes covering the top and its underside. The effect is less jarring when the RGB lighting is cycling. While I’m actively using the mouse, my giant hands cover it completely. Glorious PC Gaming Race says the holes allow for better airflow, keeping hands cool, but my massive paws negate that benefit. I worry about dirt getting in the holes, but that’s nothing I can’t avoid by not being a total slob. Perhaps it’s time.
The Model O slides over my mouse pad effortlessly thanks to its ridiculously low weight and the rounded plastic feet, which Glorious PC Gaming Race calls “G-Skates.” I particularly enjoy the mouse’s cable, a proprietary braided affair that feels like a normal thin wire wrapped in a shoelace. It doesn’t tangle, which is an issue with many mice and one of the main reasons I prefer a stationary trackball.
Beneath the unique design and proprietary bits, the Model O is a very nice six-button gaming mouse. It’s got a Pixart sensor that can be adjusted up to 12,000 DPI (dots per inch), with more sensible presets of 400, 800, 1,600, and 3,200 cyclable via a button on the bottom of the unit (software is required to go higher). It’s fast and responsive.
Glorious PC Gaming Race Model O Specs
Sensor: Pixart PMW-3360
Switch Type (Main): Omron mechanical, rated for 20 million clicks
Number of Buttons: 6
Max Tracking Speed: 250+ IPS
Weight: 67 grams (matte), 68 grams (glossy)
Acceleration: 50G
Max DPI: 12,000
Polling Rate: 1000 Hz (1 ms)
Lift-off Distance: ~0.7 mm
Price: $50 (matte), $60 (glossy)
Note that the Model O comes in four styles: black or white matte finish and black or white glossy. The glossy versions cost $10 more than the $50 matte versions and weigh 68 grams instead of 67. In other words, the glossy versions are not the “world’s lightest gaming mouse” and should be exiled.
The Glorious PC Gaming Race Model O is the lightest gaming mouse I’ve used. I’m not sure I’m the type of hardcore mouse user that would benefit from the reduced weight. In fact, many of the gaming mice I’ve evaluated over the past several years have come packaged with weights to make them heavier. If you prefer a more lightweight pointing device and don’t mind all the holes, the Model O could be for you. And if not, you can probably fill it with clay or something to weigh it down.
YouTube, under fire since inception for building a business on other people’s copyrights and in recent years for its vacillating policies on irredeemable content, recently decided it no longer wants to host instructional hacking videos.
The written policy first appears in the Internet Archive’s Wayback Machine in an April 5, 2019 snapshot. It forbids: “Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”
Lack of clarity about the permissibility of cybersecurity-related content has been an issue for years. In years past, hacking videos could be removed if enough viewers submitted reports objecting to them or if moderators found the videos violated other articulated policies.
Now that there’s a written rule, there’s renewed concern about how the policy is being applied.
Kody Kinzie, a security researcher and educator who posts hacking videos to YouTube’s Null Byte channel, on Tuesday said a video created for the US July 4th holiday to demonstrate launching fireworks over Wi-Fi couldn’t be uploaded because of the rule.
“I’m worried for everyone that teaches about infosec and tries to fill in the gaps for people who are learning,” he said via Twitter. “It is hard, often boring, and expensive to learn cybersecurity.”
In an email to The Register, Kinzie clarified that YouTube had problems with three previous videos, which got flagged and are either in the process of review or have already been appealed and restored. They involved Wi-Fi hacking. One of the Wi-Fi hacking videos got a strike on Tuesday and that disabled uploading for the account, preventing the fireworks video from going up.
The Register asked Google’s YouTube for comment but we’ve not heard back.
Security professionals find the policy questionable. “Very simply, hacking is not a derogatory term and shouldn’t be used in a policy about what content is acceptable,” said Tim Erlin, VP of product management and strategy at cybersecurity biz Tripwire, in an email to The Register.
“Google’s intention here might be laudable, but the result is likely to stifle valuable information sharing in the information security community.”
Spotify has changed the way artists can upload music, now prohibiting individual musicians from putting their songs on the streaming service directly.
The new move requires a third party to be involved in the business of uploads.
The company announced the change on Monday, saying it will close the beta program and stop accepting direct uploads by the end of July.
“The most impactful way we can improve the experience of delivering music to Spotify for as many artists and labels as possible is to lean into the great work our distribution partners are already doing to serve the artist community,” Spotify said in a statement on its blog. “Over the past year, we’ve vastly improved our work with distribution partners to ensure metadata quality, protect artists from infringement, provide their users with instant access to Spotify for Artists, and more.”
“The best way for us to serve artists and labels is to focus our resources on developing tools in areas where Spotify can uniquely benefit them — like Spotify for Artists (which more than 300,000 creators use to gain new insight into their audience) and our playlist submission tool (which more than 36,000 artists have used to get playlisted for the very first time since it launched a year ago). We have a lot more planned here in the coming months,” the post continued.
The direct upload function began last September, allowing independent artists to put music on the streaming site without going through a distributor.
Smaller artists will now need to return to sites like Bandcamp, SoundCloud and others to upload their material.
Many people, especially artists, were upset about the decision. You can see what they had to say on Twitter below.
spotify discontinuing their direct upload beta while removing any song uploaded through it shows again how spotify does not give a single fuck about artists
for me the biggest takeaway from Spotify closing its direct upload beta is that the company isn’t actually as globally influential as it thought, with respect to convincing artists that uploading *only* to Spotify was anywhere near enough to sustain their careers + satisfy fans.
@Spotify sucks. Y’all making artist go through third party sites to upload their music and pay on top of that. As if the third party sites aren’t going to charge as well
Spotify turning around and leaving distributors to do their job, by pulling the plug on their beta upload tool is music to my ears, but we saw it coming
Pre-saving an upcoming release from your favorite artists on Spotify could be causing you to share more personal data than you realize.
In a recent report from Billboard, it was revealed that Spotify users pre-saving a track were giving the band’s label data-use permissions much broader than those Spotify typically requests.
When a user pre-saves a track, it adds it to the user’s library the moment it comes out. In order to do this, Spotify users have to click through and approve certain permissions.
These permissions give the label more access to your account than Spotify normally allows: the label can track your listening habits, change which artists you follow, and potentially control your streaming remotely.
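Mechanically, a pre-save is just an OAuth authorization: the label’s campaign page sends you to Spotify’s authorize endpoint with a list of scopes attached. The sketch below shows what such a request could look like; the scope names are real Spotify Web API scopes, but the client ID and redirect URI are hypothetical placeholders, not any particular label’s campaign.

```python
from urllib.parse import urlencode

# Hypothetical values for a label's pre-save campaign app.
CLIENT_ID = 'LABEL_APP_CLIENT_ID'
REDIRECT_URI = 'https://presave.example.com/callback'

# Real Spotify Web API scope names. Only the first is needed to save a
# track; the others grant the kinds of ongoing access described above.
SCOPES = [
    'user-library-modify',         # add the track to your library on release day
    'user-follow-modify',          # change which artists you follow
    'user-read-playback-state',    # see what you are listening to
    'user-modify-playback-state',  # start, stop, or skip your playback remotely
]

params = urlencode({
    'client_id': CLIENT_ID,
    'response_type': 'code',
    'redirect_uri': REDIRECT_URI,
    'scope': ' '.join(SCOPES),
})
print('https://accounts.spotify.com/authorize?' + params)
```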
The Dutch Data Protection Authority has reprimanded ING Bank over plans to use payment data for advertising, and has told other banks to examine their direct-marketing policies. ING recently changed its privacy statement to say the bank will use payment data for direct-marketing offers; as an example, it cited being able to make specific product offers after child-support payments come in. Many ING customers caught this and angrily emailed and called the authority.
This is the second time ING has tried this: it made a similar attempt in 2014, but back then it also wanted to share the payment data with third parties.
In the meantime, the Dutch government is trying to find a way to prohibit cash payments of over EUR 3,000 and, insidiously, in the same law to allow banks and the government to share client banking data more easily.
In new research published Tuesday and shared with TechCrunch, security researchers Dardaman and Wheeler found three security flaws which, when chained together, could be abused to open a front door fitted with a smart lock.
Smart home technology has come under increasing scrutiny in the past year. Although convenient to some, security experts have long warned that adding an internet connection to a device increases the attack surface, making the devices less secure than their traditional counterparts. The smart home hubs that control a home’s smart devices, like water meters and even the front door lock, can be abused to allow landlords entry to a tenant’s home whenever they like.
[…]
The researchers found they could extract the hub’s private SSH key for “root” — the user account with the highest level of access — from the memory card on the device. Anyone with the private key could access a device without needing a password, said Wheeler.
They later discovered that the private SSH key was hardcoded in every hub sold to customers — putting at risk every home with the same hub installed.
Using that private key, the researchers downloaded a file from the device containing scrambled passwords used to access the hub. They found that the smart hub uses a “pass-the-hash” authentication system, which doesn’t require knowing the user’s plaintext password, only the scrambled version. By taking the scrambled password and passing it to the smart hub, the researchers could trick the device into thinking they were the homeowner.
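To see why “pass-the-hash” is a design flaw rather than just a leaked secret, consider this minimal sketch (my own illustration of the scheme as described, not the hub’s actual code). If the server accepts the stored scrambled value directly, that value effectively is the password, and anyone who can read the password file can authenticate without ever knowing the plaintext:

```python
import hashlib

# Server side: the "scrambled" password as it sits in the leaked file.
stored_hash = hashlib.sha256(b'homeowner-plaintext-password').hexdigest()

def unlock_door(client_supplied: str) -> bool:
    # Flawed scheme: the client sends a hash and the server compares it
    # against storage. Possession of the hash alone is enough to get in.
    return client_supplied == stored_hash

# Attacker: replay the hash pulled off the device's memory card.
print(unlock_door(stored_hash))  # True -- the "front door" opens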
Superhuman is one of the most talked-about new apps in Silicon Valley. Why? The product — a $30 per month email app for power users hoping for greater productivity — is a good alternative to many popular and stale email apps; nearly everyone who has used it says so. Even better is the company’s publicity strategy: the service is invite-only, and posting on social media is the quickest way to get in the door. So it gets some local buzz, a $33 million investment, bigger blog write-ups, and then a New York Times article to top it all off last month.
After a peak, a roller coaster hits a downward slope.
Superhuman was criticized sharply on Tuesday when a blog post by Mike Davidson, previously the VP of design at Twitter, spread widely across social media. The post goes into detail about how one of Superhuman’s powerful features was actually just a run-of-the-mill privacy-violating tracking pixel, without an option to turn it off or any notification for the recipient on the other end. If you use Superhuman, you’ll be able to see when someone opened your email, how many times they did it, what device they were using, and what location they’re in.
Here’s Davidson:
It is disappointing then that one of the most hyped new email clients, Superhuman, has decided to embed hidden tracking pixels inside of the emails its customers send out. Superhuman calls this feature “Read Receipts” and turns it on by default for its customers, without the consent of its recipients.
Tracking pixels are not new. If you get an email newsletter, for instance, it’s probably got a tracking pixel feeding this kind of data back to advertisers, senders, and a whole host of other trackers interested in collecting everything they can about you.
Let me put it this way: I send an email to your mother. She opens it. Now I know a ton of information about her, including her whereabouts, without her ever being informed of or consenting to this tracking. What does this kind of behavior mean for nosy advertisers? What about abusive spouses? A stalker? Pushy salespeople? Intrusive co-workers and bosses?
They’ve identified a feature that provides value to some of their customers (i.e. seeing if someone has opened your email yet) and they’ve trampled the privacy of every single person they send email to in order to achieve that. Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out. Just as troublingly, Superhuman teaches its user to surveil by default. I imagine many users sign up for this, see the feature, and say to themselves “Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.”
Tracking emails is a tried-and-true tactic used by a ton of companies. That doesn’t make it ethical or irreversible. There has been plenty of criticism of the strategy — and there is a technical workaround that we’ll talk about momentarily — but since the tech has been, until now, mainly visible to businesses, the conversation has paled in comparison to some of the other big privacy issues arising in recent years.
Superhuman is a consumer app. It’s targeted at power users, yes, but the potential audience is big and the buzz is real. Combined with the increasing public distaste for privacy violations in the name of building a more powerful app, Twitter has been awash with criticism of Superhuman this week, especially on Tuesday: Why does it need to take so much information without an option or notification?
We emailed Superhuman but did not get a response.
A tracking pixel works by embedding a small and hidden image in an email. The image is able to report back information including when the email is opened and where the reader is located. It’s hidden for a reason: The spy is not trying to ask permission.
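For the curious, here is roughly what the sender’s side of that arrangement amounts to: a unique, invisible image URL per recipient, plus a server that logs whoever fetches it. This is a generic sketch of the technique (hostnames and paths are made up; Superhuman’s actual implementation isn’t public):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF: the classic tracking-pixel payload.
PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff'
         b'!\xf9\x04\x01\x00\x00\x00\x00'
         b',\x00\x00\x00\x00\x01\x00\x01\x00\x00'
         b'\x02\x02D\x01\x00;')

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The path encodes which email/recipient this is, e.g. /open/msg-42.gif.
        # The request itself reveals when the email was opened, the reader's IP
        # (hence rough location), and their mail client via the User-Agent.
        print(f'opened: {self.path} ip={self.client_address[0]} '
              f'ua={self.headers.get("User-Agent")}')
        self.send_response(200)
        self.send_header('Content-Type', 'image/gif')
        self.send_header('Content-Length', str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# The email HTML would then embed something like:
# <img src="https://track.example.com/open/msg-42.gif" width="1" height="1">
HTTPServer(('', 8080), PixelHandler).serve_forever()
```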
If you’re willing to put in a little work, you can spot who among your contacts is using Superhuman by following these instructions.
The workaround is to disable images by default in email. The method varies between email apps, but the option is typically somewhere in the settings.
Apps like Gmail have tried for years to scrub tracking pixels. Marketers and other users sending these tracking tools out have been battling, sometimes successfully, to continue to track Gmail’s billion users without their permission.
In that case, disabling images by default is the only sure-fire way to go. When you do allow images in an email, know that you may be instantly giving up a small fortune of information to the sender — and whoever they’re working with — without even realizing it.