They say they applaud a recent “needed reckoning” on racial justice, but argue that it has also fuelled a climate of intolerance that stifles open debate.
The letter denounces “a vogue for public shaming and ostracism” and “a blinding moral certainty”.
Several signatories have been attacked for comments that caused offence.
“The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted,” says the letter.
US intellectual Noam Chomsky, eminent feminist Gloria Steinem, Russian chess grandmaster Garry Kasparov and author Malcolm Gladwell also put their names to the letter, which was published on Tuesday in Harper’s Magazine.
The appearance of Harry Potter author J.K. Rowling’s name among the signatories comes after she recently found herself under attack online for comments that offended transgender people.
Her fellow British writer, Martin Amis, also signed the letter.
It also says: “We uphold the value of robust and even caustic counter-speech from all quarters.
“But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought.”
The letter condemns “disproportionate punishments” meted out by institutional leaders conducting “panicked damage control”.
It continues: “Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes.”
It was signed by New York Times op-ed contributors David Brooks and Bari Weiss. The newspaper’s editorial page editor was recently removed amid uproar after publishing an opinion piece by Republican Senator Tom Cotton.
“We are already paying the price in greater risk aversion among writers, artists, and journalists who fear for their livelihoods if they depart from the consensus, or even lack sufficient zeal in agreement,” the letter says.
It adds: “We need to preserve the possibility of good-faith disagreement without dire professional consequences.”
One signatory – Matthew Yglesias, co-founder of liberal news analysis website Vox – was rebuked by a colleague on Tuesday for putting his name to the letter.
Vox critic at large Emily VanDerWerff, a trans woman, tweeted that she had written a letter to the publication’s editors to say that Yglesias signing the letter “makes me feel less safe at Vox”.
But VanDerWerff said she did not want Yglesias to be fired or apologise because it would only convince him he was being “martyred”.
One signatory recanted within hours of the letter being published.
Jennifer Finney Boylan, a US author and transgender activist, tweeted: “I did not know who else had signed that letter.
“I thought I was endorsing a well-meaning, if vague, message against internet shaming.”
This is part of a weaponisation of taking offence, in which people on a moralistic high horse feel that declaring themselves offended by something allows them to transgress the bounds of normal behaviour.
A ringing bell vibrates simultaneously at a low-pitched fundamental tone and at many higher-pitched overtones, producing a pleasant musical sound. A recent study, just published in the Journal of the Atmospheric Sciences by scientists at Kyoto University and the University of Hawai’i at Mānoa, shows that the Earth’s entire atmosphere vibrates in an analogous manner, in a striking confirmation of theories developed by physicists over the last two centuries.
In the case of the atmosphere, the “music” comes not as a sound we could hear, but in the form of large-scale waves of atmospheric pressure spanning the globe and traveling around the equator, some moving east-to-west and others west-to-east. Each of these waves is a resonant vibration of the global atmosphere, analogous to one of the resonant pitches of a bell. The basic understanding of these atmospheric resonances began with seminal insights at the beginning of the 19th century by one of history’s greatest scientists, the French physicist and mathematician Pierre-Simon Laplace. Research by physicists over the subsequent two centuries refined the theory and led to detailed predictions of the wave frequencies that should be present in the atmosphere. However, the actual detection of such waves in the real world has lagged behind the theory.
Now in a new study by Takatoshi Sakazaki, an assistant professor at the Kyoto University Graduate School of Science, and Kevin Hamilton, an Emeritus Professor in the Department of Atmospheric Sciences and the International Pacific Research Center at the University of Hawai’i at Mānoa, the authors present a detailed analysis of observed atmospheric pressure over the globe every hour for 38 years. The results clearly revealed the presence of dozens of the predicted wave modes.
The study focused particularly on waves with periods between 2 and 33 hours that travel horizontally through the atmosphere, moving around the globe at great speeds (exceeding 700 miles per hour). These waves set up a characteristic “chequerboard” pattern of high and low pressure as they propagate (see figure).
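As a rough plausibility check on those figures (a back-of-envelope sketch, not a calculation from the paper; the effective air-column temperature used here is an assumed round value), these Lamb-type pressure waves travel at roughly the speed of sound for the atmosphere’s effective temperature:

```python
import math

GAMMA = 1.4    # ratio of specific heats for dry air
R_AIR = 287.0  # specific gas constant for dry air, J/(kg K)
T_EFF = 245.0  # assumed effective temperature of the air column, K

# Horizontal speed of a Lamb-type wave is approximately the
# speed of sound at the effective temperature.
c = math.sqrt(GAMMA * R_AIR * T_EFF)   # metres per second
c_mph = c * 3600 / 1609.344            # convert to miles per hour

# Time for such a wave to lap the equator (~40,075 km),
# comparable to the longest periods reported in the study.
period_hours = 40_075_000 / c / 3600

print(f"wave speed ≈ {c:.0f} m/s ≈ {c_mph:.0f} mph")
print(f"one lap of the equator ≈ {period_hours:.1f} hours")
```

The result lands right around the 700 mph the study quotes, and one equatorial circuit takes on the order of a day, consistent with the 2–33 hour periods examined.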
Pressure patterns for 4 of the modes as they propagate around the globe. Credit: Sakazaki and Hamilton (2020)
“For these rapidly moving wave modes, our observed frequencies and global patterns match those theoretically predicted very well,” stated lead author Sakazaki. “It is exciting to see the vision of Laplace and other pioneering physicists so completely validated after two centuries.”
But this discovery does not mean their work is done.
“Our identification of so many modes in real data shows that the atmosphere is indeed ringing like a bell,” commented co-author Hamilton. “This finally resolves a longstanding and classic issue in atmospheric science, but it also opens a new avenue of research to understand both the processes that excite the waves and the processes that act to damp the waves.”
So let the atmospheric music play on!
More information: Takatoshi Sakazaki et al, An Array of Ringing Global Free Modes Discovered in Tropical Surface Pressure Data, Journal of the Atmospheric Sciences (2020). DOI: 10.1175/JAS-D-20-0053.1
Cops in Detroit have admitted using facial-recognition technology that fails to accurately identify potential suspects a whopping 96 per cent of the time.
The revelation was made by the force’s chief, James Craig, during a public hearing this week. Craig was grilled over the wrongful arrest of Robert Williams, who was mistaken for a shoplifter by facial-recognition software used by officers.
“If we would use the software only [to identify subjects], we would not solve the case 95-97 per cent of the time,” Craig said, Vice first reported. “That’s if we relied totally on the software, which would be against our current policy … If we were just to use the technology by itself, to identify someone, I would say 96 per cent of the time it would misidentify.”
The software was developed by DataWorks Plus, a biometric technology biz based in South Carolina. Multiple studies have demonstrated facial-recognition algorithms often struggle with identifying women and people with darker skin compared to Caucasian men.
Fraunhofer HHI (together with partners from industry including Apple, Ericsson, Intel, Huawei, Microsoft, Qualcomm, and Sony) is celebrating the release and official adoption of the new global video coding standard H.266/Versatile Video Coding (VVC). The new standard offers improved compression, reducing the bit rate by around 50% relative to the previous standard, H.265/High Efficiency Video Coding (HEVC), without compromising visual quality. In other words, H.266/VVC offers faster video transmission at equal perceptual quality. Overall, H.266/VVC provides efficient transmission and storage of all video resolutions from SD to HD up to 4K and 8K, while supporting high dynamic range video and omnidirectional 360° video.
[…]
Through a reduction of data requirements, H.266/VVC makes video transmission in mobile networks (where data capacity is limited) more efficient. For instance, the previous standard H.265/HEVC requires ca. 10 gigabytes of data to transmit a 90-min UHD video. With this new technology, only 5 gigabytes of data are required to achieve the same quality. Because H.266/VVC was developed with ultra-high-resolution video content in mind, the new standard is particularly beneficial when streaming 4K or 8K videos on a flat screen TV. Furthermore, H.266/VVC is ideal for all types of moving images: from high-resolution 360° video panoramas to screen sharing contents.
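The 10 GB vs. 5 GB example works out to the following average bitrates (a simple arithmetic sketch using decimal gigabytes; the exact figures depend on how the standard bodies round their units):

```python
def avg_bitrate_mbps(size_gb: float, minutes: float) -> float:
    """Average bitrate in Mbit/s for a video of size_gb played over `minutes`."""
    bits = size_gb * 1e9 * 8          # decimal gigabytes to bits
    return bits / (minutes * 60) / 1e6

hevc = avg_bitrate_mbps(10, 90)  # H.265/HEVC: ~10 GB for a 90-min UHD video
vvc = avg_bitrate_mbps(5, 90)    # H.266/VVC: ~5 GB at the same quality

print(f"HEVC ≈ {hevc:.1f} Mbit/s, VVC ≈ {vvc:.1f} Mbit/s")
```

Roughly 15 Mbit/s falling to about 7.4 Mbit/s, which is what makes the difference on capacity-limited mobile links.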
I’ve seen a lot of people — including those who are supporting the publishers’ legal attack on the Internet Archive — insist that they “support libraries,” but that the Internet Archive’s Open Library and National Emergency Library are “not libraries.” First off, they’re wrong. But, more importantly, it’s good to see actual librarians now coming out in support of the Internet Archive as well. The Association of Research Libraries has put out a statement asking publishers to drop this counterproductive lawsuit, especially since the Internet Archive has shut down the National Emergency Library.
The Association of Research Libraries (ARL) urges an end to the lawsuit against the Internet Archive filed early this month by four major publishers in the United States District Court Southern District of New York, especially now that the National Emergency Library (NEL) has closed two weeks earlier than originally planned.
As the ARL points out, the Internet Archive has been an astounding “force for good” for the dissemination of knowledge and culture — and that includes introducing people to more books.
For nearly 25 years, the Internet Archive (IA) has been a force for good by capturing the world’s knowledge and providing barrier-free access for everyone, contributing services to higher education and the public, including the Wayback Machine that archives the World Wide Web, as well as a host of other services preserving software, audio files, special collections, and more. Over the past four weeks, IA’s Open Library has circulated more than 400,000 digital books without any user cost—including out-of-copyright works, university press titles, and recent works of academic interest—using controlled digital lending (CDL). CDL is a practice whereby libraries lend temporary digital copies of print books they own in a one-to-one ratio of “loaned to owned,” and where the print copy is removed from circulation while the digital copy is in use. CDL is a practice rooted in the fair use right of the US Copyright Act and recent judicial interpretations of that right. During the COVID-19 pandemic, many academic and research libraries have relied on CDL (including IA’s Open Library) to ensure academic and research continuity at a time when many physical collections have been inaccessible.
As ARL and our partner library associations acknowledge, many publishers (including some involved in the lawsuit) are contributing to academic continuity by opening more content during this crisis. As universities and libraries work to ensure scholars and students have the information they need, ARL looks forward to working with publishers to ensure open and equitable access to information. Continuing the litigation against IA for the purpose of recovering statutory damages and shuttering the Open Library would interfere with this shared mutual objective.
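The controlled digital lending practice ARL describes boils down to a one-to-one invariant: the number of digital copies on loan can never exceed the number of print copies owned and withdrawn from circulation. A minimal sketch (a hypothetical model for illustration, not the Internet Archive’s actual system):

```python
class CDLTitle:
    """Sketch of controlled digital lending's one-to-one rule.

    Each digital loan corresponds to one owned print copy removed
    from circulation; a loan is granted only while owned copies
    remain available.
    """

    def __init__(self, owned_print_copies: int):
        self.owned = owned_print_copies
        self.loaned = 0

    def lend(self) -> bool:
        if self.loaned < self.owned:  # enforce loaned <= owned
            self.loaned += 1
            return True
        return False                  # all copies out; borrower must wait

    def return_copy(self) -> None:
        if self.loaned > 0:
            self.loaned -= 1

book = CDLTitle(owned_print_copies=2)
print(book.lend(), book.lend(), book.lend())  # third loan is refused
```

The whole point of the “loaned to owned” ratio is that, at any instant, no more copies circulate than the library physically bought — which is why CDL proponents argue it mirrors traditional lending.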
It would be nice if the publishers recognized this, but as we’ve said over and over again, these publishers would sue any library if libraries didn’t already exist. The fact that the Open Library looks just marginally different from a traditional library means they’re unlikely to let go of this stupid, counterproductive lawsuit.
In one of the largest law enforcement busts ever, European police and crime agencies hacked an encrypted communications platform used by thousands of criminals and drug traffickers. By infiltrating the platform, Encrochat, police across Europe gained access to a hundred million encrypted messages. In the UK, those messages helped officials arrest 746 suspects, seize £54 million (about $67 million) and confiscate 77 firearms and two tonnes of Class A and B drugs, the National Crime Agency (NCA) reported. According to Vice, police also made arrests in France, the Netherlands, Norway and Sweden.
Encrochat promised highly secure phones that, as Vice explains, were essentially modified Android devices. The company installed its own encrypted messaging platform, removed the GPS, camera and microphone functions and offered features like the ability to wipe the device with a PIN. The phones could make VOIP calls and send texts, but they did little else. They ran two operating systems, one of which appeared normal to evade suspicion. Encrochat used a subscription model, which cost thousands of dollars per year, and users seemed to think that it was foolproof.
Law enforcement agencies began collecting data from Encrochat on April 1st. According to the BBC, the encryption code was likely cracked in early March. It’s not clear exactly how officials hacked the platform, which is now shut down.
As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought.
The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.
“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”
That which must not be said
Examples of words or word sequences that provide false triggers include
Alexa: “unacceptable,” “election,” and “a letter”
Google Home: “OK, cool,” and “Okay, who is reading”
Siri: “a city” and “hey jerry”
Microsoft Cortana: “Montana”
The two videos below show a GoT character saying “a letter” and a Modern Family character uttering “hey Jerry,” activating Alexa and Siri, respectively.
Accidental Trigger #1 – Alexa – Cloud
Accidental Trigger #3 – Hey Siri – Cloud
In both cases, the phrases activate the device locally, where algorithms analyze the phrases; after mistakenly concluding that these are likely a wake word, the devices then send the audio to remote servers where more robust checking mechanisms also mistake the words for wake terms. In other cases, the words or phrases trick only the local wake word detection but not algorithms in the cloud.
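The two-stage check described above — a deliberately forgiving on-device detector followed by a stricter cloud verifier — can be sketched like this (hypothetical thresholds and scores for illustration; no vendor’s actual pipeline works on a single scalar score):

```python
def local_detector(score: float) -> bool:
    # On-device check is deliberately permissive (low threshold),
    # so the assistant rarely misses a genuine wake word.
    return score > 0.4

def cloud_verifier(score: float) -> bool:
    # Server-side model is stricter, but as the study shows,
    # it too can be fooled by sound-alike phrases.
    return score > 0.8

def handle_audio(score: float) -> str:
    if not local_detector(score):
        return "ignored"                  # audio never leaves the device
    if cloud_verifier(score):
        return "recorded and processed"   # audio reaches the vendor's servers
    return "uploaded then discarded"      # false trigger caught in the cloud

# A sound-alike phrase such as "a letter" can score high enough
# to slip past both stages:
print(handle_audio(0.9))
```

Note the privacy asymmetry: even when the cloud check correctly rejects a false trigger, the audio has already been uploaded — which is exactly the researchers’ concern.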
Unacceptable privacy intrusion
When devices wake, the researchers said, they record a portion of what’s said and transmit it to the manufacturer. The audio may then be transcribed and checked by employees in an attempt to improve word recognition. The result: fragments of potentially private conversations can end up in the company logs.
The research paper, titled “Unacceptable, where is my privacy?,” is the product of Lea Schönherr, Maximilian Golla, Jan Wiele, Thorsten Eisenhofer, Dorothea Kolossa, and Thorsten Holz of Ruhr University Bochum and Max Planck Institute for Security and Privacy. In a brief write-up of the findings, they wrote:
Our setup was able to identify more than 1,000 sequences that incorrectly trigger smart speakers. For example, we found that depending on the pronunciation, «Alexa» reacts to the words “unacceptable” and “election,” while «Google» often triggers to “OK, cool.” «Siri» can be fooled by “a city,” «Cortana» by “Montana,” «Computer» by “Peter,” «Amazon» by “and the zone,” and «Echo» by “tobacco.” See videos with examples of such accidental triggers here.
In our paper, we analyze a diverse set of audio sources, explore gender and language biases, and measure the reproducibility of the identified triggers. To better understand accidental triggers, we describe a method to craft them artificially. By reverse-engineering the communication channel of an Amazon Echo, we are able to provide novel insights on how commercial companies deal with such problematic triggers in practice. Finally, we analyze the privacy implications of accidental triggers and discuss potential mechanisms to improve the privacy of smart speakers.
The researchers analyzed voice assistants from Amazon, Apple, Google, Microsoft, and Deutsche Telekom, as well as three Chinese models by Xiaomi, Baidu, and Tencent. Results published on Tuesday focused on the first four. Representatives from Apple, Google, and Microsoft didn’t immediately respond to a request for comment.
The full paper hasn’t yet been published, and the researchers declined to provide a copy ahead of schedule. The general findings, however, already provide further evidence that voice assistants can intrude on users’ privacy even when people don’t think their devices are listening. For those concerned about the issue, it may make sense to keep voice assistants unplugged, turned off, or blocked from listening except when needed—or to forgo using them at all.
I’ve used a Samsung Galaxy smartphone almost every day for nearly four years. I stuck with them because Samsung had fantastic hardware matched by (usually) excellent software. But in 2020, a Samsung phone is no longer my daily driver, and there’s one simple reason: ads.
Ads Everywhere
Ads in Samsung phones never really bothered me, at least not until the past few months. It started with the Galaxy Z Flip. A tweet from Todd Haselton of CNBC, embedded below, is what really caught my eye: Samsung had put an ad from DirecTV in the stock dialer app. This is something I never would have expected from any smartphone company, let alone Samsung.
Samsung has ads in the phone app — the app you open to place calls — on its $1,380 Galaxy Z Flip. 🙄 pic.twitter.com/tuWr2PxSYh
It showed up in the “Places” tab of the dialer app, which is run in partnership with Yelp and lets you search for businesses directly from the dialer so you don’t need to Google a place to find its address or phone number. I looked into whether this was a mistake on Yelp’s part, an ad accidentally displayed where it shouldn’t have been, but no: the ad was placed by Samsung, in a spot where it could blend in so the company could make money.
Similar ads exist throughout a bunch of Samsung apps. Samsung Music has ads that look like another track in your library. Samsung Health and Samsung Pay have promotional banner ads. The stock weather app has ads that look like news stories. Most of these apps also carry far more blatant advertising on top of that.
Samsung Music will give you a popup ad for Sirius XM, even though Spotify is built into the Samsung Music app. You can hide the SiriusXM popup, but only for 7 days at a time. A week later, it will be right back there waiting for you. Samsung will also give you push notification ads for new products from Bixby, Samsung Pay, and Samsung Push Service.
Got this on my Note9. Not sure if it’s because my Note9 thinks it’s a Note10 or if it’s an ad. pic.twitter.com/CeD2KfmidE
To really understand Samsung’s absurd and terrible advertising on its smartphones, you have to understand why big companies advertise. Google advertises because its “free services” still cost money to provide. The ads it serves you in Google services help cover the cost of that 15GB of storage, a Google Voice phone number, unlimited Google Photos storage, and so on. That’s all to say there is a reason for it: you are getting something in return for those ads.
Websites and YouTube channels serve ads because the content they provide to you for free is not free for them to make. Again, you are getting something for free, and viewing an ad acts as a form of payment: no purchase of a product, hardware or software, was required for you to access their content and services.
Even Samsung’s top-tier foldables come packed with ads.
Where Samsung differs is that you are paying — for its hardware. My $1,980 Galaxy Fold serves me ads while I use the phone as anyone normally would. While Samsung doesn’t disclose the profit margins on its products, it would not strain anybody’s imagination to suggest that those margins could cover the cost of these services tenfold. I could maybe understand ads on sub-$300 phones where margins are likely much slimmer, but I think we can all agree that a phone costing anywhere near $1,000 (or in my case, far more) should not be riddled with advertisements. Margins should be high enough to cover these services, and if they aren’t, Samsung is running a bad business.
These ads are showing up on my $1,980 Galaxy Fold, $1,380 Z Flip, $1,400 S20 Ultra, $1,200 S20+, $1,100 Note 10+, $1,000 S10+, and $750 S10e, along with the $100 A10e. I can understand it on a $100 phone, but it is inexcusable on a $750 phone, let alone a $1,980 one.
Every other major phone manufacturer provides basically the same services without requiring ads in their stock apps to subsidize them. OnePlus, OPPO, Huawei, and LG all have stock weather apps, payment apps, phone apps, and even health apps that don’t show ads. Sure, some of these OEMs include pre-installed bloatware, like Facebook, Spotify, and Netflix, but these can generally be disabled or uninstalled. Samsung’s ads cannot (at least not fully).
When you consider that Samsung not only sells among the most expensive smartphones money can buy, but is blatantly using them as an ad revenue platform, you’re left with one obvious conclusion: Samsung is being greedy. It hopes most customers won’t switch to other phones and will simply ignore the ads and put up with them. That tactic was largely working — until Samsung started pushing more ads into more apps.
You can’t disable them
If you’re a Samsung user who’s read through all of this, you might be wondering “how do I shut off the ads?” The answer is, unfortunately, you (mostly) can’t.
You can disable Samsung Push Services, which is sometimes used to feed you notifications from Samsung apps. So disabling Push Services means no more push notification ads, but also no more push notifications at all in some Samsung apps.
Designers often rely on their smartphones for snapping a quick photo of something that inspires them, but Pantone has found a way to turn their smartphone into a genuine design tool. As part of a new online service, it’s created a small card that can be used to accurately sample real world colors by simply holding the card against an object and taking a photo.
[…]
There are existing solutions to this problem. Pantone itself sells handheld devices that use highly calibrated sensors and controlled lighting to sample a real-life color when placed directly on an object. After sampling, the device tells you how to recreate the color in your design software. The problem is that they can set you back well north of $700, a cost that only makes sense if the design work you’re doing is especially color-critical and accuracy is paramount.
At $15, the Pantone Color Match Card is a much cheaper solution, and it’s one that can be carried in your wallet. When you find a color you want to sample in the real world, you place the card atop it, with the hole in the middle revealing that color, and then take a photo using the Pantone Connect app available for iOS and Android devices.
The app knows the precise color measurements of all the colored squares printed on the rest of the card, which it uses as a reference to accurately calibrate and measure the color you’re sampling. It then attempts to closely match the selection to a shade indexed in the Pantone color archive. The results can be shared to design apps like Adobe Photoshop and Adobe Illustrator using Pantone’s other software tools, and while you can use the app and the Color Match Card with a free Pantone Connect account, a paid account is needed for some of the more advanced interoperability functionality.
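The calibration idea — use patches with known values to correct for the lighting, then apply the same correction to the sampled color — can be sketched with a single least-squares gain per channel (a deliberate oversimplification with made-up numbers; the real app presumably fits a full color transform, not one scalar per channel):

```python
def calibrate_channel(measured_refs, true_refs):
    """Least-squares gain mapping measured patch values to their known values."""
    num = sum(m * t for m, t in zip(measured_refs, true_refs))
    den = sum(m * m for m in measured_refs)
    return num / den

# Known red-channel values printed on the card vs. what the phone
# camera recorded under ambient light (illustrative numbers only):
true_red = [200, 120, 60]
measured_red = [160, 96, 48]   # the photo came out 20% too dark

gain = calibrate_channel(measured_red, true_red)

# Apply the same correction to the colour seen through the card's hole:
sample_measured = 100
sample_corrected = sample_measured * gain
print(round(gain, 2), round(sample_corrected, 1))
```

Because the reference patches and the sampled color are photographed under identical lighting in the same frame, the correction learned from the patches transfers directly to the sample — which is the whole trick behind a $15 card replacing a $700 device for casual use.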
How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.
The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.
It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.
In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has “made significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records or content.”
“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.
Transparency reports offer rare insights into the number of demands or requests a company receives from governments for user data. These reports are not mandatory, but they are important for understanding the scale and scope of government surveillance.
Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the Zoom accounts of two U.S.-based activists and one Hong Kong activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.
Twenty consumer and citizen rights groups have published an open letter [PDF] urging regulators to pay closer attention to Google parent Alphabet’s planned acquisition of Fitbit.
The letter describes the pending purchase as a “game-changer” that will test regulators’ resolve to analyse how the vast quantities of health and location data slurped by Google would affect broader market competition.
“Google could exploit Fitbit’s exceptionally valuable health and location datasets, and data collection capabilities, to strengthen its already dominant position in digital markets such as online advertising,” the group warned.
Signatories to the letter include US-based Color of Change, Center for Digital Democracy and the Omidyar Network, the Australian Privacy Foundation, and BEUC – the European Consumer Organisation.
Google confirmed its intent to acquire Fitbit for $2.1bn in November. The deal is still pending, subject to regulator approval. Google has sought the green light from the European Commission, which is expected to publish its decision on 20 July.
The EU’s executive branch can either approve the buy (with or without additional conditions) or opt to start a four-month investigation.
The US Department of Justice has also started its own investigation, requesting documents from both parties. If the deal is stopped, Google will be forced to pay a $250m termination fee to Fitbit.
Separately, the Australian Competition and Consumer Commission (ACCC) has voiced concerns that the Fitbit-Google deal could have a distorting effect on the advertising market.
“Buying Fitbit will allow Google to build an even more comprehensive set of user data, further cementing its position and raising barriers to entry for potential rivals,” said ACCC chairman Rod Sims last month.
“User data available to Google has made it so valuable to advertisers that it faces only limited competition.”
The Register has asked Google and Fitbit for comment. ®
Updated at 14:06 UTC 02/07/20 to add
A Google spokesperson told The Reg: “Throughout this process we have been clear about our commitment not to use Fitbit health and wellness data for Google ads and our responsibility to provide people with choice and control with their data.
“Similar to our other products, with wearables, we will be transparent about the data we collect and why. And we do not sell personal information to anyone.”
This latest device succeeds the previous Librem 13 laptop, which ran for four generations, and includes a slightly bigger display, a hexa-core Ice Lake Intel Core i7 processor, gigabit Ethernet, and USB-C. As the name implies, the Librem 14 packs a 14-inch, 1920×1080 IPS display. Purism said this comes without increasing the laptop’s dimensions thanks to smaller bezels. You can find the full specs here.
Crucially, it is loaded with the usual privacy features found in Purism’s kit, such as hardware kill switches that disconnect the microphone and webcam from the laptop’s circuitry. It also comes with the firm’s PureBoot tech, which includes Purism’s in-house coreboot BIOS replacement, and a mostly excised Intel Management Engine (IME).
The IME is a hidden coprocessor included in most of Chipzilla’s chipsets since 2008. It allows system administrators to remotely manage devices using out-of-band communications. But it’s also controversial in the security community since it’s somewhat of a black box.
There is little by way of public documentation. Intel hasn’t released the source code. And, to add insult to injury, it’s also proven vulnerable to exploitation in the past.
Facebook said that it continued sharing user data with approximately 5,000 developers even after their apps’ access had expired.
The incident is related to a security control that Facebook added to its systems following the Cambridge Analytica scandal of early 2018.
Responding to criticism that it allowed app developers too much access to user information, Facebook at the time added a mechanism to its API that prevented apps from accessing a user’s data if the user had not used the app in the past 90 days.
However, Facebook said that it recently discovered that in some instances, this safety mechanism failed to activate and allowed some apps to continue accessing user information even past the 90-day cutoff date.
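The safeguard that failed amounts to a simple inactivity cutoff. A sketch of the intended rule (hypothetical logic for illustration; Facebook has not published its actual implementation, and the bug was precisely that this check sometimes did not run):

```python
from datetime import datetime, timedelta

CUTOFF = timedelta(days=90)

def app_may_access(last_active: datetime, now: datetime) -> bool:
    """An app keeps access to a user's data only while the user
    has been active in that app within the last 90 days."""
    return now - last_active <= CUTOFF

now = datetime(2020, 7, 1)
print(app_may_access(datetime(2020, 5, 1), now))  # active 61 days ago
print(app_may_access(datetime(2020, 1, 1), now))  # inactive ~6 months
```

When a gate like this silently fails to fire, access simply never expires — which is why the lapse went unnoticed until Facebook audited the data flows.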
[…]
“From the last several months of data we have available, we currently estimate this issue enabled approximately 5,000 developers to continue receiving [user] information,” Papamiltiadis said.
The company didn’t clarify how many users were impacted, i.e. had their data made available to app developers even after they stopped using an app.
If I told you that my entire computer screen just got taken over by a new app that I’d never installed or asked for — it just magically appeared on my desktop, my taskbar, and preempted my next website launch — you’d probably tell me to run a virus scanner and stay away from shady websites, no?
But the insanely intrusive app I’m talking about isn’t a piece of ransomware. It’s Microsoft’s new Chromium Edge browser, which the company is now force-feeding users via an automatic update to Windows.
Seriously, when I restarted my Windows 10 desktop this week, an app I’d never asked for:
- Immediately launched itself
- Tried to convince me to migrate away from Chrome, giving me no discernible way to click away or say no
- Pinned itself to my desktop and taskbar
- Ignored my previous browser preference by asking me, the next time I launched a website, whether I was sure I wanted to use Chrome instead of Microsoft’s oh-so-humble recommendation
A Windows 10 update forces a full screen @MicrosoftEdge window, which cannot be closed from the taskbar, or CTRL W, or even ALT F4. You must press “get started,” then the X, and even then it pops up a welcome screen. And pins itself to the taskbar. pic.twitter.com/mEhEbqpIc7
Did I mention that, as of this update, you can’t uninstall Edge anymore?
It all immediately made me think: what would the antitrust enforcers of the ‘90s, who punished Microsoft for bundling Internet Explorer with Windows, think about this modern abuse of Microsoft’s platform?
*wakes up and discovers they not only decided to install Edge on my computer without my consent but also pinned it to my taskbar* …no. NO
“We care about your privacy” Microsoft Edge says as it quietly installs on my computer, opens up in the morning, and once more reminds me that Windows 7 sucks and plz update to the other O/S.
But mostly, I’m surprised Microsoft would shoot itself in the foot by stooping so low, using tactics I’ve only ever seen from purveyors of adware, spyware, and ransomware. I installed this copy of Windows with a disk I purchased, by the way. Maybe I’m old-fashioned, but I like to think I still own my desktop and get to decide what I put there.
That’s especially true of owners of Windows 7 and Windows 8, I imagine, who are also receiving unwanted gift copies of the new Edge right now:
If windows 7 isn’t supported then why did my Work machine automatically install Microsoft EDGE last night 😐
— DJ_Uchuu – Silicon Dreams Comin’ 3rd July (@DjUchuu) June 30, 2020
On Sunday morning, local time in New Zealand, Rocket Lab launched its 13th mission. The booster’s first stage performed normally, but just as the second stage neared an altitude of 200km, something went wrong and the vehicle was lost.
In the immediate aftermath of the failure, the company did not provide any additional information about the problem that occurred with the second stage.
“We lost the flight late into the mission,” said Peter Beck, the company’s founder and chief executive, on Twitter. “I am incredibly sorry that we failed to deliver our customers satellites today. Rest assured we will find the issue, correct it and be back on the pad soon.”
The mission, dubbed “Pics Or It Didn’t Happen,” carried five SuperDove satellites for the imaging company Planet, as well as commercial payloads for both Canon Electronics and In-Space Missions.
“The In-Space team is absolutely gutted by this news,” the company said after the loss. Its Faraday-1 spacecraft hosted multiple experiments within a 6U CubeSat. “Two years of hard work from an incredibly committed group of brilliant engineers up in smoke. It really was a very cool little spacecraft.”
Before this weekend’s failure, Rocket Lab had enjoyed an excellent run of success. The company’s first test flight, in May 2017, was lost at an altitude of 224km due to a ground software issue. But beginning with its next flight in January 2018 and continuing through June 2020, the company rattled off a string of 11 successful missions and emerged as a major player in the small satellite launch industry. It has built two additional launch pads, one in New Zealand and another in Virginia in the US, and taken steps toward reusing its first stage booster.
It seems likely that Rocket Lab will make good on Beck’s promise to address this failure and return to flight soon. His was the first commercial company in a new generation of small satellite rocket developers to reach orbit, and even now it remains the only one to have done so. Other competitors, including Virgin Orbit, Astra, and Firefly, may reach orbit later this year. But Rocket Lab has plenty of experience to draw upon as it works to identify the underlying problem with its second stage and fix it. There can be little doubt that it will.
Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and business manage and police Artificial Intelligence systems’ biases towards making unethical, and potentially very costly and damaging, commercial choices: an ethical eye on AI.
Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.
The AI has a vast number of potential strategies to choose from, but some are unethical, and adopting them incurs not just a moral cost but a significant potential economic one: stakeholders will exact a penalty if they discover such a strategy has been used. Regulators may levy fines running to billions of dollars, pounds or euros, customers may boycott the company, or both.
So in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.
Mathematicians and statisticians from the University of Warwick, Imperial College London, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and providing a simple formula to estimate its impact. They lay out the full details in a paper titled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1 July 2020.
The four authors of the paper are Nicholas Beale of Sciteb Ltd; Heather Battey of the Department of Mathematics, Imperial College London; Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and Professor Robert MacKay of the Mathematics Institute of the University of Warwick.
Professor Robert MacKay of the Mathematics Institute of the University of Warwick said:
“Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Optimisation can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.
“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process.”
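The intuition behind the principle can be illustrated with a toy simulation (this is my own sketch, not the paper’s formula; the strategy counts, payoff distributions and the size of the unethical “edge” are all made-up assumptions). If even a small fraction of a large strategy space is unethical, and those strategies look slightly more profitable to an optimiser whose objective omits fines and boycott risk, then naive profit maximisation will pick an unethical strategy far more often than that fraction alone would suggest:

```python
import random

random.seed(0)

N = 10_000          # size of the strategy space
P_UNETHICAL = 0.05  # fraction of strategies that are unethical

def simulate(trials=200):
    """Estimate how often naive profit maximisation over the strategy
    space ends up selecting an unethical strategy."""
    picked_unethical = 0
    for _ in range(trials):
        best_profit, best_is_unethical = float("-inf"), False
        for _ in range(N):
            unethical = random.random() < P_UNETHICAL
            # Unethical strategies appear more profitable to the optimiser
            # because its objective ignores the penalty stakeholders would
            # apply if the strategy were discovered.
            profit = random.gauss(1.0, 1.0) + (1.0 if unethical else 0.0)
            if profit > best_profit:
                best_profit, best_is_unethical = profit, unethical
        picked_unethical += best_is_unethical
    return picked_unethical / trials
```

Under these assumptions the optimiser selects an unethical strategy in well over 5 per cent of runs, which is the point of the principle: inspecting the optimiser’s favourite strategies is a disproportionately good place to look for hidden unethical ones.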
Last Friday, it was reported that Canadian smart glasses startup North was on the verge of being snapped up by Alphabet, Google’s parent company. Today, it’s official.
North announced the acquisition on both Twitter and in an official blog. Details regarding the terms of the sale were scant, though a Globe and Mail scoop from Friday put the number at around $180 million. North’s remaining staff will, however, be staying in Kitchener-Waterloo, Canada and joining a Google team also based there.
“We couldn’t be more thrilled to join Google, and to take an exciting next step towards the future we’ve been focused on for the past eight years,” wrote North co-founders Stephen Lake, Matthew Bailey, and Aaron Grant in the blog.
[…]
Well, it looks like with the acquisition, we’ll never know if a Focals 2.0 would’ve fixed the problems of the original. North’s blog says the company will not only be winding down Focals 1.0, but that the Focals 2.0 will not ship. At the end of the blog, North provides an email for refund requests, and notes that customer support will continue through the end of 2020. And, if Twitter is any indication, refund emails to existing North customers have already begun hitting inboxes.
Note: this article claims that Google was the first company with smart glasses, but I’m pretty sure that Recon Instruments – another company that was later bought up – would disagree.
I talked about this problem during DORS/CLUC in 2019
The internet’s domain names have become potentially trademarkable following a decision by the US Supreme Court today that Booking.com can in fact be registered with America’s Patent and Trademark Office (PTO) – against officials’ objections.
The near-unanimous decision [PDF] – Justice Stephen Breyer was the sole rebel – went against the PTO’s legal arguments that adding “.com” to a generic term was like adding “company” to a word and so “conveys no additional meaning that would distinguish [one provider’s] services from those of other providers.”
The Supreme Court disagreed, at some length. It agreed with both the district court and the appeals court that “consumers do not in fact perceive the term ‘Booking.com’ that way.” It cited as a key piece of evidence a survey that showed 75 per cent of respondents thought ‘Booking.com’ was a brand name, whereas just 24 per cent believed it was a generic name.
It didn’t help that the PTO hasn’t followed its own argument in the past, with the court noting trademark registration #3,601,346 for Art.com and #2,580,467 for Dating.com. If the decision went against Booking.com, the Supreme Court reasoned, then existing approved trademarks would “be at risk of cancellation.” But it was also scathing in its assessment that “we discern no support for the PTO’s current view in trademark law or policy.”
However, the same survey that showed 75 per cent of people felt Booking.com was a brand also revealed that only 33 per cent felt “Washingmachine.com” was a brand, while 61 per cent thought it was generic. That subjective measurement is likely to prove a major headache for the PTO as it decides on what will presumably now be a rush of .com trademark applications.