Designers often rely on their smartphones for snapping a quick photo of something that inspires them, but Pantone has found a way to turn the smartphone into a genuine design tool. As part of a new online service, it’s created a small card that can be used to accurately sample real-world colors simply by holding the card against an object and taking a photo.
[…]
There are existing solutions to this problem. Even Pantone itself sells handheld devices that use highly calibrated sensors and controlled lighting to sample a real-life color when placed directly on an object. After sampling, the device lets you know how to recreate it in your design software. The problem is they can set you back well north of $700, a price that’s only justifiable if the design work you’re doing is especially color-critical and accuracy is paramount.
At $15, the Pantone Color Match Card is a much cheaper solution, and it’s one that can be carried in your wallet. When you find a color you want to sample in the real world, you place the card atop it, with the hole in the middle revealing that color, and then take a photo using the Pantone Connect app available for iOS and Android devices.
The app knows the precise color measurements of all the colored squares printed on the rest of the card, which it uses as a reference to accurately calibrate and measure the color you’re sampling. It then attempts to closely match the selection to a shade indexed in the Pantone color archive. The results can be shared to design apps like Adobe Photoshop and Adobe Illustrator using Pantone’s other software tools, and while you can use the app and the Color Match Card with a free Pantone Connect account, a paid account is needed for some of the more advanced interoperability functionality.
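The idea behind the card can be sketched in code. The following is a minimal illustration, not Pantone’s actual algorithm: it estimates per-channel gains from reference patches with known printed values (a crude white-balance model; real calibration fits a full color-correction matrix) and applies them to the sampled pixel. All patch values here are made up for illustration.

```python
# Hedged sketch: NOT Pantone's actual algorithm. Illustrates using
# reference patches with known colors to correct a photo's color cast.

def calibrate(observed_patches, true_patches):
    """Per-channel gain estimated from reference patches (a crude
    white-balance model; real calibration fits a full color matrix)."""
    gains = []
    for c in range(3):
        num = sum(t[c] for t in true_patches)
        den = sum(o[c] for o in observed_patches)
        gains.append(num / den)
    return gains

def correct(pixel, gains):
    """Apply the estimated gains to a sampled pixel, clamped to 0-255."""
    return tuple(min(255, round(p * g)) for p, g in zip(pixel, gains))

# Photo taken under warm light: every channel is shifted toward red.
observed = [(140, 90, 80), (250, 180, 160)]   # patches as captured
truth    = [(128, 100, 96), (230, 200, 192)]  # known printed values
gains = calibrate(observed, truth)
sampled = correct((150, 108, 96), gains)      # the color under the hole
```

Once the color is corrected into a device-independent estimate, matching it against an indexed archive is a nearest-neighbor search over the catalog.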
How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.
The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.
It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.
In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has “made significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records or content.”
“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.
Transparency reports offer rare insights into the number of demands or requests a company gets from the government for user data. These reports are not mandatory, but are important to understand the scale and scope of government surveillance.
Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the Zoom accounts of two U.S.-based activists and one Hong Kong activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.
Twenty consumer and citizen rights groups have published an open letter [PDF] urging regulators to pay closer attention to Google parent Alphabet’s planned acquisition of Fitbit.
The letter describes the pending purchase as a “game-changer” that will test regulators’ resolve to analyse how the vast quantities of health and location data slurped by Google would affect broader market competition.
“Google could exploit Fitbit’s exceptionally valuable health and location datasets, and data collection capabilities, to strengthen its already dominant position in digital markets such as online advertising,” the group warned.
Signatories to the letter include US-based Color of Change, Center for Digital Democracy and the Omidyar Network, the Australian Privacy Foundation, and BEUC – the European Consumer Organisation.
Google confirmed its intent to acquire Fitbit for $2.1bn in November. The deal is still pending, subject to regulator approval. Google has sought the green light from the European Commission, which is expected to publish its decision on 20 July.
The EU’s executive branch can either approve the buy (with or without additional conditions) or opt to start a four-month investigation.
The US Department of Justice has also started its own investigation, requesting documents from both parties. If the deal is stopped, Google will be forced to pay a $250m termination fee to Fitbit.
Separately, the Australian Competition and Consumer Commission (ACCC) has voiced concerns that the Fitbit-Google deal could have a distorting effect on the advertising market.
“Buying Fitbit will allow Google to build an even more comprehensive set of user data, further cementing its position and raising barriers to entry for potential rivals,” said ACCC chairman Rod Sims last month.
“User data available to Google has made it so valuable to advertisers that it faces only limited competition.”
The Register has asked Google and Fitbit for comment. ®
Updated at 14:06 UTC 02/07/20 to add
A Google spokesperson told The Reg: “Throughout this process we have been clear about our commitment not to use Fitbit health and wellness data for Google ads and our responsibility to provide people with choice and control with their data.
“Similar to our other products, with wearables, we will be transparent about the data we collect and why. And we do not sell personal information to anyone.”
This latest device succeeds the previous Librem 13 laptop, which ran for four generations, and includes a slightly bigger display, a hexa-core Ice Lake Intel Core i7 processor, gigabit Ethernet, and USB-C. As the name implies, the Librem 14 packs a 14-inch, 1920×1080 IPS display. Purism said this comes without increasing the laptop’s dimensions thanks to smaller bezels. You can find the full specs here.
Crucially, it is loaded with the usual privacy features found in Purism’s kit such as hardware kill switches that disconnect the microphone and webcam from the laptop’s circuitry. It also comes with the firm’s PureBoot tech, which includes Purism’s in-house CoreBoot BIOS replacement, and a mostly excised Intel Management Engine (IME).
The IME is a hidden coprocessor included in most of Chipzilla’s chipsets since 2008. It allows system administrators to remotely manage devices using out-of-band communications. But it’s also controversial in the security community since it’s somewhat of a black box.
There is little by way of public documentation. Intel hasn’t released the source code. And, to add insult to injury, it’s also proven vulnerable to exploitation in the past.
The company said that it continued sharing user data with approximately 5,000 developers even after their apps’ access had expired.
The incident is related to a security control that Facebook added to its systems following the Cambridge Analytica scandal of early 2018.
Responding to criticism that it allowed app developers too much access to user information, Facebook added at the time a new mechanism to its API that prevented apps from accessing a user’s data if the user did not use the app for more than 90 days.
However, Facebook said that it recently discovered that in some instances, this safety mechanism failed to activate and allowed some apps to continue accessing user information even past the 90-day cutoff date.
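The gate that failed is conceptually simple. The sketch below is not Facebook’s implementation, just the kind of last-active check the 90-day cutoff implies; the bug described above is equivalent to this function silently returning `True` for some apps.

```python
# Hedged sketch, not Facebook's code: the kind of expiry gate the
# 90-day cutoff implies.
from datetime import datetime, timedelta

CUTOFF = timedelta(days=90)

def app_may_access(last_active: datetime, now: datetime) -> bool:
    """An app keeps data access only while the user was active in it
    within the last 90 days."""
    return now - last_active <= CUTOFF

now = datetime(2020, 7, 1)
allowed = app_may_access(datetime(2020, 5, 1), now)   # 61 days: allowed
expired = app_may_access(datetime(2020, 3, 1), now)   # 122 days: expired
```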
[…]
“From the last several months of data we have available, we currently estimate this issue enabled approximately 5,000 developers to continue receiving [user] information,” Papamiltiadis said.
The company didn’t clarify how many users were impacted and had their data made available to app developers even after they stopped using the apps.
If I told you that my entire computer screen just got taken over by a new app that I’d never installed or asked for — it just magically appeared on my desktop, my taskbar, and preempted my next website launch — you’d probably tell me to run a virus scanner and stay away from shady websites, no?
But the insanely intrusive app I’m talking about isn’t a piece of ransomware. It’s Microsoft’s new Chromium Edge browser, which the company is now force-feeding users via an automatic update to Windows.
Seriously, when I restarted my Windows 10 desktop this week, an app I’d never asked for:
Immediately launched itself
Tried to convince me to migrate away from Chrome, giving me no discernible way to click away or say no
Pinned itself to my desktop and taskbar
Ignored my previous browser preference by asking me — the next time I launched a website — whether I was sure I wanted to use Chrome instead of Microsoft’s oh-so-humble recommendation.
A Windows 10 update forces a full screen @MicrosoftEdge window, which cannot be closed from the taskbar, or CTRL W, or even ALT F4. You must press “get started,” then the X, and even then it pops up a welcome screen. And pins itself to the taskbar. pic.twitter.com/mEhEbqpIc7
Did I mention that, as of this update, you can’t uninstall Edge anymore?
It all immediately made me think: what would the antitrust enforcers of the ‘90s, who punished Microsoft for bundling Internet Explorer with Windows, think about this modern abuse of Microsoft’s platform?
*wakes up and discovers they not only decided to install Edge on my computer without my consent but also pinned it to my taskbar* …no. NO
“We care about your privacy” Microsoft Edge says as it quietly installs on my computer, opens up in the morning, and once more reminds me that Windows 7 sucks and plz update to the other O/S.
But mostly, I’m surprised Microsoft would shoot itself in the foot by stooping so low, using tactics I’ve only ever seen from purveyors of adware, spyware, and ransomware. I installed this copy of Windows with a disk I purchased, by the way. Maybe I’m old-fashioned, but I like to think I still own my desktop and get to decide what I put there.
That’s especially true of owners of Windows 7 and Windows 8, I imagine, who are also receiving unwanted gift copies of the new Edge right now:
If windows 7 isn’t supported then why did my Work machine automatically install Microsoft EDGE last night 😐
— DJ_Uchuu – Silicon Dreams Comin’ 3rd July (@DjUchuu) June 30, 2020
On Sunday morning, local time in New Zealand, Rocket Lab launched its 13th mission. The booster’s first stage performed normally, but just as the second stage neared an altitude of 200km, something went wrong and the vehicle was lost.
In the immediate aftermath of the failure, the company did not provide any additional information about the problem that occurred with the second stage.
“We lost the flight late into the mission,” said Peter Beck, the company’s founder and chief executive, on Twitter. “I am incredibly sorry that we failed to deliver our customers satellites today. Rest assured we will find the issue, correct it and be back on the pad soon.”
The mission, dubbed “Pics Or It Didn’t Happen,” carried five SuperDove satellites for the imaging company Planet, as well as commercial payloads for both Canon Electronics and In-Space Missions.
“The In-Space team is absolutely gutted by this news,” the company said after the loss. Its Faraday-1 spacecraft hosted multiple experiments within a 6U CubeSat. “Two years of hard work from an incredibly committed group of brilliant engineers up in smoke. It really was a very cool little spacecraft.”
Before this weekend’s failure, Rocket Lab had enjoyed an excellent run of success. The company’s first test flight, in May 2017, was lost at an altitude of 224km due to a ground software issue. But from its next flight in January 2018 through June 2020, the company rattled off a string of 11 successful missions and emerged as a major player in the small satellite launch industry. It has built two additional launch pads, one in New Zealand and another in Virginia, US, and taken steps toward reusing its first stage booster.
It seems likely that Rocket Lab will make good on Beck’s promise to address this failure and return to flight soon. Rocket Lab was the first commercial company in a new generation of small satellite rocket developers to reach orbit, and even now remains the only one to do so. Other competitors, including Virgin Orbit, Astra, and Firefly, may reach orbit later this year. But Rocket Lab has plenty of experience to draw upon as it works to identify the underlying problem with its second stage and fix it. There can be little doubt it will.
Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and business manage and police Artificial Intelligence systems’ bias towards making unethical, and potentially very costly and damaging, commercial choices: an ethical eye on AI.
Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.
The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential economic penalty: stakeholders will punish the company if they find that such a strategy has been used. Regulators may levy fines of billions of dollars, pounds or euros, customers may boycott you, or both.
So, in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.
Mathematicians and statisticians from the University of Warwick, Imperial, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and providing a simple formula to estimate its impact. They lay out the full details in a paper titled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1 July 2020.
The four authors of the paper are Nicholas Beale of Sciteb Ltd; Heather Battey of the Department of Mathematics, Imperial College London; Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and Professor Robert MacKay of the Mathematics Institute of the University of Warwick.
Professor Robert MacKay of the Mathematics Institute of the University of Warwick said:
“Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Optimisation can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.
“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process.”
Last Friday, it was reported that Canadian smart glasses startup North was on the verge of being snapped up by Alphabet, Google’s parent company. Today, it’s official.
North announced the acquisition on both Twitter and in an official blog. Details regarding the terms of the sale were scant, though a Globe and Mail scoop from Friday put the number at around $180 million. North’s remaining staff will, however, be staying in Kitchener-Waterloo, Canada and joining a Google team also based there.
“We couldn’t be more thrilled to join Google, and to take an exciting next step towards the future we’ve been focused on for the past eight years,” wrote North co-founders Stephen Lake, Matthew Bailey, and Aaron Grant in the blog.
[…]
Well, it looks like with the acquisition, we’ll never know if a Focals 2.0 would’ve fixed the problems of the original. North’s blog says the company will not only be winding down Focals 1.0, but that the Focals 2.0 will not ship. At the end of the blog, North provides an email for refund requests, and notes that customer support will continue through the end of 2020. And, if Twitter is any indication, refund emails to existing North customers have already begun hitting inboxes.
Note, this article claims that Google was the first company with smart glasses, but I’m pretty sure that Recon Instruments would disagree – another company that was bought up.
I talked about this problem during DORS/CLUC in 2019
The internet’s domain names have become potentially trademarkable following a decision by the US Supreme Court today that Booking.com can in fact be registered with America’s Patent and Trademark Office (PTO) – against officials’ objections.
The near-unanimous decision [PDF] – Justice Stephen Breyer was the sole rebel – went against the PTO’s legal arguments that adding “.com” to a generic term was like adding “company” to a word and so “conveys no additional meaning that would distinguish [one provider’s] services from those of other providers.”
The Supreme Court disagreed; at some length. It agreed with both the district court and the appeals court that “consumers do not in fact perceive the term ‘Booking.com’ that way.” It cited as a key piece of evidence a survey that showed 75 per cent of respondents thought ‘Booking.com’ was a brand name, whereas just 24 per cent believed it was a generic name.
It didn’t help that the PTO hasn’t followed its own argument in the past, with the court noting trademark registration #3,601,346 for Art.com and #2,580,467 for Dating.com. If the decision went against Booking.com, the Supreme Court reasoned, then existing approved trademarks would “be at risk of cancellation.” But it was also scathing in its assessment that “we discern no support for the PTO’s current view in trademark law or policy.”
The same survey that showed 75 per cent of people felt Booking.com was a brand, however, also revealed that only 33 per cent felt “Washingmachine.com” was a brand whereas 61 per cent thought it was generic. And that subjective measurement is likely to prove a major headache for the PTO in deciding on what presumably will now be a rush of .com trademark applications.
Folks running Bitdefender’s Total Security 2020 package should check they have the latest version installed following the disclosure of a remote code execution bug.
Wladimir Palant, cofounder of Adblock-Plus-maker Eyeo, tipped off Bitdefender about the flaw, CVE-2020-8102, after discovering what he called “seemingly small weaknesses” that could be exploited by a hostile website to take control of a computer running Bitdefender’s antivirus package. The bug, privately reported in April, was patched in May.
[…]
It’s important to note that Bitdefender said the bug was within its Chromium-based “secure browser” SafePay, which is supposed to protect online payments from hackers and is part of its Total Security 2020 suite. Meanwhile, Palant said the vulnerability was within a component called Online Protection within that suite, meaning it could be exploited by any website opened in any browser on any computer running Bitdefender’s vulnerable antivirus package.
[…]
When the antivirus suite wanted to flag up suspicious or broken HTTPS certificates, which are sometimes a sign shenanigans may be afoot, Bitdefender’s code generated a custom error page that appeared as though it came from the requested website. It would do this by modifying the server response.
It’s generally preferable that antivirus vendors stay away from encrypted connections as much as possible
There was nothing to stop a web server with a bad certificate from requesting the contents of Bitdefender’s custom error page, though, because as far as your browser is concerned, the error page came from the web server anyway.
Thus, a malicious web server could serve a page with a good certificate, and cause a new window to open with a page from the same domain and server albeit with an invalid certificate. Bitdefender’s code would jump in, and replace the second webpage with a custom error page. The first page with the good certificate could then use XMLHttpRequest to fetch the contents of the error page, which your browser would hand over.
That error page contained the Bitdefender installation’s session tokens, which could be used to send system commands to the security software suite on the user’s PC to execute. Palant’s proof-of-concept exploit worked against a Windows host, allowing a malicious page to install, say, spyware or ransomware on a victim’s computer.
“The URL in the browser’s address bar doesn’t change,” Palant explained. “So as far as the browser is concerned, this error page originated at the web server and there is no reason why other web pages from the same server shouldn’t be able to access it. Whatever security tokens are contained within it, websites can read them out.
The Monarch of Meat announced a campaign that takes advantage of some sloppy sign recognition in Tesla Autopilot’s Traffic Light and Stop Sign Control, specifically in instances where the Tesla mistakes a Burger King sign for a stop sign (maybe a “traffic control” sign?) and stops the car, leaving its occupants in a great position to consume some Whoppers.
The confusion was first noted by a Tesla Model 3 owner who has confusingly sawed the top off his steering wheel, for some reason, and uploaded a video of the car confusing the Burger King sign for a stop sign.
Burger King’s crack marketing team managed to arrange to use the video in this ad, and built a short promotion around it:
Did you see what I was talking about with that steering wheel? I guess the owner just thought it looked Batmobile-cool, or something? It’s also worth noting that the car’s map display seems to have been modified, likely to remove any Tesla branding and obscure the actual location:
The promotion, which Burger King is using the #autopilotwhopper hashtag to promote, was only good for June 23rd, when they’d give you a free Whopper if you met the following conditions:
To qualify for the Promotion, guest must share a picture or video on Twitter, Facebook or Instagram with guest’s smart car outside a BK restaurant using #autopilotwhopper and #freewhopper.
Guests who complete step #3 will receive a direct message, within 24 hours of posting the picture/video, with a unique code for a Free Whopper sandwich (“Coupon”). Limit one Coupon per account.
It seems Burger King is using the phrase “smart car” to refer to any car with some sort of Level 2 semi-autonomous driver-assistance system that can identify signs, but the use of “autopilot” in the hashtag and the original video makes it clear that Teslas are the targeted cars here.
Comcast has agreed to be the first home broadband internet provider to handle secure DNS-over-HTTPS queries for Firefox browser users in the US, Mozilla has announced.
This means the ISP, which has joined Moz’s Trusted Recursive Resolver (TRR) Program, will perform domain-name-to-IP-address lookups for subscribers using Firefox via encrypted HTTPS channels. That prevents network eavesdroppers from snooping on DNS queries or meddling with them to redirect connections to malicious webpages.
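What actually travels over such a channel is just an ordinary DNS query, wrapped per RFC 8484. The sketch below builds a minimal wire-format query and base64url-encodes it into a GET URL; the resolver hostname is illustrative, not Comcast’s actual endpoint.

```python
# Hedged sketch of what a DNS-over-HTTPS (RFC 8484) GET request carries:
# a standard DNS query in wire format, base64url-encoded into the URL.
import base64
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS wire-format query (ID 0, recursion desired, QTYPE A=1)."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def doh_url(resolver: str, name: str) -> str:
    """RFC 8484 GET form: base64url payload with padding stripped."""
    payload = base64.urlsafe_b64encode(dns_query(name)).rstrip(b"=")
    return f"{resolver}?dns={payload.decode('ascii')}"

# Illustrative resolver URL, not a real Comcast endpoint.
url = doh_url("https://doh.example.net/dns-query", "example.com")
```

Because the whole exchange rides inside TLS, an on-path observer sees only an HTTPS connection to the resolver, not which names were looked up.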
Last year Comcast and other broadband giants were fiercely against such safeguards, though it appears Comcast has had a change of heart – presumably when it figured it could offer DNS-over-HTTPS services as well as its plain-text DNS resolvers.
At some point in the near future, Firefox users subscribed to Comcast will use the ISP’s DNS-over-HTTPS resolvers by default, though they can switch to other secure DNS providers or opt out completely.
[…]
Incredibly, DNS-over-HTTPS was heralded as a way to prevent, among others, ISPs from snooping on and analyzing their subscribers’ web activities to target them with adverts tailored to their interests, or sell the information as a package to advertisers and industry analysts. And yet, here’s Comcast providing a DNS-over-HTTPS service for Firefox fans, allowing it to inspect and exploit their incoming queries if it so wishes. Talk about a fox guarding the hen house.
ISPs “have access to a stream of a user’s browsing history,” Marshall Erwin, senior director of trust and security at, er, Mozilla, warned in November. “This is particularly concerning in light of the rollback of the broadband privacy rules, which removed guardrails for how ISPs can use your data. The same ISPs are now fighting to prevent the deployment of DNS-over-HTTPS.”
Mozilla today insisted its new best buddy Comcast is going to play nice and follow the DNS privacy program’s rules.
Russia’s space agency Roscosmos has re-entered the space tourism market and this time will offer one person the chance to spacewalk.
The agency on Thursday announced a new deal with US outfit Space Adventures to take two people to the International Space Station atop a Soyuz rocket. One of the tourists, according to Space Adventures’ announcement, “will have an opportunity to conduct a spacewalk outside the space station, becoming the first private citizen in history to experience open space.”
The spacewalking tourist will be accompanied by a professional Russian cosmonaut.
The two companies have previously launched seven space tourists including Ubuntu daddy Mark Shuttleworth in 2002. Your correspondent interviewed him about the experience in 2005 and he was still clearly awed by the power of the Soyuz, weightlessness and the views from above, to the extent that he said a sub-orbital tourist flight with the likes of Virgin Galactic held little appeal.
The trip will see the pair of tourists spend 14 days in the Russian module of the ISS.
As advertisers pull away from Facebook to protest the social networking giant’s hands-off approach to misinformation and hate speech, the company is instituting a number of stronger policies to woo them back.
In a livestreamed segment of the company’s weekly all-hands meeting, CEO Mark Zuckerberg recapped some of the steps Facebook is already taking, and announced new measures to fight voter suppression and misinformation — although they amount to things that other social media platforms like Twitter have already enacted and enforced in more aggressive ways.
At the heart of the policy changes is an admission that the company will continue to allow politicians and public figures to disseminate hate speech that does, in fact, violate Facebook’s own guidelines — but it will add a label to denote they’re remaining on the platform because of their “newsworthy” nature.
It’s a watered-down version of the more muscular stance that Twitter has taken to limit the ability of its network to amplify hate speech or statements that incite violence.
A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.
We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society — but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.
The problems with this approach are legion. Ultimately, it’s another example of Facebook’s insistence that with hate speech and other types of rhetoric and propaganda, the onus of responsibility is on the user.
Apple has said it has decided not to implement 16 web APIs in its Safari browser’s WebKit engine in part because they pose a privacy threat. Critics of the iGiant, including competitors like Google, see Apple’s stance as a defense against a competitive threat.
These APIs, developed in recent years to allow web developers to have access to capabilities available to native mobile platform coders, have the potential to be abused for device fingerprinting, a privacy-violating technique for constructing a unique identifier out of readable device characteristics that can be used for tracking individuals across websites and can be correlated to follow people across devices.
“WebKit’s first line of defense against fingerprinting is to not implement web features which increase fingerprintability and offer no safe way to protect the user,” explains the WebKit team’s recently updated post on tracking prevention.
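To see why each extra readable API matters, consider a toy sketch (attribute names are illustrative, not any browser’s real output): every characteristic a script can read adds identifying entropy, and hashing the lot yields a stable cross-site identifier.

```python
# Hedged toy sketch of browser fingerprinting: combine readable device
# characteristics into a stable identifier. Attributes are illustrative.
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Stable identifier derived from device-readable characteristics."""
    canonical = json.dumps(attrs, sort_keys=True)  # order-independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two devices identical except for one extra API's output.
device_a = {"screen": "1170x2532", "battery": 0.83, "midi_ports": 2}
device_b = {"screen": "1170x2532", "battery": 0.83, "midi_ports": 0}

fp_a = fingerprint(device_a)
fp_b = fingerprint(device_b)
# One differing attribute (here, a Web MIDI port count) is enough to
# separate otherwise-identical devices.
```

This is why WebKit’s calculus weighs each new API by the fingerprinting surface it exposes, not just the feature it enables.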
[…]
In a message to The Register, Lukasz Olejnik, an independent researcher and consultant, characterized the decision as a win for privacy, noting that research he co-authored in 2015 and subsequently on the privacy risks of the Battery Status API and other browser fingerprinting threats helped shape Apple’s policy.
Concern about abuse of the Battery Status API, which websites and browser-based apps can use to check the battery level of a visitor’s or user’s mobile device, prompted Mozilla to remove support in October 2016. Around the same time, Apple, which had implemented the API in code but never activated it, decided not to ship it.
Google, meanwhile, shipped the Battery Status API in Chrome 45, which debuted on July 10, 2015. Rather than removing it, the web giant in May committed to modifying it by allowing developers to disable the API in their apps and in third-party components.
Apple, trying to control its market? No!
Google engineers coincidentally are among those expressing frustration with Apple for holding the web platform back.
Apple requires that all web browsers on iOS devices use Safari’s WebKit rendering engine, which has made mobile browsers on iOS something of a monoculture: Though users may choose to run Chrome on iOS, it’s essentially Safari under the hood.
Over the past few years, Apple’s leisurely (or cautious) pace of API deployment in Safari has meant that Progressive Web Apps (PWAs) – installable web apps that run offline – haven’t worked properly on iOS devices.
As a result, web developers, particularly those interested in PWA adoption, have accused Apple of trying to hamstring web apps to protect its financial stake in native iOS apps, for which it gets a 30 per cent share of revenue through its App Store rules. Those same rules are now the subject of an EU antitrust inquiry.
[…]
Or as Ben Thompson, tech analyst for Stratechery, put it in a blog post on Monday, “Making the web less useful makes apps more useful, from which Apple can take its share; similarly, it is notable that Apple is expanding its own app install product even as it is kneecapping the industry’s.”
Asked about whether these competitive concerns have substance, Olejnik acknowledged that some people see Apple’s technical decisions in that light.
“That said, some privacy concerns are legitimate,” he said.
And for what it’s worth, the technical barriers to PWAs have been falling.
Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief – from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.
So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down by a factor of more than 100. There are already examples in the real world where people pause or stumble when asked hard questions, but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks, forcing them to operate at their worst-case performance.
One implication is that engineers designing real-time systems that use machine learning will have to pay more attention to worst-case behaviour; another is that when custom chips used to accelerate neural network computations use optimisations that increase the gap between worst-case and average-case outcomes, you’d better pay even more attention.
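The search for sponge inputs can be illustrated without a real DNN. The following is a toy sketch only, not the paper's actual attack (which uses genetic algorithms against real models): here the "model" is a stand-in whose cost grows with the number of sub-word pieces an input splits into, and `VOCAB`, `energy_cost` and `sponge_search` are all invented for illustration.

```python
import random

# Toy stand-in for a black-box model: cost grows with the number of
# sub-word pieces the "tokenizer" splits the input into. Out-of-vocabulary
# words fall back to per-character pieces, as in real sub-word vocabularies.
VOCAB = {"the", "cat", "sat", "on", "mat"}

def energy_cost(words):
    """Proxy for energy/latency: one unit per sub-word piece processed."""
    return sum(1 if w in VOCAB else len(w) for w in words)

def sponge_search(n_words=8, pool=None, iters=200, seed=0):
    """Random-mutation search for an input that maximizes energy_cost."""
    rng = random.Random(seed)
    pool = pool or ["the", "cat", "sat", "zqxjkvw", "mmmmmmmm"]
    best = [rng.choice(pool) for _ in range(n_words)]
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(n_words)] = rng.choice(pool)  # mutate one slot
        if energy_cost(cand) > energy_cost(best):        # keep improvements
            best = cand
    return best

benign = ["the", "cat", "sat", "on", "the", "mat", "the", "cat"]
sponge = sponge_search()  # converges toward maximally expensive inputs
```

The same hill-climbing loop works against any black box that exposes a measurable cost, which is what makes the attack hard to dismiss: the adversary needs only timing or power measurements, not model internals.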
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.
Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.
[…]
GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data. While scores on these downstream tasks are far from state-of-the-art, they suggest that the tasks can benefit from unsupervised techniques, given sufficient (unlabeled) data and compute.
Samples
GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing, as seen in the following select samples.
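The training objective—"simply to predict the next word"—can be shown in miniature. This is a sketch only: GPT-2 is a large transformer trained on 40GB of text, whereas the code below is just a bigram lookup table, but the objective (predict the most likely next word given what came before) is the same idea at its smallest.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count next-word frequencies -- next-word prediction reduced
    to a lookup table instead of a neural network."""
    words = text.split()
    table = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        table[w][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
```

A model like GPT-2 replaces the counting table with learned parameters and conditions on the entire preceding context rather than a single word, which is what lets it sustain coherent paragraphs instead of just likely word pairs.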
In 1969, British physicist Roger Penrose suggested that energy could be generated by lowering an object into a rotating black hole's ergosphere—a region just outside the event horizon, where an object would have to move faster than the speed of light in order to remain still.
Penrose predicted that the object would acquire negative energy in this unusual region of space. By dropping the object and splitting it in two so that one half falls into the black hole while the other is recovered, the recoil from shedding the negative-energy half would leave the recovered half with more energy than the original object—effectively, energy extracted from the black hole's rotation. The scale of the engineering challenge involved is so great, however, that Penrose suggested only a very advanced, perhaps alien, civilisation would be equal to the task.
Two years later, another physicist named Yakov Zel’dovich suggested the theory could be tested with a more practical, earthbound experiment. He proposed that “twisted” light waves, hitting the surface of a metal cylinder rotating at just the right speed, would be reflected with additional energy extracted from the cylinder’s rotation, thanks to a quirk of the rotational Doppler effect.
But Zel’dovich’s idea has remained solely in the realm of theory since 1971 because, for the experiment to work, his proposed metal cylinder would need to rotate at least a billion times a second—another insurmountable challenge for the current limits of human engineering.
Now, researchers from the University of Glasgow’s School of Physics and Astronomy have finally found a way to experimentally demonstrate the effect that Penrose and Zel’dovich proposed by twisting sound instead of light—a much lower frequency source, and thus much more practical to demonstrate in the lab.
[…]
Marion Cromb, a Ph.D. student in the University’s School of Physics and Astronomy, is the paper’s lead author. Marion said: “The linear version of the Doppler effect is familiar to most people as the phenomenon that occurs as the pitch of an ambulance siren appears to rise as it approaches the listener but drops as it heads away. It appears to rise because the sound waves are reaching the listener more frequently as the ambulance nears, then less frequently as it passes.
“The rotational Doppler effect is similar, but the effect is confined to a circular space. The twisted sound waves change their pitch when measured from the point of view of the rotating surface. If the surface rotates fast enough then the sound frequency can do something very strange—it can go from a positive frequency to a negative one, and in doing so steal some energy from the rotation of the surface.”
As the speed of the spinning disc increases during the researchers’ experiment, the pitch of the sound from the speakers drops until it becomes too low to hear. Then, the pitch rises back up again until it reaches its previous level—but louder, with an amplitude up to 30% greater than that of the original sound coming from the speakers.
Marion added: “What we heard during our experiment was extraordinary. What’s happening is that the frequency of the sound waves is being doppler-shifted to zero as the spin speed increases. When the sound starts back up again, it’s because the waves have been shifted from a positive frequency to a negative frequency. Those negative-frequency waves are capable of taking some of the energy from the spinning foam disc, becoming louder in the process—just as Zel’dovich proposed in 1971.”
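The sign change Cromb describes can be written down in one line. A hedged sketch, assuming the simplest form of the rotational Doppler shift for a wave with orbital angular momentum (twist) order l observed from a surface spinning at Ω revolutions per second: f_rot = f − lΩ. The tone frequency and twist order below are illustrative values, not the paper's actual experimental parameters.

```python
def rotational_doppler(f_hz, l, omega_rev_s):
    """Frequency of a twisted wave (orbital angular momentum order l)
    as seen from a surface rotating at omega_rev_s revolutions/second,
    in the simplest form of the rotational Doppler shift."""
    return f_hz - l * omega_rev_s

f_sound = 1000.0  # a 1 kHz twisted sound beam (illustrative)
l = 4             # illustrative twist order

slow = rotational_doppler(f_sound, l, 100)  # still positive: ordinary shift
fast = rotational_doppler(f_sound, l, 300)  # negative frequency: the
                                            # energy-extraction regime
```

The same formula shows why the substitution of sound for light mattered: the higher the wave frequency, the faster the surface must spin to push f_rot below zero—the "billion times a second" that kept Zel'dovich's electromagnetic version out of reach—while for a kilohertz sound beam the threshold is within reach of an ordinary motor.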
Professor Daniele Faccio, also of the University of Glasgow’s School of Physics and Astronomy, is a co-author on the paper. Prof Faccio added: “We’re thrilled to have been able to experimentally verify some extremely odd physics a half-century after the theory was first proposed. It’s strange to think that we’ve been able to confirm a half-century-old theory with cosmic origins here in our lab in the west of Scotland, but we think it will open up a lot of new avenues of scientific exploration. We’re keen to see how we can investigate the effect on different sources such as electromagnetic waves in the near future.”
The research team’s paper, titled “Amplification of waves from a rotating body,” is published in Nature Physics.
A new digital tool built to depixelize photos sounds scary and bad. Another way to remove privacy from the world. But this tool is also being used for a sillier and not terrible purpose: depixelizing old game characters. The results are…never mind, this is also a terrible use of this tool.
“Face Depixelizer” is a tool created by Alex Damian, Sachit Menon, and Denis Malimonov, and it does exactly what you’d expect from a name like that. Users can upload a pixelated photo of a face and the tool spits out what that person might look like, based on algorithms and all that stuff. In the wrong hands, this type of tech can be used to do some bad shit and will make it harder to hide in this world from police and other powerful and dangerous groups.
But it can also be used to create monsters out of old game characters. Look what this thing did to Mario, for example.
These might be strange or even a bit monstrous, but things start getting much worse when you feed the tool images that don’t look like people at all. For example, this is what someone got after uploading an image of a Cacodemon from Doom.
Google, Apple, Facebook, Amazon and a host of other tech giants will have to pay billions of dollars in extra tax after the Supreme Court refused to hear an appeal on a stock-option case.
America’s top court said [PDF] on Monday it will not review a decision by the Ninth Circuit Court of Appeals that stock-based compensation should be considered a US taxable asset.
The case concerns the tax years 2004-2007 and Intel-owned tech company Altera, which provided its employees with the ability to buy company shares at a set price in future – a common practice in the tech industry. But that benefit was not included in an accounting of an Altera subsidiary based in a Cayman Islands tax haven just prior to Intel’s purchase.
The shifting of intangible assets has become a common tax-reducing tactic by large tech companies and saves those companies billions of dollars every year that they would otherwise pay to US tax authorities.
However, the Internal Revenue Service (IRS) insisted that Altera’s stock-option compensation be taxed under US tax rules. Facing a massive tax bill, Altera refused to accept the rule and challenged it in court, arguing that “the amount of money at stake is enormous.”
The company accused the IRS of over-reach and claimed it had not provided sufficient evidence to prove its case. And Altera won with a unanimous decision in tax court.
But the IRS appealed and the Ninth Circuit then found in the IRS’ favor, arguing in its 2-1 decision [PDF] in June 2019 that it was “uncontroversial” that stock options should be treated as accounting costs. It then refused a request for the whole court to rehear the case. So Altera appealed the decision to the Supreme Court.
Big Tech weighs in
Among the companies that urged the Supreme Court to take up the case were Apple, Google and Facebook – all of which now face massive tax bills for having done exactly the same thing.
The tech giants argued that the Ninth Circuit decision threatened to ruin “the hard-won but fragile international consensus on treatment of hundreds of billions of dollars of intercompany payments.” In other words, land them with massive, unexpected tax bills.
Ranged against the tech giants were a clump of law professors who argued that the IRS was right to make stock-option compensation a taxable asset.
It’s hard to know the true impact on those companies, but the bills are expected to run to billions of dollars, possibly tens of billions. In a sign of just how big those companies have become, however, the Supreme Court judgment had no impact on share prices this morning – Wall Street knows quite how much cash these companies are sitting on.
If that news wasn’t bad enough, however, there is a bigger tax issue hovering over Big Tech: the so-called digital tax threatened by the European Union, which is also fed up with companies like Google, Apple and Facebook paying almost no tax in their countries because of creative accounting through subsidiaries.
That digital tax became more likely this month after the US walked away from discussions at the Organisation for Economic Co-operation and Development (OECD) that were focused on developing a global tax agreement for digital companies.
With the OECD approach faltering, the EU has already made it clear that it will introduce its own version of a digital tax that is likely to make tech giants pay much more to countries in which they operate. Those new taxes are expected to kick in at the start of 2021.
Hundreds of thousands of potentially sensitive files from police departments across the United States were leaked online last week. The collection, dubbed “BlueLeaks” and made searchable online, stems from a security breach at a Texas web design and hosting company that maintains a number of state law enforcement data-sharing portals.
The collection — nearly 270 gigabytes in total — is the latest release from Distributed Denial of Secrets (DDoSecrets), an alternative to Wikileaks that publishes caches of previously secret data.
A partial screenshot of the BlueLeaks data cache.
In a post on Twitter, DDoSecrets said the BlueLeaks archive indexes “ten years of data from over 200 police departments, fusion centers and other law enforcement training and support resources,” and that “among the hundreds of thousands of documents are police and FBI reports, bulletins, guides and more.”
Fusion centers are state-owned and operated entities that gather and disseminate law enforcement and public safety information between state, local, tribal and territorial, federal and private sector partners.
KrebsOnSecurity obtained an internal June 20 analysis by the National Fusion Center Association (NFCA), which confirmed the validity of the leaked data. The NFCA alert noted that the dates of the files in the leak actually span nearly 24 years — from August 1996 through June 19, 2020 — and that the documents include names, email addresses, phone numbers, PDF documents, images, and a large number of text, video, CSV and ZIP files.
“Additionally, the data dump contains emails and associated attachments,” the alert reads. “Our initial analysis revealed that some of these files contain highly sensitive information such as ACH routing numbers, international bank account numbers (IBANs), and other financial data as well as personally identifiable information (PII) and images of suspects listed in Requests for Information (RFIs) and other law enforcement and government agency reports.”
[…]
The NFCA said it appears the data published by BlueLeaks was taken after a security breach at Netsential, a Houston-based web development firm.
“Preliminary analysis of the data contained in this leak suggests that Netsential, a web services company used by multiple fusion centers, law enforcement, and other government agencies across the United States, was the source of the compromise,” the NFCA wrote. “Netsential confirmed that this compromise was likely the result of a threat actor who leveraged a compromised Netsential customer user account and the web platform’s upload feature to introduce malicious content, allowing for the exfiltration of other Netsential customer data.”
Machine learning models built for doing business prior to the COVID-19 pandemic will no longer be valid as economies emerge from lockdowns, presenting companies with new challenges in machine learning and enterprise data management, according to Gartner.
The research group has reported that “the extreme disruption in the aftermath of COVID-19… has invalidated many models that are based on historical data.”
Organisations commonly using machine learning for product recommendation engines or next-best-offer, for example, will have to rethink their approach. They need to broaden their machine learning techniques as there is not enough post-COVID-19 data to retrain supervised machine learning models.
Advanced modelling techniques can help
In any case, the ‘new normal’ is still emerging, making the validity of prediction models a challenge, said Rita Sallam, distinguished research vice president at Gartner.
“It’s a lot harder to just say those models based on typical data that happened prior to the COVID-19 outbreak, or even data that happened during the pandemic, will be valid. Essentially what we’re seeing is [a] complete shift in many ways in customer expectations, in their buying patterns. Old processing, products, customer needs and wants, and even business models are being replaced. Organisations have to replace them at a pace that is just unprecedented,” she said.
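One standard way to detect that a pre-pandemic model has gone stale is to compare the distribution of the data it was trained on with the data it now sees. This is a minimal sketch of that idea using a two-sample Kolmogorov–Smirnov statistic; it is an illustration of drift detection in general, not a technique Gartner specifically recommends, and the numbers are invented.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of the two samples (0 = identical distributions,
    1 = completely disjoint)."""
    sa, sb = sorted(a), sorted(b)
    def cdf(sample, x):
        return bisect.bisect_right(sample, x) / len(sample)
    return max(abs(cdf(sa, x) - cdf(sb, x)) for x in sorted(set(a) | set(b)))

# Invented numbers: a feature a recommendation model was trained on
# (say, average basket size) before and after the disruption.
pre_lockdown  = [10, 11, 12, 11, 10, 12, 11]
post_lockdown = [25, 30, 28, 27, 31, 26, 29]

drift = ks_statistic(pre_lockdown, post_lockdown)
```

When the statistic jumps toward 1, predictions from a model trained on the old distribution should be treated with suspicion—which is exactly the situation Sallam describes, where there is not yet enough post-COVID-19 data to simply retrain.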
China on Tuesday launched the final satellite needed to complete its global navigation system, which will help wean the country off U.S. technology in this area.
The network known as Beidou, which has been in the works for over two decades, is a significant step for China’s space and technology ambitions.
Beidou is a rival to the U.S. government-owned Global Positioning System (GPS), which is widely used across the world.
Experts previously told CNBC that Beidou will help China’s military stay online in case of a conflict with the U.S. But the launch is also part of Beijing’s push to increase its technological influence globally.
Names, addresses and mobile numbers have been sold for use in fraud conducted over WhatsApp. Most of these numbers come from call centres, mainly those selling energy contracts. The fresher a lead is, the more it is worth: between 25 cents and 2 euros. The money is usually transferred through mules, who keep a percentage of the proceeds.