A member of the Facebook Wink Users Group discovered that after selling his Nest cam, he was still able to access images from his old camera—except it wasn’t a feed of his property. Instead, he was tapping into the feed of the new owner, via his Wink account. As the original owner, he had connected the Nest Cam to his Wink smart-home hub, and somehow, even after he reset it, the connection continued.
We decided to test this ourselves and found that, as it happened for the person on Facebook, images from our decommissioned Nest Cam Indoor were still viewable via a previously linked Wink hub account—although instead of a video stream, it was a series of still images snapped every several seconds.
Here’s the process we used to confirm it:
Our Nest cam had recently been signed up to Nest Aware, but the subscription was canceled in the past week. That Nest account was also linked to a Wink Hub 2. Per Nest’s instructions, we confirmed that our Aware subscription was not active, after which we removed our Nest cam from our Nest account—this is Nest’s guidance for a “factory reset” of this particular camera.
Nest’s instructions for doing a factory reset on the Nest Cam indicate that there is no factory reset button, a common feature on smart-home devices.
After that, we were unable to access the live stream with either the mobile Nest app or the desktop Nest app, as expected. We also couldn’t access the camera using the Wink app, because the camera was not online. We then created a new Nest account on a new (Android) device that had a new data connection. We followed the steps for adding the Nest Cam Indoor to that new Nest account, and we were able to view a live stream successfully through the Nest mobile app. However, going back to our Wink app, we were also able to view a stream of still images from the Nest cam, despite its being associated with a new Nest account.
In simpler terms: If you buy and set up a used Nest indoor camera that has been paired with a Wink hub, the previous owner may have unfettered access to images from that camera. And we currently don’t know of any cure for this problem.
Even as Homeland Security officials have attempted to downplay the impact of a security intrusion that reached deep into the network of a federal surveillance contractor, secret documents, handbooks, and slides concerning surveillance technology deployed along U.S. borders are being widely and openly shared online.
A terabyte of torrents seeded by Distributed Denial of Secrets (DDOS)—journalists dispersing records that governments and corporations would rather nobody read—is, as of this writing, being downloaded daily. As of this week, that includes more than 400 GB of data stolen by an unknown actor from Perceptics, a discreet contractor based in Knoxville, Tennessee, that works for Customs and Border Protection (CBP) and is, regardless of whatever U.S. officials say, right now the epicenter of a major U.S. government data breach.
The files include PowerPoint presentations, manuals, marketing materials, budgets, equipment lists, schematics, passwords, and other documents detailing Perceptics’ work for CBP and other government agencies for nearly a decade. Tens of thousands of surveillance photographs taken of travelers and their vehicles at the U.S. border are among the first tranches of data to be released. Reporters are digging through the dump and already expanding our understanding of the enormous surveillance apparatus that is being erected on our border.
In a statement last week, CBP insisted that none of the image data had been identified online, even as one headline declared, “Here Are Images of Drivers Hacked From a U.S. Border Protection Contractor.”
“The breach covers a huge amount of data which has, until now, been protected by dozens of Non-Disclosure Agreements and the (b)(4) trade-secrets exemption which Perceptics has demanded DHS apply to all Perceptics information,” DDOS team member Emma Best, who often reports for the Freedom of Information site MuckRock, told Gizmodo.
Despite the government’s attempt to downplay the breach, the Perceptics files, she said, “include schematics, plans, and reports for DHS, the DEA, and the Pentagon as well as foreign clients.”
While the files can be viewed online, according to Best, DDOS has experienced nearly a 50 percent spike in traffic from users who’ve opted to download the entire dataset.
“We’re making these files available for public review because they provide an unprecedented and intimate look at the mass surveillance of legal travel, as well as more local surveillance of turnpike and secure facilities,” Best said. “Most importantly they provide a glimpse of how the government and these companies protect our information—or, in some cases, how they fail to.”
Neither CBP nor Perceptics immediately responded to a request for comment.
Millions of PCs made by Dell and other OEMs are vulnerable to a flaw stemming from a component in pre-installed SupportAssist software. The flaw could enable a remote attacker to completely take over affected devices.
The high-severity vulnerability (CVE-2019-12280) stems from a component in SupportAssist, a proactive monitoring software pre-installed on PCs with automatic failure detection and notifications for Dell devices. That component is made by a company called PC-Doctor, which develops hardware-diagnostic software for various PC and laptop original equipment manufacturers (OEMs).
“According to Dell’s website, SupportAssist is preinstalled on most of Dell devices running Windows, which means that as long as the software is not patched, this vulnerability probably affects many Dell users,” Peleg Hadar, security researcher with SafeBreach Labs – who discovered the flaw – said in a Friday analysis.
You open your browser to look at the Web. Do you know who is looking back at you?
Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.
This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.
Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.
It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.
My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.
Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.
And that’s not the half of it.
Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.
Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)
Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.
At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at on one site ends up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.
[…]
Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.
It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.
I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)
After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”
When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history.
There are ways to defang Chrome, though doing so is much more complicated than just using “Incognito Mode.” But it’s much easier to switch to a browser not owned by an advertising company.
Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.
What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.
[…]
And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.
Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.
If you care about privacy, let’s hope for another David and Goliath outcome.
In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.
Over the past five years, the U.S. Defense Advanced Research Projects Agency (Darpa) has invested US$77 million to develop devices intended to restore the memory-generation capacity of people with traumatic brain injuries. Last year two groups conducting tests on humans published compelling results.
The Mayo Clinic device was created by Michael Kahana, a professor of psychology at the University of Pennsylvania, and the medical technology company Medtronic Plc. Connected to the left temporal cortex, it monitors the brain’s electrical activity and forecasts whether a lasting memory will be created. “Just like meteorologists predict the weather by putting sensors in the environment that measure humidity and wind speed and temperature, we put sensors in the brain and measure electrical signals,” Kahana says. If brain activity is suboptimal, the device provides a small zap, undetectable to the patient, to strengthen the signal and increase the chance of memory formation. In two separate studies, researchers found the prototype consistently boosted memory 15 per cent to 18 per cent.
The second group performing human testing, a team from Wake Forest Baptist Medical Center in Winston-Salem, N.C., aided by colleagues at the University of Southern California, has a more finely tuned method. In a study published last year, their patients showed memory retention improvement of as much as 37 per cent. “We’re looking at questions like, ‘Where are my keys? Where did I park the car? Have I taken my pills?’ ” says Robert Hampson, lead author of the 2018 study.
To form memories, several neurons fire in a highly specific way, transmitting a kind of code. “The code is different for unique memories, and unique individuals,” Hampson says. By surveying a few dozen neurons in the hippocampus, the brain area responsible for memory formation, his team learned to identify patterns indicating correct and incorrect memory formation for each patient and to supply accurate codes when the brain faltered.
In presenting patients with hundreds of pictures, the group could even recognize certain neural firing patterns as particular memories. “We’re able to say, for example, ‘That’s the code for the yellow house with the car in front of it,’ ” says Theodore Berger, a professor of bioengineering at the University of Southern California who helped develop mathematical models for Hampson’s team.
Both groups have tested their devices only on epileptic patients with electrodes already implanted in their brains to monitor seizures; each implant requires clunky external hardware that won’t fit in somebody’s skull. The next steps will be building smaller implants and getting approval from the U.S. Food and Drug Administration to bring the devices to market. A startup called Nia Therapeutics Inc. is already working to commercialize Kahana’s technology.
Justin Sanchez, who just stepped down as director of Darpa’s biological technologies office, says veterans will be the first to use the prosthetics. “We have hundreds of thousands of military personnel with traumatic brain injuries,” he says. The next group will likely be stroke and Alzheimer’s patients. Eventually, perhaps, the general public will have access—though there’s a serious obstacle to mass adoption. “I don’t think any of us are going to be signing up for voluntary brain surgery anytime soon,” Sanchez says. “Only when these technologies become less invasive, or noninvasive, will they become widespread.”
Graduate student Dan Salmon has released online seven million Venmo transfers, scraped from the social payment biz in recent months, to call attention to the privacy risks of public transaction data.
Venmo, for the uninitiated, is an app that allows friends to pay each other money for stuff. El Reg‘s Bay Area vultures primarily use it for settling restaurant and bar bills that we have no hope of expensing; one person pays on their personal credit card, and their pals transfer their share via Venmo. It makes picking up the check a lot easier.
Because it’s the 2010s, by default, Venmo makes those transactions public along with attached messages and emojis, sorta like Twitter but for payments, allowing people to pry into strangers’ spending and interactions. Who went out with whom for drinks, who owed someone a sizable debt, who went on vacation, and so on.
“I am releasing this dataset in order to bring attention to Venmo users that all of this data is publicly available for anyone to grab without even an API key,” said Salmon in a post to GitHub. “There is some very valuable data here for any attacker conducting [open-source intelligence] research.”
[…]
Despite past criticism from privacy advocates and a settlement with the US Federal Trade Commission, Venmo has kept person-to-person purchases public by default.
When The Register asked about transaction privacy last year, after a developer created a bot that tweeted Venmo purchases mentioning drugs, a company spokesperson said, “Like on other social networks, Venmo users can choose what they want to share on the Venmo public feed. There are a number of different settings that users can customize when it comes to sharing payments on Venmo.”
The current message from the company is not much different: “Venmo was designed for sharing experiences with your friends in today’s social world, and the newsfeed has always been a big part of this,” a Venmo spokesperson told The Register in an email. “Our users trust us with their money and personal information, and we take this responsibility very seriously.”
“I think Venmo is resisting calls to make their data private because it would go against the entire pitch of the app,” said Salmon. “Venmo is designed to be a ‘social’ app and the more open and social you make things, the more you open yourself to problems.”
Venmo’s privacy policy details all the ways in which customer data is not private.
Spanish renewable energy giant and offshore wind energy leader Siemens Gamesa Renewable Energy last week inaugurated operations of its electrothermal energy storage system which can store up to 130 megawatt-hours of electricity for a week in volcanic rock.
[…]
The heat storage facility consists of around 1,000 tonnes of volcanic rock, which serves as the storage medium. Incoming electrical energy is converted into hot air by means of a resistance heater and a blower, and that hot air in turn heats the rock to 750°C (1,382°F). When demand requires the stored energy, ETES uses a steam turbine to re-electrify the stored heat and feed it back into the grid.
The new ETES facility in Hamburg-Altenwerder can store up to 130 MWh of thermal energy for a week, and storage capacity remains constant throughout the charging cycles.
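As a rough back-of-the-envelope check on that figure (the specific heat of roughly 0.84 kJ/(kg·K) for basaltic rock and a usable temperature swing of about 600 K are assumptions on our part, not Siemens Gamesa numbers):

$$ Q = m\,c\,\Delta T \approx 10^{6}\,\mathrm{kg} \times 0.84\,\tfrac{\mathrm{kJ}}{\mathrm{kg\,K}} \times 600\,\mathrm{K} \approx 5\times10^{8}\,\mathrm{kJ} \approx 140\,\mathrm{MWh} $$

That lands in the same ballpark as the quoted 130 MWh of thermal storage; the electricity recovered through the steam turbine will naturally be less, since re-electrifying stored heat is far from lossless.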
Google Calendar was down for users around the world for nearly three hours earlier today. Calendar users trying to access the service were met with a 404 error message through a browser from around 10AM ET until around 12:40PM ET. Google’s Calendar service dashboard now reveals that issues should be resolved for everyone within the next hour.
“We expect to resolve the problem affecting a majority of users of Google Calendar at 6/18/19, 1:40 PM,” the message reads. “Please note that this time frame is an estimate and may change.” Google Calendar appears to have returned for most users, though. Other Google services such as Gmail and Google Maps appeared to be unaffected during the calendar outage, although Hangouts Meet was reportedly experiencing some difficulties.
Google Calendar is currently experiencing a service disruption. Please stay tuned for updates or follow here: https://t.co/2SGW3X1cQn
Google Calendar’s issues come in the same month as another massive Google outage which saw YouTube, Gmail, and Snapchat taken offline because of problems with the company’s overall Cloud service. At the time, Google blamed “high levels of network congestion in the eastern USA” for the issues.
The outage also came just over an hour after Google’s G Suite Twitter account sent out a tweet promoting Google Calendar’s ability to make scheduling simpler.
However, I recently met other open source developers who make a living from donations, and they helped widen my perspective. At Amsterdam.js, I heard Henry Zhu speak about sustainability in the Babel project and beyond, and it was a pretty dire picture. Later, over breakfast, Henry and I had a deeper conversation on this topic. In Amsterdam I also met up with Titus, who maintains the Unified project full-time. Meeting with these people confirmed my belief in the donation model for sustainability. It works. But what really stood out to me was the question: is it fair?
I decided to collect data from OpenCollective and GitHub, and take a more scientific sample of the situation. The results I found were shocking: there were two clearly sustainable open source projects, but the majority (more than 80%) of projects that we usually consider sustainable are actually receiving income below industry standards or even below the poverty threshold.
What the data says
I picked popular open source projects from OpenCollective, and selected the yearly income data from each. Then I looked up their GitHub repositories, to measure the count of stars, and how many “full-time” contributors they have had in the past 12 months. Sometimes I also looked up the Patreon pages for those few maintainers that had one, and added that data to the yearly income for the project. For instance, it is obvious that Evan You gets money on Patreon to work on Vue.js. These data points allowed me to measure: project popularity (a proportional indicator of the number of users), yearly revenue for the whole team, and team size.
[…]
Those that work full-time sometimes complement their income with savings or by living in a country with lower costs of living, or both (Sindre Sorhus).
Then, based on the latest StackOverflow developer survey, we know that the low end of developer salaries is around $40k, while the high end of developer salaries is above $100k. That range depicts the industry standard for developers, given their status as knowledge workers, many of whom live in OECD countries. This allowed me to classify the results into four categories:
BLUE: 6-figure salary
GREEN: 5-figure salary within industry standards
ORANGE: 5-figure salary below our industry standards
RED: salary below the poverty threshold
The first chart, below, shows team size and “price” for each GitHub star.
More than 50% of projects are red: they cannot sustain their maintainers above the poverty line. 31% of the projects are orange, consisting of developers willing to work for a salary that would be considered unacceptable in our industry. 12% are green, and only 3% are blue: Webpack and Vue.js. Income per GitHub star is important: sustainable projects generally have above $2/star. The median value, however, is $1.22/star. Team size is also important for sustainability: the smaller the team, the more likely it can sustain its maintainers.
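To make the binning concrete, here is a minimal sketch of the kind of classification described above. It is not the author’s actual script: the $40k/$100k industry range comes from the survey figures cited earlier, while the poverty-line value (around $12.5k per year for a single person in the US) and the example numbers are assumptions for illustration.

```python
# Rough sketch of the color-band classification, not the author's actual script.
# Industry thresholds come from the article; the poverty line is an assumed figure.
POVERTY_LINE = 12_500    # assumed US individual poverty threshold, USD/year
INDUSTRY_LOW = 40_000    # low end of developer salaries (StackOverflow survey)
INDUSTRY_HIGH = 100_000  # high end of developer salaries

def classify(yearly_revenue: float, full_time_maintainers: int) -> str:
    """Map a project's yearly donation income to the color bands used in the charts."""
    per_maintainer = yearly_revenue / max(full_time_maintainers, 1)
    if per_maintainer >= INDUSTRY_HIGH:
        return "BLUE"    # 6-figure salary
    if per_maintainer >= INDUSTRY_LOW:
        return "GREEN"   # 5-figure salary within industry standards
    if per_maintainer >= POVERTY_LINE:
        return "ORANGE"  # 5-figure salary below industry standards
    return "RED"         # below the poverty threshold

def dollars_per_star(yearly_revenue: float, github_stars: int) -> float:
    """The $/star metric from the first chart; sustainable projects sit above ~$2/star."""
    return yearly_revenue / max(github_stars, 1)

# Hypothetical project, not a real data point:
print(classify(35_000, 1))                          # ORANGE
print(round(dollars_per_star(35_000, 20_000), 2))   # 1.75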
The median donation per year is $217, which is substantial when understood on an individual level, but in reality includes sponsorship from companies that are doing this also for their own marketing purposes.
The next chart shows how revenue scales with popularity.
You can browse the data yourself by accessing this Dat archive with a LibreOffice Calc spreadsheet:
The total amount of money being put into open source is not enough for all the maintainers. If we add up all of the yearly revenue from those projects in this data set, it’s $2.5 million. The median salary is approximately $9k, which is below the poverty line. If that money were split up evenly, each maintainer would get roughly $22k, which is still below industry standards.
The core problem is not that open source projects are not sharing the money received. The problem is that, in total numbers, open source is not getting enough money. $2.5 million is not enough. To put this number into perspective, startups typically get much more than that.
Tidelift has received $40 million in funding, to “help open source creators and maintainers get fairly compensated for their work” (quote). They have a team of 27 people, some of them ex-employees from large companies (such as Google and GitHub). They probably don’t receive the lower tier of salaries. Yet, many of the open source projects they showcase on their website are below the poverty line in terms of income from donations.
[…]
GitHub was bought by Microsoft for $7.5 billion. To make that quantity easier to grok, the amount of money Microsoft paid to acquire GitHub – the company – is more than 3000x what the open source community is getting yearly. In other words, if the open source community saved up every penny of the money they ever received, after a couple thousand years they could perhaps have enough money to buy GitHub jointly.
[…]
If Microsoft GitHub is serious about helping fund open source, they should put their money where their mouth is: donate at least $1 billion to open source projects. Even a mere $1.5 million per year would be enough to make all the projects in this study become green. The Matching Fund in GitHub Sponsors is not enough: it gives a maintainer at most $5k in a year, which is not sufficient to raise a maintainer from the poverty threshold up to industry standard.
The man heading up any potentially US government antitrust probes into tech giants like Apple and Google used to work for… Apple and Google.
In the revolving-door world that is Washington DC, that conflict may not seem like much but one person isn’t having it: Senator Elizabeth Warren (D-MA) this week sent Makan Delrahim a snotagram in which she took issue with him overseeing tech antitrust efforts.
“I am writing to urge you to recuse yourself from the Department of Justice’s (DOJ) reported antitrust investigations into Google and Apple,” she wrote. “Although you are the chief antitrust attorney in the DoJ, your prior work lobbying the federal government on behalf of these and other companies in antitrust matters compromises your ability to manage or advise on this investigation without real or perceived conflicts of interest.”
Warren then outlines precisely what she means by conflict of interests: “In 2007, Google hired you to lobby federal antitrust officials on behalf of the company’s proposed acquisition of online advertising company DoubleClick, a $3.1 billion merger that the federal government eventually signed off on… You reported an estimated $100,000 in income from Google in 2007.”
It’s not just Google either. “In addition to the investigation into Google, the DoJ will also have jurisdiction over Apple. In both 2006 and 2007, Apple hired you to lobby the federal government on its behalf on patent reform issues,” Warren continues.
She notes: “Federal ethics law requires that individuals recuse themselves from any ‘particular matter involving specific parties’ if ‘the circumstances would cause a reasonable person with knowledge of the relevant facts to question his impartiality in the matter.’ Given your extensive and lucrative previous work lobbying the federal government on behalf of Google and Apple… any reasonable person would surely question your impartiality in antitrust matters…”
This is fine
Delrahim has also done work for a range of other companies including Anthem, Pfizer, Qualcomm, and Caesars, but it’s the fact that he has specific knowledge of, and connections with, the very highest levels of tech giants while being in charge of one of the most anticipated antitrust investigations of the past 30 years that has got people concerned.
This is ridiculous, of course, because Delrahim is a professional and works for whoever hires him. It’s not as if he would do something completely inappropriate like give a speech outside the United States in which he walks through exactly how he would carry out an antitrust investigation into tech giants and the holes that would exist in such an investigation, thereby giving them a clear blueprint to work against.
He definitely did not do that. What he actually did was talk about how it was possible to investigate tech giants, despite some claiming it wasn’t – which is, you’ll understand, quite the opposite.
“The Antitrust Division does not take a myopic view of competition,” Delrahim said during a speech in Israel this week. “Many recent calls for antitrust reform, or more radical change, are premised on the incorrect notion that antitrust policy is only concerned with keeping prices low. It is well-settled, however, that competition has price and non-price dimensions.”
Instead, he noted: “Diminished quality is also a type of harm to competition… As an example, privacy can be an important dimension of quality. By protecting competition, we can have an impact on privacy and data protection.”
So that’s diminished quality and privacy as lines of attack. Anything else, Makan?
“Generally speaking, an exclusivity agreement is an agreement in which a firm requires its customers to buy exclusively from it, or its suppliers to sell exclusively to it. There are variations of this restraint, such as requirements contracts or volume discounts,” he mused at the Antitrust New Frontiers Conference in Tel Aviv.
So it looks as though he is ignoring most of what is making this antitrust predatory as he’s mainly looking at price, then a bit at quality and privacy. Except he’s not looking at quality and privacy. Or leverage. Or the waterbed effect. Or undercutting. Or product copying. Or vertical integration. Or aggression.
In an interview this week with CNN, Google CEO Sundar Pichai attempted to turn antitrust questions around by pointing to what he says is the silver lining of size: Big beats China. In the face of an intensifying push for antitrust action, the argument has been called tech’s version of “too big to fail.”
“Scale does offer many benefits, it’s important to understand that,” Google CEO Sundar Pichai said. “As a company, we sometimes invest five, ten years ahead without necessarily worrying about short term profits. If you think about how technology leadership contributes to leadership on a global economic scale. Big companies are what are investing in AI the most. There are many benefits to taking a long term view which big companies are able to do.”
Pichai, who did allow that scrutiny and competition were ultimately good things, made points that echoed arguments from Facebook CEO Mark Zuckerberg, who put it a lot more frankly.
“I think you have this question from a policy perspective, which is, ‘Do we want American companies to be exporting across the world?’” Zuckerberg said last year. “I think that the alternative, frankly, is going to be the Chinese companies.”
Pichai never outright said the word “China,” but he didn’t have to, given China’s rising tech industry and increasingly tense relationship with the United States.
“There are many countries around the world which aspire to be the next Silicon Valley. And they are supporting their companies, too,” Pichai said to CNN. “So we have to balance both. This doesn’t mean you don’t scrutinize large companies. But you have to balance it with the fact that you want big, successful companies as well.”
This has been one of Silicon Valley’s safest fallback arguments since antitrust sentiment began gaining steam in the United States. But the history of American industry offers a serious counterweight.
Columbia Law School professor Tim Wu spent much of 2018 outlining the case for antitrust action. He wrote a book on the subject, The Curse of Bigness: Antitrust in the New Gilded Age, and appeared all over media to make his argument. In an op-ed for the New York Times, Wu called back to the heated Japanese-American tech competition of the 1980s.
IBM faced an unprecedented international challenge in the mainframe market from Japan’s NEC while Sony, Panasonic, and Toshiba made giant leaps forward. The companies had the strong support of the Japanese government.
Had the United States followed the Zuckerberg logic, we would have protected and promoted IBM, AT&T and other American tech giants — the national champions of the 1970s. Instead, the federal government accused the main American tech firms of throttling competition. IBM was subjected to a devastating, 13-year-long antitrust investigation and trial, and the Justice Department broke AT&T into eight pieces in 1984. And indeed, the effect was to weaken some of America’s most powerful tech firms at a key moment of competition with a foreign power.
But something else happened as well. With IBM and AT&T under constant scrutiny, a whole series of industries and companies were born without fear of being squashed by a monopoly. The American software industry, freed from IBM, came to life, yielding companies like Microsoft, Sun and Lotus. Personal computers from Apple and other companies became popular, and after the breakup of AT&T, companies like CompuServe and America Online rushed into online networking, eventually yielding what we now call the “internet economy.”
Silicon Valley’s argument, however, does resonate. The 1980s are not the 2010s, and the relationship between China and the U.S. today is significantly colder and even more complex than the one between Japan and the U.S. three decades ago.
American politicians have echoed some of big tech’s concerns about Chinese leadership.
I’d agree with Wu – the China argument is a fear trap. Antitrust history – in the tech, oil and telephony industries, among others – has shown that when titans fall, many smaller, agile and much more innovative companies spring up to take their place, fueling employment gains, exports and better lifestyles for all of us.
Phantom Brigade is a hybrid turn-based & real-time tactical RPG, focusing on in-depth customization and player-driven stories. As the last surviving squad of mech pilots, you must capture enemy equipment and facilities to level the playing field. Outnumbered and out-gunned, lead The Brigade through a desperate campaign to retake their war-torn homeland.
According to new research, Antlia 2’s current position is consistent with a collision with the Milky Way hundreds of millions of years ago that could have produced the perturbations we see today. The paper has been submitted for publication and is undergoing peer review.
Antlia 2 was a bit of a surprise when it showed up in the second Gaia mission data release last year. It’s really close to the Milky Way – one of our satellite galaxies – and absolutely enormous, about the size of the Large Magellanic Cloud.
But it’s incredibly diffuse and faint, and hidden from view by the galactic disc, so it managed to evade detection.
That data release also showed in greater detail ripples in the Milky Way’s disc. But astronomers had known about perturbations in that region of the disc for several years by that point, even if the data wasn’t as clear as that provided by Gaia.
It was based on this earlier information that, in 2009, astrophysicist Sukanya Chakrabarti of the Rochester Institute of Technology and colleagues predicted the existence of a dwarf galaxy dominated by dark matter in pretty much the exact location Antlia 2 was found nearly a decade later.
Using the new Gaia data, the team calculated Antlia 2’s past trajectory, and ran a series of simulations. These reproduced not just the dwarf galaxy’s current position but also the ripples in the Milky Way’s disc, by way of a collision less than a billion years ago.
Simulation of the collision: The gas distribution is on the left, stars on the right. (RIT)
Previously, a different team of researchers had attributed these perturbations to an interaction with the Sagittarius Dwarf Spheroidal Galaxy, another of the Milky Way’s satellites.
Chakrabarti and her team also ran simulations of this scenario, and found that the Sagittarius galaxy’s gravity probably isn’t strong enough to produce the effects observed by Gaia.
“Thus,” the researchers wrote in their paper, “we argue that Antlia 2 is the likely driver of the observed large perturbations in the outer gas disk of the Galaxy.”
A bug impacting the editors Vim and Neovim could allow trojan code to escape sandbox mitigations.
A high-severity bug impacting two popular command-line text editing applications, Vim and Neovim, allows remote attackers to execute arbitrary OS commands. Security researcher Armin Razmjou warned that exploiting the bug is as easy as tricking a target into clicking on a specially crafted text file in either editor.
Razmjou’s PoC is able to bypass modeline mitigations, which execute value expressions in a sandbox. That’s to prevent somebody from creating a trojan horse text file in modelines, the researcher said.
“However, the :source! command (with the bang [!] modifier) can be used to bypass the sandbox. It reads and executes commands from a given file as if typed manually, running them after the sandbox has been left,” according to the PoC report.
“Beyond patching, it’s recommended to disable modelines in the vimrc (set nomodeline), to use the securemodelines plugin, or to disable modelineexpr (since patch 8.1.1366, Vim-only) to disallow expressions in modelines,” the researcher said.
First off, you can’t click in vi, but OK. Second, the whole idea is that you can run commands from vi. So basically he is calling a functionality a flaw.
To see exactly how inscrutable privacy policies have become, I analyzed the length and readability of the policies from nearly 150 popular websites and apps. Facebook’s privacy policy, for example, takes around 18 minutes to read in its entirety – slightly above average for the policies I tested.
The comparison is between websites, with a focus on Facebook and Google, but the main takeaway, I think, is that almost all privacy policies are complex, because they’re not there for the users.
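For readers curious how this sort of measurement works in practice, here is a minimal sketch. It is not the newsroom’s methodology; the 250-words-per-minute reading speed and the stand-in text are assumptions for illustration.

```python
# Minimal sketch of estimating how long a privacy policy takes to read.
# The 250 words-per-minute reading speed is an assumed figure, not the article's method.
import re

READING_SPEED_WPM = 250

def readability_stats(policy_text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", policy_text)
    sentences = [s for s in re.split(r"[.!?]+", policy_text) if s.strip()]
    return {
        "word_count": len(words),
        "minutes_to_read": round(len(words) / READING_SPEED_WPM, 1),
        "avg_words_per_sentence": round(len(words) / max(len(sentences), 1), 1),
    }

# Hypothetical text standing in for a real policy: ~4,500 words reads in ~18 minutes.
print(readability_stats("Your privacy is important to us. " * 750))
```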
A novel magnet half the size of a cardboard toilet tissue roll usurped the title of “world’s strongest magnetic field” from the metal titan that had held it for two decades at the Florida State University-headquartered National High Magnetic Field Laboratory.
And its makers say we ain’t seen nothing yet: By packing an exceptionally high-field magnet into a coil you could pack in a purse, MagLab scientists and engineers have shown a way to build and use electromagnets that are stronger, smaller and more versatile than ever before.
Their work is outlined in an article published today in the journal Nature.
“We are really opening a new door,” said MagLab engineer Seungyong Hahn, the mastermind behind the new magnet and an associate professor at the FAMU-FSU College of Engineering. “This technology has a very good potential to entirely change the horizons of high-field applications because of its compact nature.”
[…]
Both the 45-T magnet and the 45.5-T test magnet are built in part with superconductors, a class of conductors boasting special properties, including the ability to carry electricity with perfect efficiency.
The superconductors used in the 45-T are niobium-based alloys, which have been around for decades. But in the 45.5-T proof-of-principle magnet, Hahn’s team used a newer compound called REBCO (rare earth barium copper oxide) with many advantages over conventional superconductors.
Notably, REBCO can carry more than twice as much current as a same-sized section of niobium-based superconductor. This current density is crucial: After all, the electricity running through an electromagnet generates its field, so the more you can cram in, the stronger the field.
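As a simplified illustration of that scaling (an idealized solenoid rather than the actual layered REBCO coil, so this is only a sketch), the field at the center of a winding grows in direct proportion to the current flowing through each unit length of coil:

$$ B = \mu_0 \, n \, I $$

where $n$ is the number of turns per meter and $I$ is the current in each turn. A conductor that carries twice the current in the same cross-section therefore roughly doubles the field achievable from a coil of the same size.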
Also critical was the specific REBCO product used—paper-thin, tape-shaped wires manufactured by SuperPower Inc.
MagLab Chief Materials Scientist David Larbalestier, who is also a professor at the FAMU-FSU College of Engineering, saw the product’s promise to pack more power into a potential world-record magnet, and encouraged Hahn to give it a go.
The other key ingredient was not something they put in, but rather something they left out: insulation.
Today’s electromagnets contain insulation between conducting layers, which directs the current along the most efficient path. But it also adds weight and bulk.
Hahn’s innovation: A superconducting magnet without insulation. In addition to yielding a sleeker instrument, this design protects the magnet from a malfunction known as a quench. Quenches can occur when damage or imperfections in the conductor block the current from its designated path, causing the material to heat up and lose its superconducting properties. But if there is no insulation, that current simply follows a different path, averting a quench.
“The fact that the turns of the coil are not insulated from each other means that they can share current very easily and effectively in order to bypass any of these obstacles,” explained Larbalestier, corresponding author on the Nature paper.
There’s another slimming aspect of Hahn’s design that relates to quenches: Superconducting wires and tapes must incorporate some copper to help dissipate heat from potential hot spots. His “no-insulation” coil, featuring tapes a mere 0.043-mm thick, requires much less copper than do conventional magnets.
Britain’s Home Secretary Sajid Javid told BBC Radio today that he has signed the extradition order for Julian Assange, paving the way for the WikiLeaks founder to be sent to the U.S. to face charges of computer hacking and espionage.
“There’s an extradition request from the U.S. that is before the courts tomorrow, but yesterday I signed the extradition order, certified it, and that will be going in front of the courts tomorrow,” Javid said according to Australia’s public broadcaster, the ABC.
Assange is scheduled to appear in a UK court on Friday, though it’s not clear whether he’ll appear by video link or in person.
“It’s a decision ultimately for the courts but there is a very important part of it for the Home Secretary and I want to see justice done at all times, and we’ve got a legitimate extradition request so I’ve signed it, but the final decision is now with the courts,” Javid continued.
Curiously, Home Secretary Javid signed the extradition paperwork despite not being on the best terms with the U.S. government right now. Javid wasn’t invited to attend formal ceremonies when President Donald Trump recently visited the UK and some believe it’s because Javid criticized Trump’s treatment of Muslims in 2017 as well as the American president’s retweets of the far right group Britain First. Javid has a Muslim background, though he insists he doesn’t know why he wasn’t invited to the recent U.S.-focused events in Britain.
Assange is currently being held in Belmarsh prison in southern London and is serving a 50-week sentence for jumping bail in 2012. Assange sought asylum during the summer of 2012 at Ecuador’s embassy in London, where he lived for almost seven years until this past April. Ecuador revoked Assange’s asylum and the WikiLeaks founder was physically dragged out of the embassy by British police.
WikiLeaks founder Julian Assange, a 47-year-old Australian national, appears to be one step closer to being sent to the United States, but the deal is not done, as Javid notes. Not only does the extradition order need final approval by the UK court, there’s still the question of whether Assange could be sent to Sweden to face sexual assault charges.
The statute of limitations has expired for one of the sexual assault claims made against Assange in Sweden, but a rape claim could still be pursued if Swedish prosecutors decide to push the case. A Swedish court ruled earlier this month that Assange should not be detained in absentia, the first move under Swedish law that would have paved the way for his extradition.
Assange’s Swedish lawyer has previously claimed that Assange was too ill to even appear in court via video link, but secret video seemingly recorded by another inmate recently showed Assange looking relatively normal and healthy.
Assange has been charged with 18 counts by the U.S. Justice Department, including one under the Espionage Act, which potentially carries the death penalty. But American prosecutors supposedly gave Ecuador a “verbal pledge” that they won’t pursue death in Assange’s case, according to American news channel ABC. Obviously, a “verbal pledge” is not something that would hold up in court.
Since as far back as 2015, major companies like Sony and Intel have sought to crowdsource efforts to secure their systems and applications through the San Francisco startup HackerOne. Through the “bug bounty” program offered by the company, hackers once viewed as a nuisance—or worse, as criminals—can identify security vulnerabilities and get paid for their work.
On Tuesday, HackerOne published a wealth of anonymized data to underscore not only the breadth of its own program but also to highlight the leading types of bugs discovered by its virtual army of hackers, who’ve reaped financial rewards through the program. Some $29 million has been paid out so far for the top 10 most rewarded types of security weakness alone, according to the company.
HackerOne markets the bounty program as a means to safely mimic an authentic kind of global threat. “It’s one of the best defenses you can have against what you’re actually protecting against,” said Miju Han, HackerOne’s director of product management. “There are a lot of security tools out there that have theoretical risks—and we definitely endorse those tools as well. But what we really have in bug bounty programs is a real-world security risk.”
The program, of course, has its own limitations. Participants have the ability to define the scope of engagement and in some cases—as with the U.S. Defense Department, a “hackable target”—place limits on which systems and methods are authorized under the program. Criminal hackers and foreign adversaries are, of course, not bound by such rules.
“Bug bounties can be a helpful tool if you’ve already invested in your own security prevention and detection,” said Katie Moussouris, CEO of Luta Security, “in terms of secure development if you publish code, or secure vulnerability management if your organization is mostly just trying to keep up with patching existing infrastructure.”
“It isn’t suitable to replace your own preventative measures, nor can it replace penetration testing,” she said.
Not surprisingly, HackerOne’s data shows that overwhelmingly cross-site scripting (XSS) attacks—in which malicious scripts are injected into otherwise trusted sites—remain the top vulnerability reported through the program. Of the top 10 types of bugs reported, XSS makes up 27 percent. No other type of bug comes close. Through HackerOne, some $7.7 million has been paid out to address XSS vulnerabilities alone.
Cloud migration has also led to a rise in exploits such as server-side request forgery (SSRF). “The attacker can supply or modify a URL which the code running on the server will read or submit data to, and by carefully selecting the URLs, the attacker may be able to read server configuration such as AWS metadata, connect to internal services like http-enabled databases or perform post requests towards internal services which are not intended to be exposed,” HackerOne said.
Currently, SSRF makes up only 5.9 percent of the top bugs reported. Nevertheless, the company says, these server-side exploits are trending upward as more and more companies find homes in the cloud.
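To illustrate the pattern HackerOne is describing, here is a deliberately simplified sketch. The fetch-preview endpoint and the mitigation are hypothetical examples, not code from any affected product.

```python
# Simplified illustration of an SSRF-prone pattern and one common mitigation.
# The functions and checks are hypothetical examples, not from any real product.
import ipaddress
import socket
from urllib.parse import urlparse

import requests  # third-party HTTP client


def fetch_preview_unsafe(url: str) -> bytes:
    # Vulnerable: the server fetches whatever URL the client supplies, so an
    # attacker can point it at cloud metadata (e.g. 169.254.169.254) or at
    # internal-only services that were never meant to be exposed.
    return requests.get(url, timeout=5).content


def fetch_preview_safer(url: str) -> bytes:
    # Resolve the hostname and refuse private, loopback, and link-local targets
    # before fetching. Real deployments need more than this (redirect handling,
    # DNS-rebinding protection, allowlists of known-good hosts).
    host = urlparse(url).hostname or ""
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    if addr.is_private or addr.is_loopback or addr.is_link_local:
        raise ValueError("refusing to fetch an internal address")
    return requests.get(url, timeout=5, allow_redirects=False).content
```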
Other top bounties include a range of code injection exploits or misconfigurations that allow improper access to systems that should be locked down. Companies have paid out over $1.5 million alone to address improper access control.
“Companies that pay more for bounties are definitely more attractive to hackers, especially more attractive to top hackers,” Han said. “But we know that bounties paid out are not the only motivation. Hackers like to hack companies that they like using, or that are located in their country.” In other words, even though a company is spending more money to pay hackers to find bugs, it doesn’t necessarily mean that they have more security.
“Another factor is how fast a company is changing,” she said. “If a company is developing very rapidly and expanding and growing, even if they pay a lot of bounties, if they’re changing up their code base a lot, then that means they are not necessarily as secure.”
According to an article this year in TechRepublic, some 300,000 hackers are currently signed up with HackerOne; though only 1-in-10 have reportedly claimed a bounty. The best of them, a group of roughly 100 hackers, have earned over $100,000. Only a couple of elite hackers have attained the highest-paying ranks of the program, reaping rewards close to, or in excess of, $1 million.
View a full breakdown of HackerOne’s “most impactful and rewarded” vulnerability types here.
The well-known and respected data breach notification website “Have I Been Pwned” is up for sale.
Troy Hunt, its founder and sole operator, announced the sale on Tuesday in a blog post where he explained why the time has come for Have I Been Pwned to become part of something bigger and more organized.
“To date, every line of code, every configuration and every breached record has been handled by me alone. There is no ‘HIBP team’, there’s one guy keeping the whole thing afloat,” Hunt wrote. “It’s time for HIBP to grow up. It’s time to go from that one guy doing what he can in his available time to a better-resourced and better-funded structure that’s able to do way more than what I ever could on my own.”
Over the years, Have I Been Pwned has become the repository for data breaches on the internet, a place where users can search for their email address and see whether they have been part of a data breach. It’s now also a service where people can sign up to get notified whenever their accounts get breached. It’s perhaps the most useful free cybersecurity service in the world.
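Alongside the breach search described above, HIBP also offers a free Pwned Passwords API built on a k-anonymity scheme. A minimal sketch of querying it (assuming the public range endpoint, which only ever receives the first five characters of a password’s SHA-1 hash):

```python
# Minimal sketch of querying HIBP's Pwned Passwords "range" API. Only the first
# five characters of the password's SHA-1 hash ever leave your machine.
import hashlib

import requests  # third-party HTTP client


def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response lists hash suffixes and how often each appears in breaches.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


print(pwned_count("password123"))  # a large number: this password is in many breaches
```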
Spain’s data protection agency has fined La Liga, the nation’s top professional soccer league, 250,000 euros ($283,000 USD) for using the league’s phone app to spy on its fans. With millions of downloads, the app was reportedly being used to surveil bars in an effort to catch establishments playing matches on television without a license.
The La Liga app provides users with schedules, player rankings, statistics, and league news. It also knows when they’re watching games and where.
According to Spanish newspaper El País, the league told authorities that when its apps detected users were in bars the apps would record audio through phone microphones. The apps would then use the recording to determine if the user was watching a soccer game, using technology that’s similar to the Shazam app. If a game was playing in the vicinity, officials would then be able to determine if that bar location had a license to play the game.
So not only was the app spying on fans, but it was also turning those fans into unwitting narcs. El Diario reports that the app has been downloaded 10 million times.
On June 6, more than 70,000 BGP routes were leaked from Swiss colocation company Safe Host to China Telecom in Frankfurt, Germany, which then announced them on the global internet. This resulted in a massive rerouting of internet traffic via China Telecom systems in Europe, disrupting connectivity for netizens: a lot of data that should have gone to European cellular networks was instead piped to China Telecom-controlled boxes.
BGP leaks are common – they happen every hour of every day – though the size of this one, and particularly the fact that it lasted for two hours rather than seconds or minutes, have prompted more calls for ISPs to join an industry program that adds security checks to the routing system.
The fact that China Telecom, which peers with Safe Host, was again at the center of the problem – with traffic destined for European netizens routed through its network – has also made internet engineers suspicious, although they have been careful not to make any accusations without evidence.
“China Telecom, a major international carrier, has still implemented neither the basic routing safeguards necessary both to prevent propagation of routing leaks nor the processes and procedures necessary to detect and remediate them in a timely manner when they inevitably occur,” noted Oracle Internet Intelligence’s (OII) director of internet analysis Doug Madory in a report. “Two hours is a long time for a routing leak of this magnitude to stay in circulation, degrading global communications.”
A team at network security outfit vpnMentor was scanning cyber-space as part of a web-mapping project when they happened upon a Graylog management server belonging to Tech Data that had been left freely accessible to the public. Within that database, we’re told, was a 264GB cache of information including emails, payment and credit card details, and unencrypted usernames and passwords. Pretty much everything you need to ruin someone’s day (or year).
The exposure, vpnMentor told The Register today, is particularly bad due to the nature of Tech Data’s customers. The Fortune 500 distie provides everything from financing and marketing services to IT management and user training courses. Among the clients listed on its site are Apple, Symantec, and Cisco.
“This is a serious leak as far as we can see, so much so that all of the credentials needed to log in to customer accounts are available,” a spokesperson for vpnMentor told El Reg. “Because of the size of the database, we could not go through all of it and there may be more sensitive information available to the public than what we have disclosed here.”
In addition to the login credentials and card information, the researchers said they were able to find private API keys and logs in the database, as well as customer profiles that included full names, job titles, phone numbers, and email and postal addresses. All available to anyone who could find it.
vpnMentor says it discovered and reported the open database on June 2 to Tech Data, and by June 4 the distie had told the team it had secured the database and hidden it from public view. Tech Data did not respond to a request for comment from The Register. The US-based company did not mention the incident in its most recent SEC filings.
The first representatives of Generation Z have started to trickle into the workplace – and like generations before them, they are bringing a different perspective to things.
Did you know that there are now up to five generations working under any given roof, ranging all the way from the Silent Generation (born pre-WWII) to the aforementioned Gen Z?
Let’s see how these generational groups differ in their approaches to communication, career priorities, and company loyalty.
Generational Differences at Work
Today’s infographic comes to us from Raconteur, and it breaks down some key differences in how generational groups are thinking about the workplace.
Let’s dive deeper into the data for each category.
Communication
How people prefer to communicate is one major and obvious difference that manifests itself between generations.
While many in older generations have dabbled in new technologies and trends around communications, it’s less likely that they will internalize those methods as habits. Meanwhile, for younger folks, these newer methods (chat, texting, etc.) are what they grew up with.
Top three communication methods by generation:
Baby Boomers:
40% of communication is in person, 35% by email, and 13% by phone
Gen X:
34% of communication is in person, 34% by email, and 13% by phone
Millennials:
33% of communication is by email, 31% is in person, and 12% by chat
Gen Z:
31% of communication is by chat, 26% is in person, and 16% by email
Motivators
Meanwhile, the generations are divided on what motivates them in the workplace. Boomers place health insurance as an important decision factor, while younger groups view salary and pursuing a passion as being key elements to a successful career.
Three most important work motivators by generation (in order):
Baby Boomers:
Health insurance, a boss worthy of respect, and salary
Gen X:
Salary, job security, and job challenges/excitement
Millennials:
Salary, job challenges/excitement, and ability to pursue passion
Gen Z:
Salary, ability to pursue passion, and job security
Loyalty
Finally, generational groups have varying perspectives on how long they would be willing to stay in any one role.
Baby Boomers: 8 years
Gen X: 7 years
Millennials: 5 years
Gen Z: 3 years
Given the above differences, employers will have to think clearly about how to attract and retain talent across a wide scope of generations. Further, employers will have to learn what motivates each group, as well as what makes them each feel the most comfortable in the workplace.
The investigation will include a series of hearings held by the Subcommittee on Antitrust, Commercial and Administrative Law on the rise of market power online, as well as requests for information that are relevant to the investigation.
A small number of dominant, unregulated platforms have extraordinary power over commerce, communication and information online. Based on investigative reporting and oversight by international policymakers and enforcers, there are concerns that these platforms have the incentive and ability to harm the competitive process. The Antitrust Subcommittee will conduct a top-to-bottom review of the potential of giant tech platforms to hold monopoly power.
The committee’s investigation will focus on three main areas:
Documenting competition problems in digital markets;
Examining whether dominant firms are engaging in anti-competitive conduct; and
Assessing whether existing antitrust laws, competition policies and current enforcement levels are adequate to address these issues.
“Big Tech plays a huge role in our economy and our world,” said Collins. “As tech has expanded its market share, more and more questions have arisen about whether the market remains competitive. Our bipartisan look at competition in the digital markets gives us the chance to answer these questions and, if necessary, to take action. I appreciate the partnership of Chairman Nadler, Subcommittee Chairman Cicilline and Subcommittee Ranking Member Sensenbrenner on these important issues.”
“The open internet has delivered enormous benefits to Americans, including a surge of economic opportunity, massive investment, and new pathways for education online,” said Nadler. “But there is growing evidence that a handful of gatekeepers have come to capture control over key arteries of online commerce, content, and communications. The Committee has a rich tradition of conducting studies and investigations to assess the threat of monopoly power in the U.S. economy. Given the growing tide of concentration and consolidation across our economy, it is vital that we investigate the current state of competition in digital markets and the health of the antitrust laws.”
“Technology has become a crucial part of Americans’ everyday lives,” said Sensenbrenner. “As the world becomes more dependent on a digital marketplace, we must discuss how the regulatory framework is built to ensure fairness and competition. I believe these hearings can be informative, but it is important for us to avoid any predetermined conclusions. I thank Chairman Nadler, Ranking Member Collins, and Chairman Cicilline as we begin these bipartisan discussions.”
“The growth of monopoly power across our economy is one of the most pressing economic and political challenges we face today. Market power in digital markets presents a whole new set of dangers,” said Cicilline. “After four decades of weak antitrust enforcement and judicial hostility to antitrust cases, it is vital for Congress to step in to determine whether existing laws are adequate to tackle abusive conduct by platform gatekeepers or if we need new legislation.”
Basically they are looking at how antitrust works, which is a great thing, because recently antitrust in the US has focused on consumer prices and ignored everything else. Given how Amazon wields its pricing power, that is not the right lens. Have a look at my talk on this if you’re interested.