The Linkielist

Linking ideas with the world

GCHQ vulnerability disclosure process and cops hacking you now need a judge to decide if it’s legal in the UK

On the same day that certain types of British state-backed hacking now need a judge-issued warrant to carry out, GCHQ has lifted the veil and given the infosec world a glimpse inside its vuln-hoarding policies.

The spying agency’s internal Equities Process is the way by which it decides whether or not to tell tech vendors that its snoopers have discovered a hardware or software vulnerability.

A hot topic for many years, vuln disclosure (and patching) is a double-edged sword for spy agencies. If they keep discovered vulns to themselves, they can exploit them for their own ends, for which the public reason is given as disrupting “the activities of those who seek to do the UK harm” – including Belgian phone operators.

If GCHQ discloses vulns it has found to the affected vendor, that can “benefit global users of the technology”, in the agency’s words, as well as tending to build trust – something the Peeping Tom agency is dead keen on following the international damage done to its reputation after the Snowden disclosures.

However, in a briefing note today the agency revealed it may keep vulns in unsupported software to itself. “Where the software in question is no longer supported by the vendor,” it said, “were a vulnerability to be discovered in such software, there would be no route by which it could be patched.”

Only last year Microsoft prez Brad Smith was raging against GCHQ’s American cousins, the NSA, for the “stockpiling of vulnerabilities by governments” – though, as we revealed, Microsoft had been sitting on a pile of patches that were only provided to corporate customers and not the public, so not everyone in this debate is squeaky clean.

Lovely bureaucracy

When it decides whether or not to give up a vuln, GCHQ said three internal bodies are involved: the Equities Technical Panel, made up of “subject matter expert” spies; the GCHQ Equity Board, which is chaired by a civil servant from GCHQ’s public-facing arm, the National Cyber Security Centre (NCSC), and staffed by people from other government departments; and the Equities Oversight Committee, chaired by the chief exec of the NCSC, Ciaran Martin.

Broadly speaking, Martin gets the final word on whether or not a vuln is “released” to be patched. Those decisions are “regularly reviewed at a period appropriate to the security risk” and, regardless of the risk, “at least every 12 months”.

What do they review? Operational necessity (“How reliant are we on this vulnerability to realise intelligence?”) is one criterion, as well as the impact on other British government departments’ activities. Questions about whether the vuln could be spotted independently by others and used to harm businesses and private citizens are considered under the general category of “defensive risk”, but appear to be less of a priority than looking at whether the state will find its wings clipped as a result of disclosure.

Even then, the agency would rather nudge industry into applying “configuration changes” to mitigate against vulns rather than seeing a proper patch deployed after disclosure. The reason is obvious: not everyone implements config changes, meaning some GCHQ targets may continue to be vulnerable to “network exploitation”.

“Assessment in relation to a number of these factors is based on standardised criteria and past experience, including applying the use of the Common Vulnerability Scoring System where appropriate,” said GCHQ.

Good stuff, now go and get a proper warrant

Today a post-Snowden legal tweak comes into force: state employees wanting to hack targets’ networks and devices must now get a judge-issued warrant, under section 106 of the Investigatory Powers Act.

“Such warrants can then be issued from 5th December. However unless urgent, the warrant will need to be reviewed and approved by a Judicial Commissioner,” noted the Society for Computers and Law in an update about the new law. It added that from January, law enforcement agencies will have to use this process to insert probes into suspected hackers’ gear.

Using hacking tools to investigate alleged crimes that fall under sections 1 to 3 of the Computer Misuse Act 1990 is now subject to the “equipment interference warrant” procedure, rather than the bog-standard Police Act 1997 “property interference authorisation”.

The difference is that state-backed hackers set out to find “communications, private information or equipment data”, which therefore needs a different set of legal protections than the Police Act process, which was written around slightly different scenarios such as planting tracker bugs on cars. ®

Bootnote

“In exceptional cases, the CEO of the NCSC may decide that further escalation via submissions to Director GCHQ and, if required, the Foreign Secretary should be invoked,” said the GCHQ press briefing note, giving rise to images of spy agency suits pacing in circles around a smoking server and chanting Jeremy Hunt’s name, falling to their knees in gratitude when the mystical foreign secretary himself appears in a flash of lightning, ready to dispense vuln-disclosing justice.

We encourage GCHQ-based readers to send us videos of this process if this is actually what goes on.

Source: GCHQ opens kimono for infosec world to ogle its vuln disclosure process • The Register

Mass router hack exposes millions of devices to potent NSA exploit through UPNP

More than 45,000 Internet routers have been compromised by a newly discovered campaign that’s designed to open networks to attacks by EternalBlue, the potent exploit that was developed by, and then stolen from, the National Security Agency and leaked to the Internet at large, researchers said Wednesday.

The new attack exploits routers with vulnerable implementations of Universal Plug and Play to force connected devices to open ports 139 and 445, content delivery network Akamai said in a blog post. As a result, almost 2 million computers, phones, and other network devices connected to the routers are reachable from the Internet on those ports. While Internet scans don’t reveal precisely what happens to the connected devices once they’re exposed, Akamai said the ports—which are instrumental for the spread of EternalBlue and its Linux cousin EternalRed—provide a strong hint of the attackers’ intentions.
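
To make the mechanism concrete, here is a rough, hedged sketch of how you could audit your own router's UPnP port-mapping table for forwards like the ones described above. The control URL is a placeholder (real devices advertise theirs via SSDP and their device-description XML), and it only uses the standard IGD GetGenericPortMappingEntry action; this is an illustration, not Akamai's tooling:

```python
# Hedged sketch: walk a router's UPnP/IGD port-mapping table and flag entries
# that forward traffic to SMB/NetBIOS ports (445/139), the pattern described
# in the article. CONTROL_URL is a hypothetical value; discover the real one
# via SSDP and the device description XML.
import urllib.request

CONTROL_URL = "http://192.168.1.1:5000/ctl/IPConn"   # assumption, varies per device
SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

def get_mapping(index):
    """Fetch the port-mapping entry at `index`, or None when past the end."""
    body = f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:GetGenericPortMappingEntry xmlns:u="{SERVICE}">
      <NewPortMappingIndex>{index}</NewPortMappingIndex>
    </u:GetGenericPortMappingEntry>
  </s:Body>
</s:Envelope>"""
    req = urllib.request.Request(
        CONTROL_URL,
        data=body.encode(),
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPAction": f'"{SERVICE}#GetGenericPortMappingEntry"',
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()
    except Exception:
        return None  # most routers return an error once the index runs off the table

index = 0
while True:
    entry = get_mapping(index)
    if entry is None:
        break
    if "<NewInternalPort>445<" in entry or "<NewInternalPort>139<" in entry:
        print(f"Suspicious mapping at index {index}:\n{entry}\n")
    index += 1
```

A clean router should show few or no mappings at all; forwards pointing at ports 139 or 445 on internal hosts are the telltale the report describes.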

The attacks are a new instance of a mass exploit the same researchers documented in April. They called it UPnProxy because it exploits Universal Plug and Play—often abbreviated as UPnP—to turn vulnerable routers into proxies that disguise the origins of spam, DDoSes, and botnets. In Wednesday’s blog post, the researchers wrote:

Source: Mass router hack exposes millions of devices to potent NSA exploit | Ars Technica

When the Internet Archive Forgets

On the internet, there are certain institutions we have come to rely on daily to keep truth from becoming nebulous or elastic. Not necessarily in the way that something stupid like Verrit aspired to, but at least in confirming that you aren’t losing your mind, that an old post or article you remember reading did, in fact, actually exist. It can be as fleeting as using Google Cache to grab a quickly deleted tweet, but it can also be as involved as doing a deep dive of a now-dead site’s archive via the Wayback Machine. But what happens when an archive becomes less reliable, and arguably has legitimate reasons to bow to pressure and remove controversial archived material?

A few weeks ago, while recording my podcast, the topic turned to the old blog written by The Ultimate Warrior, the late bodybuilder turned chiropractic student turned pro wrestler turned ranting conservative political speaker under his legal name of, yes, “Warrior.” As described by Deadspin’s Barry Petchesky in the aftermath of Warrior’s 2014 passing, he was “an insane dick,” spouting off in blogs and campus speeches about people with disabilities, gay people, New Orleans residents, and many others. But when I went looking for a specific blog post, I saw that the blogs were not just removed, the site itself was no longer in the Internet Archive, replaced by the error message: “This URL has been excluded from the Wayback Machine.”

Apparently, Warrior’s site had been de-archived for months, not long after Rob Rousseau pored over it for a Vice Sports article on the hypocrisy of WWE using Warrior’s image for their Breast Cancer Awareness Month campaign. The campaign was all about getting women to “Unleash Your Warrior,” complete with an Ultimate Warrior motif, but since Warrior’s blogs included wishing death on a cancer survivor, this wasn’t a good look. Rousseau was struck by how the archive was removed “almost immediately after my piece went up, like within that week,” he told Gizmodo.

Rousseau suspected that WWE was somehow behind it, but a WWE spokesman told Gizmodo that they were not involved. Steve Wilton, the business manager for Ultimate Creations, also denied involvement. A spokesman for the Internet Archive, though, told Gizmodo that the archive was removed because of a DMCA takedown request from the company’s business manager (Wilton’s job for years) on October 29, 2017, two days after the Vice article was published. (He has not replied to a follow-up email about the takedown request.)

Over the last few years, there has been a change in how the Wayback Machine is viewed, one inspired by the general political mood. What had long been a useful tool when you came across broken links online is now, more than ever before, seen as an arbiter of the truth and a bulwark against erasing history.

Archive sites are trusted to show the digital trail and origin of content, which makes them not just a must-use tool for journalists but useful to just about anyone trying to track down vanishing web pages. With that in mind, the fact that the Internet Archive doesn’t really fight takedown requests becomes a problem. And takedown requests aren’t the only recourse: when a site admin elects to block the Wayback crawler using a robots.txt file, the crawling doesn’t just stop. Instead, the Wayback Machine’s entire history of a given site is removed from public view.

In other words, if you deal in a certain bottom-dwelling brand of controversial content and want to avoid accountability, there are at least two different, standardized ways of erasing it from the most reliable third-party web archive on the public internet.

For the Internet Archive, honoring robots.txt retroactively, like quickly complying with takedown notices that challenge its seemingly fair-use archival copies of old websites, in practice does little more than mitigate its risk while going against the spirit of the protocol. And if someone were to sue over non-compliance with a DMCA takedown request, even with a ready-made, valid defense in the Archive’s pocket, copyright litigation is still incredibly expensive. It doesn’t matter that the use is not really a violation by any metric: if a rightsholder makes the effort, you still have to defend the lawsuit.

“The fair use defense in this context has never been litigated,” noted Annemarie Bridy, a law professor at the University of Idaho and an Affiliate Scholar at the Center for Internet and Society at Stanford Law School. “Internet Archive is a non-profit, so the exposure to statutory damages that they face is huge, and the risk that they run is pretty great … given the scope of what they do; that they’re basically archiving everything that is on the public web, their exposure is phenomenal. So you can understand why their impulse might be to act cautiously even if that creates serious tension with their core mission, which is to create an accurate historical archive of everything that has been there and to prevent people from wiping out evidence of their history.”

While the Internet Archive did not respond to specific questions about its robots.txt policy, its proactive response to takedown requests, or if any potential fair use defenses have been tested by them in court, a spokesperson did send this statement along:

Several months after the Wayback Machine was launched in late 2001, we participated with a group of outside archivists, librarians, and attorneys in the drafting of a set of recommendations for managing removal requests (the Oakland Archive Policy) that the Internet Archive more or less adopted as guidelines over the first decade or so of the Wayback Machine.

Earlier this year, we convened with a similar group to review those guidelines and explore the potential value of an updated version. We are still pondering many issues and hope that before too long we might be able to present some updated information on our site to better help the public understand how we approach take down requests. You can find some of our thoughts about robots.txt at http://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/.

At the end of the day, we strive to strike a balance between the concerns that site owners and rights holders sometimes bring to us with the broader public interest in free access for everyone to a history of the Internet that is as comprehensive as possible.

All of that said, the Internet Archive has always held itself out to be a library; in theory, shouldn’t that matter?

“Under current copyright law, although there are special provisions that give certain rights to libraries, there is no definition of a library,” explained Brandon Butler, the Director of Information Policy for the University of Virginia Library. “And that’s a thing that rights holders have always fretted over, and they’ve always fretted over entities like the Internet Archive, which aren’t 200-year-old public libraries, or university-affiliated libraries. They often raise up a stand that there will be faux libraries, that they’d call themselves libraries but it’s really just a haven for piracy. That specter of the sort of sham library really hasn’t arisen.” The lone exception that Butler could think of was when American Buddha, a non-profit, online library of Buddhist texts, found itself sued by Penguin over a few items that they asserted copyright over. “The court didn’t really care that this place called itself a library; it didn’t really shield them from any infringement allegations.” That said, as Butler notes, while being a library wouldn’t necessarily protect the Internet Archive as much as it could, “the right to make copies for preservation,” as Butler puts it, is definitely a point in their favor.

That said, “libraries typically don’t get sued; it’s bad PR,” Butler says. So it’s not like there’s a ton of modern legal precedent about libraries in the digital age, barring some outliers like the various Google Books cases.

As Bridy notes, in the United States, copyright is “a commercial right.” It’s not about reputational harm, it’s about protecting the value of a work and, more specifically, the ability to continuously make money off of it. “The reason we give it is we want artists and creative people to have an incentive to publish and market their work,” she said. “Using copyright as a way of trying to control privacy or reputation … it can be used that way, but you might argue that’s copyright misuse, you might argue it falls outside of the ambit of why we have copyright.”

We take a lot of things for granted, especially as we rely on technology more and more. “The internet is forever” may be a common refrain in the media, and the underlying wisdom about being careful may be sound, but it is also not something that should be taken literally. People delete posts. Websites and entire platforms disappear for business and other reasons. Rich, famous, and powerful bad actors have no qualms about intimidating small non-profit organizations. It’s nice to have safeguards, but there are limits to permanence on the internet, and where there are limits, there are loopholes.

Source: When the Internet Archive Forgets

Another thing seriously broken with copyright

In China, your car could be talking to the government

When Shan Junhua bought his white Tesla Model X, he knew it was a fast, beautiful car. What he didn’t know is that Tesla constantly sends information about the precise location of his car to the Chinese government.

Tesla is not alone. China has called upon all electric vehicle manufacturers in China to make the same kind of reports — potentially adding to the rich kit of surveillance tools available to the Chinese government as President Xi Jinping steps up the use of technology to track Chinese citizens.

“I didn’t know this,” said Shan. “Tesla could have it, but why do they transmit it to the government? Because this is about privacy.”

More than 200 manufacturers, including Tesla, Volkswagen, BMW, Daimler, Ford, General Motors, Nissan, Mitsubishi and U.S.-listed electric vehicle start-up NIO, transmit position information and dozens of other data points to government-backed monitoring centers, The Associated Press has found. Generally, it happens without car owners’ knowledge.

The automakers say they are merely complying with local laws, which apply only to alternative energy vehicles. Chinese officials say the data is used for analytics to improve public safety, facilitate industrial development and infrastructure planning, and to prevent fraud in subsidy programs.

China has ordered electric car makers to share real-time driving data with the government. The country says it’s to ensure safety and improve the infrastructure, but critics worry the tracking can be put to more nefarious uses. (Nov. 29)

But other countries that are major markets for electric vehicles — the United States, Japan, across Europe — do not collect this kind of real-time data.

And critics say the information collected in China is beyond what is needed to meet the country’s stated goals. It could be used not only to undermine foreign carmakers’ competitive position, but also for surveillance — particularly in China, where there are few protections on personal privacy. Under the leadership of Xi Jinping, China has unleashed a war on dissent, marshalling big data and artificial intelligence to create a more perfect kind of policing, capable of predicting and eliminating perceived threats to the stability of the ruling Communist Party.

There is also concern about the precedent these rules set for sharing data from next-generation connected cars, which may soon transmit even more personal information.

Source: In China, your car could be talking to the government

Companies ‘can sack workers for refusing to use fingerprint scanners’ in Australia

Businesses using fingerprint scanners to monitor their workforce can legally sack employees who refuse to hand over biometric information on privacy grounds, the Fair Work Commission has ruled.

The ruling, which will be appealed, was made in the case of Jeremy Lee, a Queensland sawmill worker who refused to comply with a new fingerprint scanning policy introduced at his work in Imbil, north of the Sunshine Coast, late last year.

Fingerprint scanning was used to monitor the clock-on and clock-off times of about 150 sawmill workers at two sites and was preferred to swipe cards because it prevented workers from fraudulently signing in on behalf of their colleagues to mask absences.

The company, Superior Wood, had no privacy policy covering workers and failed to comply with a requirement to properly notify individuals about how and why their data was being collected and used. The biometric data was stored on servers located off-site, in space leased from a third party.

Lee argued the business had never sought its workers’ consent to use fingerprint scanning, and feared his biometric data would be accessed by unknown groups and individuals.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private,” Lee wrote to his employer last November.

“Information technology companies gather as much information/data on people as they can.

“Whether they admit to it or not. (See Edward Snowden) Such information is used as currency between corporations.”

Lee was neither antagonistic nor belligerent in his refusals, according to evidence before the commission. He simply declined to have his fingerprints scanned and continued using a physical sign-in booklet to record his attendance.

He had not missed a shift in more than three years.

The employer warned him about his stance repeatedly, and claimed the fingerprint scanner did not actually record a fingerprint, but rather “a set of data measurements which is processed via an algorithm”. The employer told Lee there was no way the data could be “converted or used as a finger print”, and would only be used to link his payroll number to his clock-on and clock-off times. It said the fingerprint scanners were also needed for workplace safety, to accurately identify which workers were on site in the event of an accident.

Lee was given a final warning in January, and responded that he valued his job a “great deal” and wanted to find an alternative way to record his attendance.

“I would love to continue to work for Superior Wood as it is a good, reliable place to work,” he wrote to his employer. “However, I do not consent to my biometric data being taken. The reason for writing this letter is to impress upon you that I am in earnest and hope there is a way we can negotiate a satisfactory outcome.”

Lee was sacked in February, and lodged an unfair dismissal claim in the Fair Work Commission.

Source: Companies ‘can sack workers for refusing to use fingerprint scanners’ | World news | The Guardian

You only have one set of fingerprints – that’s the problem with biometrics: they can’t be changed, so you really really don’t want them stolen from you

Marriott’s Starwood hotels mega-hack: Half a BILLION guests’ deets exposed over 4 years

US hotel chain Marriott has admitted that a breach of its Starwood subsidiary’s guest reservation network has exposed the entire database – all 500 million guest bookings over four years, making this one of the biggest hacks of an individual org ever.

“On September 8, 2018, Marriott received an alert from an internal security tool regarding an attempt to access the Starwood guest reservation database in the United States,” said the firm in a statement issued this morning. “Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014.”

Around 327 million of those guest bookings included customers’ “name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (‘SPG’) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences.”

For an unspecified number, encrypted card numbers and expiration dates were also included, though Marriott insisted there was AES-128 grade encryption on these details, saying: “There are two components needed to decrypt the payment card numbers, and at this point, Marriott has not been able to rule out the possibility that both were taken.”

This could be read as a reference to salting and hashing though no further detail was supplied. We have contacted Marriott to double-check and will update this article if we hear back from them.

Source: Marriott’s Starwood hotels mega-hack: Half a BILLION guests’ deets exposed over 4 years

Google wants to spy on you and then report on you

suggesting, automatically implementing, or both suggesting and automatically implementing, one or more household policies to be implemented within a household environment. The household policies include one or more input criteria that is derivable from at least one smart device within the household environment, the one or more input criteria relating to a characteristic of the household environment, a characteristic of one or more occupants of the household, or both. The household policies also include one or more outputs to be provided based upon the one or more input criteria.

https://patents.justia.com/patent/10114351

Source: Patent Images

E.g. page 16, figure 25 – monitor TV watching patterns and report on you

page 20, figure 33 – detect time brushing teeth and report on you

Do you trust Google to be your parent?!

Google patents a way for your smart devices to spy on you, serve you ads, even if your privacy settings says no

In some embodiments, the private network may include at least one first device that captures information about its surrounding environment, such as data about the people and/or objects in the environment. The first device may receive a set of potential content sent from a server external to the private network. The first device may select at least one piece of content to present from the set of potential content based in part on the people/object data and/or a score assigned by the server to each piece of content. The private network may also include at least one second device that receives the captured people/object data sent from the first device. The second device may also receive a set of potential content sent from the server external to the private network. The second device may select at least one piece of content to present from the set of potential content based in part on the people/object data sent from the first device and/or a score assigned by the server to each piece of content. Using the private network to communicate the people/object data between devices may preserve the privacy of the user since the data is not sent to the external server. Further, using the obtained people/object data to select content enables more personalized content to be chosen.

[…]

 

Further, although not shown in this particular way, in some embodiments, the client device 134 may collect people/object data 136 using one or more sensors, as discussed above. Also, as previously discussed, the raw people/object data 136 may be processed by the sensing device 138, the client device 134, and/or a processing device 140 depending on the implementation. The people/object data 136 may include the data described above regarding FIG. 7 that may aid in recognizing objects, people, and/or patterns, as well as determining user preferences, mood, and so forth.

[0144] After the client device 134 is in possession of the people/object data 136, the client device 134 may use the classifier 144 to score each piece of content 132. In some embodiments, the classifier 144 may combine at least the people/object data 136, the scores provided by the server 67 for the content 132, or both, to determine a final score for each piece of content 132 (process block 216), which will be discussed in more detail below.

[0145] The client device 134 may select at least one piece of content 132 to display based on the scores (process block 218). That is, the client device 134 may select the content 132 with the highest score as determined by the classifier 144 to display. However, in some embodiments, where none of the content 132 generate a score above a threshold amount, no content 132 may be selected. In those embodiments, the client device 134 may not present any content 132. However, when at least one item of content 132 scores above the threshold amount and is selected, then the client device 134 may communicate the selected content 132 to a user of the client device 134 (process block 220) and track user interaction with the content 132 (process block 222). It should be noted that when more than one item of content 132 score above the threshold amount, then the item of content 132 with the highest score may be selected. The client device 134 may use the tracked user interaction and conversions to continuously train the classifier 144 to ensure that the classifier 144 stays up to date with the latest user preferences.

[0146] It should be noted that, in some embodiments, the processing device 140 may receive the content 132 from the server 67 instead of, or in addition to, the client device 134. In embodiments where the processing device 140 receives the content 132, the processing device 140 may perform the classification of the content 132 using a classifier 144 similar to the client device 134 and the processing device 140 may select the content 132 with the highest score as determined by the classifier 144. Once selected, the processing device 140 may send the selected content 132 to the client device 134, which may communicate the selected content 132 to a user.

[…]

The process 230 may include training one or more models of the classifier 144 with people/object data 136, locale 146, demographics 148, search history 150, scores from the server 67, labels 145, and so forth. As previously discussed, the classifier 144 may include a support vector machine (SVM) that uses supervised learning models to classify the content 132 into one of two groups (e.g., binary classification) based on recognized patterns using the people/object data 136, locale 146, demographics 148, search history 150, scores from the server 67, and the labels 145 for the two groups of “show” or “don’t show.”
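
Stripped of patent hedging, the selection logic those paragraphs describe is easy to sketch. The following toy illustration uses hypothetical names and a trivial stand-in for the on-device classifier; it is not Google's code, just the shape of the "score, threshold, pick the highest" flow:

```python
# Toy sketch of the patent's content-selection flow (process blocks 216-220):
# combine a server-assigned score with a local score derived from people/object
# data, drop anything under a threshold, and present the highest-scoring item.
# All names and the scoring formula are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Content:
    content_id: str
    server_score: float      # score assigned by the external server
    keywords: set            # what the content is about

def local_score(item, people_object_data):
    """Trivial stand-in for the on-device classifier: overlap between what
    the sensors observed and what the content is about."""
    if not item.keywords:
        return 0.0
    return len(item.keywords & people_object_data) / len(item.keywords)

def select_content(candidates, people_object_data, threshold=0.5) -> Optional[Content]:
    """Return the highest-scoring item above the threshold, or None,
    in which case nothing is shown."""
    best, best_score = None, threshold
    for item in candidates:
        final = 0.5 * item.server_score + 0.5 * local_score(item, people_object_data)
        if final > best_score:
            best, best_score = item, final
    return best

# Example: the sensors saw a dog and a child in the living room.
observed = {"dog", "child", "living_room"}
pool = [Content("ad-1", 0.9, {"dog", "pet_food"}),
        Content("ad-2", 0.4, {"car", "insurance"})]
print(select_content(pool, observed))   # picks ad-1; prints None if nothing clears 0.5
```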

 

Source: US20160260135A1 – Privacy-aware personalized content for the smart home – Google Patents

 

They have thought up around 140 ways that this can be used…

Paralyzed Individuals Operate Tablet with Brain Implant

One user played Beethoven’s “Ode to Joy” on an Android tablet piano app and later bought some groceries online. Another sent a few texts and then checked the weather forecast. A third browsed through some videos before firing up Stevie Nicks on Pandora.

They didn’t use their fingers to type commands or their voices to navigate the interface.

They used their noggins, specifically the motor cortex region of their brains, where a baby aspirin-size chip had been implanted as part of a new study.

[…]

Each participant was asked to try out seven common apps on the tablet: email, chat, web browser, video sharing, music streaming, a weather program and a news aggregator. The researchers also asked the users if they wanted any additional apps, and subsequently added the keyboard app, grocery shopping on Amazon, and a calculator.

The participants made up to 22 point-and-click selections per minute and typed up to 30 characters per minute in email and text programs. What’s more, all three participants really enjoyed using the tablet, says Hochberg.

Source: Paralyzed Individuals Operate Tablet with Brain Implant – IEEE Spectrum

First ever plane with no moving parts takes flight

The first ever “solid state” plane, with no moving parts in its propulsion system, has successfully flown for a distance of 60 metres, proving that heavier-than-air flight is possible without jets or propellers.

The flight represents a breakthrough in “ionic wind” technology, which uses a powerful electric field to generate charged nitrogen ions, which are then expelled from the back of the aircraft, generating thrust.

The plane in flight. Photograph: Nature Video/YouTube

Steven Barrett, an aeronautics professor at MIT and the lead author of the study published in the journal Nature, said the inspiration for the project came straight from the science fiction of his childhood. “I was a big fan of Star Trek, and at that point I thought that the future looked like it should be planes that fly silently, with no moving parts – and maybe have a blue glow. But certainly no propellers or turbines or anything like that. So I started looking into what physics might make flight with no moving parts possible, and came across a concept known as the ionic wind, which was first investigated in the 1920s.

“This didn’t make much progress in that time. It was looked at again in the 1950s, and researchers concluded that it couldn’t work for aeroplanes. But I started looking into this and went through a period of about five years, working with a series of graduate students to improve fundamental understanding of how you could produce ionic winds efficiently, and how that could be optimised.”

In the prototype plane, wires at the leading edge of the wing have 600 watts of electrical power pumped through them at 40,000 volts. This is enough to induce “electron cascades”, ultimately charging air molecules near the wire. Those charged molecules then flow along the electrical field towards a second wire at the back of the wing, bumping into neutral air molecules on the way, and imparting energy to them. Those neutral air molecules then stream out of the back of the plane, providing thrust.

The end result is a propulsion system that is entirely electrically powered, almost silent, and with a thrust-to-power ratio comparable to that achieved by conventional systems such as jet engines.
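
For a feel for the numbers, the textbook one-dimensional drift model of electrohydrodynamic thrust gives F = I·d/μ for ion current I crossing a gap d with ion mobility μ, so the ideal thrust-to-power ratio is d/(μV). A back-of-the-envelope sketch using the article's 600 W and 40 kV, with the gap distance and mobility as assumed values; real aircraft come in well below this idealised bound:

```python
# Back-of-the-envelope, idealised EHD thrust estimate. Only the 600 W and
# 40,000 V figures come from the article; the gap distance and ion mobility
# are assumed, literature-typical values, and the 1-D model is an upper bound.
MU = 2e-4      # ion mobility in air, m^2/(V*s) (typical value, assumption)
V = 40_000     # applied voltage, V (from the article)
P = 600        # electrical power, W (from the article)
D = 0.5        # emitter-to-collector gap, m (assumption)

thrust_per_watt = D / (MU * V)       # F/P = d/(mu*V), newtons per watt
ideal_thrust = thrust_per_watt * P

print(f"ideal thrust-to-power: {thrust_per_watt * 1000:.0f} N/kW")
print(f"ideal thrust at 600 W: {ideal_thrust:.1f} N")
```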

Source: First ever plane with no moving parts takes flight | Science | The Guardian

CV Compiler is a robot that fixes your resume to make you more competitive

Machine learning is everywhere now, including recruiting. Take CV Compiler, a new product by Andrew Stetsenko and Alexandra Dosii. This web app uses machine learning to analyze and repair your technical resume, allowing you to shine to recruiters at Google, Yahoo and Facebook.

The founders are marketing and HR experts who have a combined 15 years of experience in making recruiting smarter. Stetsenko founded Relocate.me and GlossaryTech while Dosii worked at a number of marketing firms before settling on CV Compiler.

The app essentially checks your resume and tells you what to fix and where to submit it. It’s been completely bootstrapped thus far and they’re working on new and improved machine learning algorithms while maintaining a library of common CV fixes.

“There are lots of online resume analysis tools, but these services are too generic, meaning they can be used by multiple professionals and the results are poor and very general. After the feedback is received, users are often forced to buy some extra services,” said Stetsenko. “In contrast, the CV Compiler is designed exclusively for tech professionals. The online review technology scans for keywords from the world of programming and how they are used in the resume, relative to the best practices in the industry.”

Source: CV Compiler is a robot that fixes your resume to make you more competitive | TechCrunch

Palm’s Ultra Tiny Phone Is an Absolute Snack

There’s just something about this phone. From the moment I laid eyes on this thing, it just kind of made me happy. It’s small and adorable like a newborn puppy, and despite how petite it appears in photos, it looks and feels even smaller in person. And I’m not the only one that had this reaction. When I brought it into the office, people crowded around and marveled. One person cooed at it, another said, “it’s perfect,” while a third remarked that this is the exact sort of thing they’d wished someone would make for years.

Even from a crowd of tech bloggers, I was taken aback by its reception. Size alone isn’t what makes this handset remarkable. In part what makes the device exciting is that it’s the rebirth of Palm, the same company that made big ‘ole PDAs and the ill-fated Palm Pre. Maybe more interestingly, Palm’s new phone also envisions an entirely different way of using and living with tech.

For something so small, it’s pretty mysterious, and I’m actually not even entirely sure what to call it. The company that makes it is Palm, but what about the device itself? Is it just Phone with a capital P, or is it the Palm Palm as its comical listing on Verizon’s website suggests? For now, I’ve been going with Baby Phone or the just the mononymous Palm, because like Grimes, Wario, and Rasputin, this gadget is cool enough to need only a single name.

Don’t you just want to squeeze its cheeks?
Photo: Sam Rutherford (Gizmodo)

Now let’s talk about size. I don’t mean its actual dimensions—which are about the same as a credit card—but the reason behind why it’s so tiny. Recently, a lot of companies have been pushing the idea of digital wellness, with Google and Apple adding features to Android and iOS that help you track how much time you spend on your phone. That’s all fine, but in some ways, buying an $800 phone and then putting restrictions on it is like buying an Aston Martin and never driving it faster than 55 mph.

So instead of spending a lot of money on a phone that constantly tempts you, why not get something small and nimble that can still handle traditional smartphone duties, but doesn’t also ruin your life? That’s the real inspiration behind the Palm’s pint-sized body and mini display. You’re supposed to pull it out, check the screen real quick, and then put it away.

As small as the Palm looks, it feels even tinier in real life.
Photo: Sam Rutherford (Gizmodo)

The Palm is a more straightforward way to fight smartphone addiction, and while it does quite well at replacing your regular phone, it has some quirks and a few sore spots you should know about. I’m going to break things down The Good, the Bad, and the Ugly style.

Source: Palm’s Ultra Tiny Phone Is an Absolute Snack

Azure, Office 365 go super-secure: Multi-factor auth borked in Europe, Asia, USA – > 6 hour outage from MS – yay!

Happy Monday, everyone! Azure Multi-Factor Authentication is struggling, meaning that some users with the functionality enabled are now super secure. And, er, locked out.

Microsoft confirmed that there were problems from 04:39 UTC with a subset of customers in Europe, the Americas, and Asia-Pacific experiencing “difficulties signing into Azure resources” such as the, er, little used Azure Active Directory, when Multi-Factor Authentication (MFA) is enabled.

Six hours later, and the problems are continuing.

The Office 365 health status page has reported that: “Affected users may be unable to sign in using MFA” and Azure’s own status page confirmed that there are “issues connecting to Azure resources” thanks to the borked MFA.

Source: Azure, Office 365 go super-secure: Multi-factor auth borked in Europe, Asia, USA • The Register

Cloud!

Dutch Gov sees Office 365 spying on you, sending your texts to US servers without recourse or knowledge

The Dutch government’s report shows that the telemetry function of all Office 365 and Office ProPlus applications forwards, among other things, email subject lines and words/sentences written using the spelling checker or translation function to systems in the United States.

This goes so far that, if a user presses the backspace key several times in a row, the telemetry function collects and forwards both the sentence as it was before the edit and the sentence after it. Users are not informed of this and have no way to stop this data collection or to inspect the collected data.

The Dutch central government carried out this investigation together with Privacy Company. “Microsoft may not store this temporary, functional data unless retention is strictly necessary, for example for security purposes,” writes Sjoera Nas of Privacy Company in a blog post.

Source: Je wordt bespied door Office 365-applicaties – Webwereld

LastPass Five-hour outage drives netizens bonkers

LastPass’s cloud service suffered a five-hour outage today that left some people unable to use the password manager to log into their internet accounts.

Its makers said offline mode wasn’t affected – and that only its cloud-based password storage fell offline – although some Twitter folks disagreed. One claimed to be unable to log into any accounts whether in “local or remote” mode of the password manager, while another couldn’t access their local vault.

The solution, apparently, was to disconnect from the network. That forced LastPass to use account passwords cached on the local machine, rather than pull down credentials from its cloud-hosted password vaults. Folks store login details remotely using LastPass so they can be used and synchronized across multiple devices, backed up in the cloud, shared securely with colleagues, and so on.

The problems first emerged at 1408 UTC on November 20, with netizens reporting an “intermittent connectivity issue” when trying to use LastPass to fill in their passwords to log into their internet accounts. Unlucky punters were, therefore, unable to get into their accounts because LastPass couldn’t cough up the necessary passwords from its cloud.

The software’s net admins worked fast, according to the organisation’s status page. Within seven minutes of trouble, the outfit posted: “The Network Operations Center have identified the issue and are working to resolve the issue.”

The biz also reassured users that there was no security vulnerability, exploit, nor hack attack involved:

Connectivity is a recurrent theme in LastPass outages: in May, LogMeIn, the developers behind LastPass, suffered a DNS error in the UK that locked Blighty out of the service.

The service returned at nearly 2000 UTC today, when the status team posted: “We have confirmed that internal tests are working fine and LastPass is operational. We are continuing to monitor the situation to ensure there are no further issues.”

Source: LastPass? More like lost pass. Or where the fsck has it gone pass. Five-hour outage drives netizens bonkers • The Register

Cloud!

Human images from world’s first total-body scanner unveiled

EXPLORER, the world’s first medical imaging scanner that can capture a 3-D picture of the whole human body at once, has produced its first scans.

The brainchild of UC Davis scientists Simon Cherry and Ramsey Badawi, EXPLORER is a combined positron emission tomography (PET) and X-ray computed tomography (CT) scanner that can image the entire body at the same time. Because the machine captures radiation far more efficiently than other scanners, EXPLORER can produce an image in as little as one second and, over time, produce movies that can track specially tagged drugs as they move around the entire body.

The developers expect the technology will have countless applications, from improving diagnostics to tracking disease progression to researching new drug therapies.

The first images from scans of humans using the new device will be shown at the upcoming Radiological Society of North America meeting, which starts on Nov. 24th in Chicago. The scanner has been developed in partnership with Shanghai-based United Imaging Healthcare (UIH), which built the system based on its latest technology platform and will eventually manufacture the devices for the broader healthcare market.

“While I had imagined what the images would look like for years, nothing prepared me for the incredible detail we could see on that first scan,” said Cherry, distinguished professor in the UC Davis Department of Biomedical Engineering. “While there is still a lot of careful analysis to do, I think we already know that EXPLORER is delivering roughly what we had promised.”

EXPLORER image showing glucose metabolism throughout the entire human body. This is the first time a medical imaging scanner has been able to capture a 3D image of the entire human body simultaneously. Credit: UC Davis and Zhongshan Hospital, Shanghai

Badawi, chief of Nuclear Medicine at UC Davis Health and vice-chair for research in the Department of Radiology, said he was dumbfounded when he saw the first images, which were acquired in collaboration with UIH and the Department of Nuclear Medicine at the Zhongshan Hospital in Shanghai.

“The level of detail was astonishing, especially once we got the reconstruction method a bit more optimized,” he said. “We could see features that you just don’t see on regular PET scans. And the dynamic sequence showing the radiotracer moving around the body in three dimensions over time was, frankly, mind-blowing. There is no other device that can obtain data like this in humans, so this is truly novel.”

Source: Human images from world’s first total-body scanner unveiled

Talk about a cache flow problem: This JavaScript can snoop on other browser tabs to work out what you’re visiting

Computer science boffins have demonstrated a side-channel attack technique that bypasses recently-introduced privacy defenses, and makes even the Tor browser subject to tracking. The result: it is possible for malicious JavaScript in one web browser tab to spy on other open tabs, and work out which websites you’re visiting.

This information can be used to target adverts at you based on your interests, or otherwise work out the kind of stuff you’re into and collect it in safe-keeping for future reference.

Researchers Anatoly Shusterman, Lachlan Kang, Yarden Haskal, Yosef Meltser, Prateek Mittal, Yossi Oren, Yuval Yarom – from Ben-Gurion University of the Negev in Israel, the University of Adelaide in Australia, and Princeton University in the US – have devised a processor cache-based website fingerprinting attack that uses JavaScript for gathering data to identify visited websites.

The technique is described in a paper recently distributed through ArXiv called “Robust Website Fingerprinting Through the Cache Occupancy Channel.”

“The attack we demonstrated compromises ‘human secrets’: by finding out which websites a user accesses, it can teach the attacker things like a user’s sexual orientation, religious beliefs, political opinions, health conditions, etc.,” said Yossi Oren (Ben-Gurion University) and Yuval Yarom (University of Adelaide) in an email to The Register this week.

It’s thus not as serious as a remote attack technique that allows the execution of arbitrary code or exposes kernel memory, but Oren and Yarom speculate that there may be ways their browser fingerprinting method could be adapted to compromise computing secrets like encryption keys or vulnerable installed software.
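
The measurement primitive itself is simple; the cleverness is in the classifier. Here is a rough illustration in Python (the paper's attack runs as JavaScript in the browser and needs much finer-grained timing); the buffer size and sample count are assumptions:

```python
# Rough illustration of a cache-occupancy trace: repeatedly sweep a buffer
# about the size of the last-level cache and record how long each sweep takes.
# Activity elsewhere on the machine (e.g. a page rendering in another tab)
# evicts parts of the buffer and makes sweeps slower, so the time series is a
# coarse fingerprint of that activity. Sizes and counts here are assumptions.
import time

LLC_SIZE = 8 * 1024 * 1024   # assume an 8 MiB last-level cache
LINE = 64                    # touch one byte per 64-byte cache line
buf = bytearray(LLC_SIZE)

def sweep():
    """Time one pass that touches every cache line of the buffer."""
    start = time.perf_counter()
    total = 0
    for i in range(0, LLC_SIZE, LINE):
        total += buf[i]      # the load pulls the line (back) into cache
    return time.perf_counter() - start

# Collect a trace of consecutive sweep times; in the paper, traces like this
# are fed to a deep-learning classifier trained on known websites.
trace = [sweep() for _ in range(500)]
print(f"{len(trace)} samples, mean sweep {sum(trace) / len(trace):.6f} s")
```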

Source: Talk about a cache flow problem: This JavaScript can snoop on other browser tabs to work out what you’re visiting • The Register

Facebook files patent to find out more about you by looking at the background items in your pictures and pictures you are tagged in

An online system predicts household features of a user, e.g., household size and demographic composition, based on image data of the user, e.g., profile photos, photos posted by the user and photos posted by other users socially connected with the user, and textual data in the user’s profile that suggests relationships among individuals shown in the image data of the user. The online system applies one or more models trained using deep learning techniques to generate the predictions. For example, a trained image analysis model identifies each individual depicted in the photos of the user; a trained text analysis model derives household member relationship information from the user’s profile data and tags associated with the photos. The online system uses the predictions to build more information about the user and his/her household in the online system, and provide improved and targeted content delivery to the user and the user’s household.
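
As a toy sketch of the pipeline the abstract describes (an image model that identifies who appears in a user's photos, a text model that pulls relationship words from captions and tags, and a combined household prediction), with trivial stand-ins for both models and entirely hypothetical data:

```python
# Toy, hypothetical sketch of the patent's household-prediction pipeline.
# Real systems would use trained vision and language models; these stand-ins
# only show how the two signals get combined into a household estimate.
from collections import Counter

RELATIONSHIP_WORDS = {"wife", "husband", "son", "daughter", "mum", "dad", "roommate"}

def image_model(photos):
    """Stand-in for the image-analysis model: count how often each (already
    labelled) person appears across the user's photos."""
    people = Counter()
    for photo in photos:
        people.update(photo["people"])
    return people

def text_model(captions):
    """Stand-in for the text-analysis model: collect relationship words."""
    found = set()
    for caption in captions:
        found |= {word for word in caption.lower().split() if word in RELATIONSHIP_WORDS}
    return found

def predict_household(photos, captions, min_appearances=3):
    """Guess household members as people who recur in the photos, plus the
    relationship labels found in the text."""
    counts = image_model(photos)
    members = [name for name, n in counts.items() if n >= min_appearances]
    return {
        "estimated_size": len(members),
        "members": members,
        "relationships": sorted(text_model(captions)),
    }

photos = [{"people": ["alice", "bob"]}, {"people": ["alice", "bob", "carol"]},
          {"people": ["alice", "bob"]}, {"people": ["alice"]}]
captions = ["weekend with my husband", "carol visiting from out of town"]
print(predict_household(photos, captions))
# {'estimated_size': 2, 'members': ['alice', 'bob'], 'relationships': ['husband']}
```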

Source: United States Patent Application: 0180332140

Most ATMs can be hacked in under 20 minutes

An extensive testing session carried out by bank security experts at Positive Technologies has revealed that most ATMs can be hacked in under 20 minutes, and even less, in certain types of attacks.

Experts tested ATMs from NCR, Diebold Nixdorf, and GRGBanking, and detailed their findings in a 22-page report published this week.

The attacks they tried are the typical types of exploits and tricks used by cyber-criminals seeking to obtain money from the ATM safe or to copy the details of users’ bank cards (also known as skimming).

[Image: ATM network attack. Credit: Positive Technologies]

Experts said that 85 percent of the ATMs they tested allowed an attacker access to the network. The research team did this by either unplugging and tapping into Ethernet cables, or by spoofing wireless connections or devices to which the ATM usually connected.

Researchers said that 27 percent of the tested ATMs were vulnerable to having their processing center communications spoofed, while 58 percent of tested ATMs had vulnerabilities in their network components or services that could be exploited to control the ATM remotely.

Furthermore, 23 percent of the tested ATMs could be attacked and exploited by targeting other network devices connected to the ATM, such as, for example, GSM modems or routers.

“Consequences include disabling security mechanisms and controlling output of banknotes from the dispenser,” researchers said in their report.

PT experts said that the typical “network attack” took under 15 minutes to execute, based on their tests.

[Image: ATM black box attack. Credit: Positive Technologies]

But in case ATM hackers were looking for a faster way in, researchers also found that Black Box attacks were the fastest, usually taking under 10 minutes to pull off.

A Black Box attack is when a hacker either opens the ATM case or drills a hole in it to reach the cable connecting the ATM’s computer to the ATM’s cash box (or safe). Attackers then connect a custom-made tool, called a Black Box, that tricks the ATM into dispensing cash on demand.

PT says that 69 percent of the ATMs they tested were vulnerable to such attacks and that on 19 percent of ATMs, there were no protections against Black Box attacks at all.

[Image: ATM exit-kiosk-mode attack. Credit: Positive Technologies]

Another way through which researchers attacked the tested ATMs was by trying to exit kiosk mode – the OS mode in which the ATM interface runs.

Researchers found that by plugging a device into one of the ATM’s USB or PS/2 interfaces, they could pluck the ATM from kiosk mode and run commands on the underlying OS to cash out money from the ATM safe.

The PT team says this attack usually takes under 15 minutes, and that 76 percent of the tested ATMs were vulnerable.

[Image: ATM hard drive attack. Credit: Positive Technologies]

Another attack, and the one that took the longest to pull off but yielded the highest results, was one during which researchers bypassed the ATM’s internal hard drive and booted from an external one.

PT experts said that 92 percent of the ATMs they tested were vulnerable. This happened because the ATMs either didn’t have a BIOS password, used one that was easy to guess, or didn’t use disk data encryption.

Researchers said that during their tests, which normally didn’t take more than 20 minutes, they changed the boot order in the BIOS, booted the ATM from their own hard drive, and made changes to the ATM’s normal OS on the legitimate hard drive, changes which could permit cash outs or ATM skimming operations.

[Image: ATM boot mode attack. Credit: Positive Technologies]

In another test, PT researchers also found that attackers with physical access to the ATM could restart the device and force it to boot into a safe/debug mode.

This, in turn, would allow the attackers access to various debug utilities or COM ports through which they could infect the ATM with malware.

The attack took under 15 minutes to execute, and researchers found that 42 percent of the ATMs they tested were vulnerable.

[Image: ATM card data transfer attack. Credit: Positive Technologies]

Last but not least, the most depressing results came in regards to tests of how ATMs transmitted card data internally, or to the bank.

PT researchers said they were able to intercept card data sent between the tested ATMs and a bank processing center in 58 percent of the cases, but they were 100 percent successful in intercepting card data while it was processed internally inside the ATM, such as when it was transmitted from the card reader to the ATM’s OS.

This attack also took under 15 minutes to pull off. Taking into account that most real-world ATM attacks happen during the night and target ATMs in isolated locations, 20 minutes is more than enough for most criminal operations.

“More often than not, security mechanisms are a mere nuisance for attackers: our testers found ways to bypass protection in almost every case,” the PT team said. “Since banks tend to use the same configuration on large numbers of ATMs, a successful attack on a single ATM can be easily replicated at greater scale.”

The following ATMs were tested.

[Image: list of tested ATM models]

Source: Most ATMs can be hacked in under 20 minutes | ZDNet

Microsoft slips ads into Windows 10 Mail client – then U-turns so hard, it warps fabric of reality – Windows is an OS, not a service!

Microsoft was, and maybe still is, considering injecting targeted adverts into the Windows 10 Mail app.

The ads would appear at the top of inboxes of folks using the client without a paid-for Office 365 subscription, and the advertising would be tailored to their interests. Revenues from the banners were hoped to help keep Microsoft afloat, which banked just $16bn in profit in its latest financial year.

According to Aggiornamenti Lumia on Friday, folks using Windows Insider fast-track builds of Mail and Calendar, specifically version 11605.11029.20059.0, may have seen the ads in among their messages, depending on their location. Users in Brazil, Canada, Australia, and India were chosen as guinea pigs for this experiment.

A now-deleted FAQ on the Office.com website about the “feature” explained the advertising space would be sold off to help Microsoft “provide, support, and improve some of our products,” just like Gmail and Yahoo! Mail display ads.

Also, the advertising is targeted, by monitoring what you get up to with apps and web browsing, and using demographic information you disclose:

Windows generates a unique advertising ID for each user on a device. When the advertising ID is enabled, both Microsoft apps and third-party apps can access and use the advertising ID in much the same way that websites can access and use a unique identifier stored in a cookie. Mail uses this ID to provide more relevant advertising to you.

You have full control of Windows and Mail having access to this information and can turn off interest-based advertising at any time. If you turn off interest-based advertising, you will still see ads but they will no longer be as relevant to your interests.

Microsoft does not use your personal information, like the content of your email, calendar, or contacts, to target you for ads. We do not use the content in your mailbox or in the Mail app.

You can also close an ad banner by clicking on its trash can icon, or get rid of them completely by coughing up cash:

You can permanently remove ads by buying an Office 365 Home or Office 365 Personal subscription.
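
For the curious, the opt-out the FAQ mentions maps to a per-user Windows setting. A hedged sketch of inspecting and disabling it from Python follows; the registry path is the commonly documented location of the "Let apps use advertising ID" toggle, and the value name should be treated as an assumption rather than an official API:

```python
# Hedged, Windows-only sketch: read and clear the per-user advertising ID
# toggle. The key path is the commonly documented location for "Let apps use
# advertising ID"; value names are assumptions, not an official API. Prefer
# the Settings app where possible.
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo"

# Check whether apps may currently use the advertising ID (1 = yes, 0 = no).
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    enabled, _ = winreg.QueryValueEx(key, "Enabled")
    print("Advertising ID enabled:", bool(enabled))

# Opt out, i.e. what the FAQ calls turning off interest-based advertising.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)
```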

Here’s where reality is thrown into a spin, literally. Microsoft PR supremo Frank Shaw said a few hours ago, after the ads were spotted:

This was an experimental feature that was never intended to be tested broadly and it is being turned off.

Never intended to be tested broadly, and was shut down immediately, yet until it was clocked, had an official FAQ for it on Office.com, which was also hastily nuked from orbit, and was rolled out in highly populated nations. Talk about hand caught in the cookie jar.

Source: Microsoft slips ads into Windows 10 Mail client – then U-turns so hard, it warps fabric of reality • The Register

A 100,000-router botnet is feeding on a 5-year-old UPnP bug in Broadcom chips (lots of different routers have this chip!)

A recently discovered botnet has taken control of an eye-popping 100,000 home and small-office routers made from a range of manufacturers, mainly by exploiting a critical vulnerability that has remained unaddressed on infected devices more than five years after it came to light.

Researchers from Netlab 360, who reported the mass infection late last week, have dubbed the botnet BCMUPnP_Hunter. The name is a reference to a buggy implementation of the Universal Plug and Play protocol built into Broadcom chipsets used in vulnerable devices. An advisory released in January 2013 warned that the critical flaw affected routers from a raft of manufacturers, including Broadcom, Asus, Cisco, TP-Link, Zyxel, D-Link, Netgear, and US Robotics. The finding from Netlab 360 suggests that many vulnerable devices were allowed to run without ever being patched or locked down through other means.

Last week’s report documents 116 different types of devices that make up the botnet from a diverse group of manufacturers. Once under the attackers’ control, the routers connect to a variety of well-known email services. This is a strong indication that the infected devices are being used to send spam or other types of malicious mail.

Universal Plug and Play

UPnP is designed to make it easy for computers, printers, phones, and other devices to connect to local networks using code that lets them automatically discover each other. The protocol often eliminates the hassle of figuring out how to configure devices the first time they’re connected. But UPnP, as researchers have warned for years, often opens up serious holes inside the networks that use it. In some cases, UPnP bugs cause devices to respond to discovery requests sent from outside the network. Hackers can exploit the weakness in a way that allows them to take control of the devices. UPnP weaknesses can also allow hackers to bypass firewall protections.
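
SSDP discovery is trivially easy to speak, which is part of the problem. Below is a small sketch of the standard M-SEARCH probe that UPnP devices answer; run from inside your own network it shows what is advertising itself, and a router that answers the same probe on its WAN side is exactly the kind of exposed device these campaigns scan for. Addresses and timeouts are illustrative:

```python
# Minimal SSDP discovery probe: ask the local network which UPnP Internet
# Gateway Devices will answer. Responses include a LOCATION header pointing
# at the device-description XML, which lists the SOAP control URLs that
# UPnProxy-style attacks go after. Timeout and search target are illustrative.
import socket

MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1",
    "", "",
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(4096)
        print(addr[0])
        print("\n".join(data.decode(errors="replace").splitlines()[:6]))
        print()
except socket.timeout:
    pass   # no more responders within the timeout
```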

Source: A 100,000-router botnet is feeding on a 5-year-old UPnP bug in Broadcom chips | Ars Technica

Can AI Create True Art?

Just last month, AI-generated art arrived on the world auction stage under the auspices of Christie’s, proving that artificial intelligence can not only be creative but also produce world-class works of art—another profound AI milestone blurring the line between human and machine.

Naturally, the news sparked debates about whether the work produced by Paris-based art collective Obvious could really be called art at all. Popular opinion among creatives is that art is a process by which human beings express some idea or emotion, filter it through personal experience and set it against a broader cultural context—suggesting that what AI generates at the behest of computer scientists is definitely not art, nor creative at all.

The story raised additional questions about ownership. In this circumstance, who can really be named as author? The algorithm itself or the team behind it? Given that AI is taught and programmed by humans, has the human creative process really been identically replicated or are we still the ultimate masters?

AI VERSUS HUMAN

At GumGum, an AI company that focuses on computer vision, we wanted to explore the intersection of AI and art by devising a Turing Test of our own in association with Rutgers University’s Art and Artificial Intelligence Lab and Cloudpainter, an artificially intelligent painting robot. We were keen to see whether AI can, in fact, replicate the intent and imagination of traditional artists, and we wanted to explore the potential impact of AI on the creative sector.

To do this, we enlisted a diverse group of artists, ranging from “traditional” paint-on-canvas artists to 3-D rendering and modeling artists, alongside Pindar Van Arman—a classically trained artist who has been coding art robots for 15 years. Van Arman was tasked with using his Cloudpainter machine to create pieces of art based on the same data set as the more traditional artists. This data set was a collection of art by 20th-century American Abstract Expressionists. Then, we asked them to document the process, showing us their preferred tools and telling us how they came to their final work.

Intriguingly, while at face value the AI artwork was indistinguishable from that of the more traditional artists, the test highlighted that the creative spark and ultimate agency behind creating a work of art is still very much human. Even though the Cloudpainter machine has evolved over time to become a highly intelligent system capable of making creative decisions of its own accord, the final piece of work could only be described as a collaboration between human and machine. Van Arman served as more of an “art director” for the painting. Although Cloudpainter made all of the aesthetic decisions independently, the machine was given parameters to meet and was programmed to refine its results in order to deliver the desired outcome. This was not too dissimilar to the process used by Obvious and their GAN AI tool.

Moreover, until AI can be programmed to absorb inspiration, crave communication and want to express something in a creative way, the work it creates on its own simply cannot be considered art without the intention of its human masters. Creatives working with AI find the process to be more about negotiation than experimentation. It’s clear that even in the creative field, sophisticated technologies can be used to enhance our capabilities—but crucially they still require human intelligence to define the overarching rules and steer the way.

THERE’S AN ACTIVE ROLE BETWEEN ART AND VIEWER

How traditional art purveyors react to AI art on the world stage is yet to be seen, but in the words of Leandro Castelao—one of the artists we enlisted for the study—“there’s an active role between the piece of art and the viewer. In the end, the viewer is the co-creator, transforming, re-creating and changing.” This is a crucial point; when it’s difficult to tell AI art apart from human art, the old adage that beauty is in the eye of the beholder rings particularly true.

Source: Can AI Create True Art? – Scientific American Blog Network

AIs Are Getting Better At Playing Video Games…By Cheating

Earlier this year, researchers tried teaching an AI to play the original Sonic the Hedgehog as part of the OpenAI Retro Contest. The AI was told to prioritize increasing its score, which in Sonic means doing stuff like defeating enemies and collecting rings while also trying to beat a level as fast as possible. This dogged pursuit of one particular definition of success led to strange results: In one case, the AI began glitching through walls in the game’s water zones in order to finish more quickly.

It was a creative solution to the problem laid out in front of the AI, which ended up discovering accidental shortcuts while trying to move right. But it wasn’t quite what the researchers had intended. One of the researchers’ goals with machine-learning AIs in gaming is to try and emulate player behavior by feeding them large amounts of player-generated data. In effect, the AI watches humans conduct an activity, like playing through a Sonic level, and then tries to do the same, while being able to incorporate its own attempts into its learning. In a lot of instances, machine-learning AIs end up taking their directions literally. Instead of completing a variety of objectives, a machine-learning AI might try to take shortcuts that completely upend human beings’ understanding of how a game should be played.

Victoria Krakovna, a researcher on Google’s DeepMind AI project, has spent the last several months collecting examples like the Sonic one. Her growing collection has recently drawn new attention after being shared on Twitter by Jim Crawford, developer of the puzzle series Frog Fractions, among other developers and journalists. Each example includes what she calls “reinforcement learning agents hacking the reward function,” which results in part from unclear directions on the part of the programmers.

“While ‘specification gaming’ is a somewhat vague category, it is particularly referring to behaviors that are clearly hacks, not just suboptimal solutions,” she wrote in her initial blog post on the subject. “A classic example is OpenAI’s demo of a reinforcement learning agent in a boat racing game going in circles and repeatedly hitting the same reward targets instead of actually playing the game.”
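
To make “hacking the reward function” concrete, here is a toy sketch of the boat-racing pattern; it is a hypothetical example, not code from Krakovna’s collection or OpenAI’s demo. The designer intends the agent to finish the course, but the reward only counts checkpoint hits, so a greedy agent learns that circling one respawning checkpoint pays far better than ever crossing the finish line.

```python
# Toy illustration of reward hacking / specification gaming (hypothetical example).
# Intended goal: finish the course. Actual reward: points per checkpoint hit.

def episode(policy, steps=100):
    position, score = 0, 0
    for _ in range(steps):
        action = policy(position)
        if action == "forward":
            position += 1
            if position >= 10:        # intended goal: reach the finish line...
                return score + 1      # ...which pays only a single point
        elif action == "circle":
            score += 5                # mis-specified reward: the checkpoint
                                      # respawns, so it can be hit forever
    return score

honest_racer  = lambda pos: "forward"  # does what the designer intended
reward_hacker = lambda pos: "circle"   # does what the reward actually pays for

print("honest racer: ", episode(honest_racer))   # 1   (finished the race)
print("reward hacker:", episode(reward_hacker))  # 500 (never finished, huge score)
```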

Source: AIs Are Getting Better At Playing Video Games…By Cheating

Couple Who Ran Retro ROM Site (with games you can’t buy any more) to Pay Nintendo $12 Million

Nintendo has won a lawsuit seeking to take two large retro-game ROM sites offline, on charges of copyright infringement. The judgement, made public today, rules in Nintendo’s favour and states that the owners of the sites LoveROMS.com and LoveRETRO.co will have to pay a total settlement of $12 million to Nintendo. The complaint was originally filed by the company in an Arizona federal court in July, and has since led to a swift wave of self-censorship among popular retro and emulator ROM sites, which fear they may be sued by Nintendo as well.

LoveROMS.com and LoveRETRO.co were the joint property of the couple Jacob and Cristian Mathias, before Nintendo sued them for what it called “brazen and mass-scale infringement of Nintendo’s intellectual property rights.” The suit never went to court; instead, the couple sought to settle after accepting the charge of direct and indirect copyright infringement. TorrentFreak reports that a permanent injunction, prohibiting them from using, sharing, or distributing Nintendo ROMs or other materials again in the future, has been included in the settlement. Additionally, all games, game files, and emulators previously on the site and in their custody must be handed over to the Japanese game developer, along with a $12.23 million settlement figure. It is unlikely, as TorrentFreak has reported, that the couple will be obligated to pay the full figure; a smaller settlement has likely been negotiated in private.

Instead, the purpose of the enormous settlement amount is to act as a warning or deterrent to other ROM and emulator sites surviving on the internet. And it’s working.

Motherboard previously reported on the way in which Nintendo’s legal crusade against retro ROM and emulator sites is swiftly eroding a large chunk of retro gaming. The impact of this campaign on video games as a whole is potentially catastrophic. Not all games have been preserved adequately by game publishers and developers. Some are locked down to specific regions and haven’t ever been widely accessible.

The accessibility of video games and the gaming industry has always been defined and limited by economic boundaries. There are a multitude of reasons why retro games can’t be easily or reliably accessed by prospective players, and by wiping out ROM sites Nintendo is erasing huge chunks of gaming history. Limiting the accessibility of old retro titles to this extent will undoubtedly affect the future of video games, with classic titles that shaped modern games and gaming development being kept under lock and key by the monolithic hand of powerful game developers.

Since the filing of the suit in July, EmuParadise, a haven for retro games and emulator titles, has shut down. Many other sites have followed suit.

Source: Couple Who Ran ROM Site to Pay Nintendo $12 Million – Motherboard

Wow, that’s a surefire way to piss off your fans, Nintendo!