Massive Oklahoma Government Data Leak Exposes 7 Years of FBI Investigations – unsecured rsync

Last December, a whopping 3 terabytes of unprotected data from the Oklahoma Securities Commission was uncovered by Greg Pollock, a researcher with cybersecurity firm UpGuard. It amounted to millions of files, many on sensitive FBI investigations, all of which were left wide open on a server with no password, accessible to anyone with an internet connection, Forbes can reveal.

“It represents a compromise of the entire integrity of the Oklahoma Department of Securities’ network,” said Chris Vickery, head of research at UpGuard, which is revealing its technical findings on Wednesday. “It affects an entire state level agency. … It’s massively noteworthy.”

A breach back to the ’80s

The Oklahoma department regulates all financial securities business in the state, so it is perhaps little surprise that the leak contained information on FBI cases. But the amount and variety of the data astonished Vickery and Pollock.

Vickery said the FBI files contained “all sorts of archive enforcement actions” dating back seven years (the earliest file creation date was 2012). The documents included spreadsheets with agent-filled timelines of interviews related to investigations, emails from parties involved in myriad cases and bank transaction histories. There were also copies of letters from subjects, witnesses and other parties involved in FBI investigations.

[…]

Just as concerning, the leak also included email archives stretching back 17 years, thousands of Social Security numbers, and data dating from the 1980s onwards.

[…]

When Vickery and Pollock disclosed the breach, they informed the commission that it had mistakenly left open what’s known as an rsync server. Such servers are typically used to back up large batches of data and, if that information is supposed to be secure, should be protected by a username and password.
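
To give a sense of how little effort such a find takes: an rsync daemon that permits anonymous access will list its shares to any client that asks. The sketch below is purely illustrative – the address is a placeholder, not the Oklahoma server – and assumes only that the standard rsync client is installed.

```python
# Illustrative sketch only (hypothetical host): an rsync daemon that allows
# anonymous access will list its modules and their contents to anyone who
# connects on TCP port 873 -- no exploit needed, just the rsync client.
import subprocess

HOST = "203.0.113.10"   # placeholder address, not a real server

# "rsync rsync://HOST/" asks the daemon to list the modules (top-level shares) it exports.
modules = subprocess.run(["rsync", f"rsync://{HOST}/"],
                         capture_output=True, text=True, timeout=30)
print(modules.stdout)

# Listing a module's files works the same way if no username/password is required, e.g.:
# subprocess.run(["rsync", "--list-only", f"rsync://{HOST}/backup/"], capture_output=True, text=True)
```

Requiring authentication for every module (and firewalling port 873 from the public internet) is what prevents exactly this kind of anonymous browsing.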

There were other signs of poor security within the leaked data. For instance, passwords for computers on the Oklahoma government’s network were also exposed; they were “not complicated,” Vickery quipped. In one of the more absurd choices made by the department, it had stored an encrypted version of one document in the same folder as a decrypted version. Passwords for remote access to agency computers were leaked as well.

This is the latest in a series of incidents involving exposed rsync servers. Last year, UpGuard revealed that Level One Robotics, a car manufacturing supply-chain company, was exposing information in the same way as the Oklahoma government division. Companies with data exposed in that incident included Volkswagen, Chrysler, Ford, Toyota, General Motors and Tesla.

For whatever reason, governments and corporate giants alike still don’t seem to appreciate how easy it is for hackers to constantly scan the Web for leaks like these. Starting with basics like passwords would help them keep their secrets secure.

Source: Massive Oklahoma Government Data Leak Exposes 7 Years of FBI Investigations

Let’s Encrypt ends TLS-SNI-01 validation support

Let’s Encrypt allows subscribers to validate domain control using any one of a few different validation methods. For much of the time Let’s Encrypt has been operating, the options were “DNS-01”, “HTTP-01”, and “TLS-SNI-01”. We recently introduced the “TLS-ALPN-01” method. Today we are announcing that we will end all support for the TLS-SNI-01 validation method on February 13, 2019.

In January of 2018 we disabled the TLS-SNI-01 domain validation method for most subscribers due to a vulnerability enabled by some shared hosting infrastructure. We provided temporary exceptions for renewals and for a small handful of hosting providers in order to smooth the transition to DNS-01 and HTTP-01 validation methods. Most subscribers are now using DNS-01 or HTTP-01.

If you’re still using TLS-SNI-01, please switch to one of the other validation methods as soon as possible. We will also attempt to contact subscribers who are still using TLS-SNI-01, if they provided contact information.
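
For subscribers making the switch, HTTP-01 validation amounts to serving a per-token key authorization over plain HTTP on port 80 at a well-known path; ACME clients such as Certbot handle this automatically. Purely as an illustration of what the CA checks, here is a minimal, hypothetical responder – the challenge directory and file layout are assumptions for the sketch, not part of the announcement.

```python
# Minimal, hypothetical HTTP-01 responder -- a sketch of the mechanism only.
# Real deployments normally let the ACME client (e.g. Certbot) do this.
# Assumes the ACME client has written each key authorization to a file named
# after its token in CHALLENGE_DIR (the path is an assumption for illustration).
import http.server
import os

CHALLENGE_DIR = "/var/lib/acme/challenges"       # hypothetical location
PREFIX = "/.well-known/acme-challenge/"          # path the CA requests

class ChallengeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.path.startswith(PREFIX):
            self.send_error(404)
            return
        token = os.path.basename(self.path[len(PREFIX):])   # avoid path traversal
        file_path = os.path.join(CHALLENGE_DIR, token)
        if not os.path.isfile(file_path):
            self.send_error(404)
            return
        with open(file_path, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Validation requests arrive over plain HTTP on port 80 (binding it needs privileges).
    http.server.HTTPServer(("", 80), ChallengeHandler).serve_forever()
```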

We apologize for any inconvenience but we believe this is the right thing to do for the integrity of the Web PKI.

https://community.letsencrypt.org/t/february-13-2019-end-of-life-for-all-tls-sni-01-validation-support/74209

Famous freak wave recreated in laboratory mirrors Hokusai’s ‘Great Wave’

A team of researchers based at the Universities of Oxford and Edinburgh have recreated for the first time the famous Draupner freak wave measured in the North Sea in 1995.

The Draupner wave was one of the first confirmed observations of a freak wave in the ocean; it was observed on the 1st of January 1995 in the North Sea by measurements made on the Draupner Oil Platform. Freak waves are unexpectedly large in comparison to surrounding waves. They are difficult to predict, often appearing suddenly without warning, and are commonly cited as probable causes of maritime catastrophes such as the sinking of large ships.

The team of researchers set out to reproduce the Draupner wave under laboratory conditions to understand how this freak wave was formed in the ocean. They successfully achieved this reconstruction by creating the wave using two smaller wave groups and varying the crossing angle – the angle at which the two groups travel.

Dr. Mark McAllister at the University of Oxford’s Department of Engineering Science said: “The measurement of the Draupner wave in 1995 was a seminal observation initiating many years of research into the physics of freak waves and shifting their standing from mere folklore to a credible real-world phenomenon. By recreating the Draupner wave in the lab we have moved one step closer to understanding the potential mechanisms of this phenomenon.”

It was the crossing angle between the two smaller groups that proved critical to the successful reconstruction. The researchers found it was only possible to reproduce the freak wave when the crossing angle between the two groups was approximately 120 degrees.

When waves are not crossing, wave breaking limits the height that a wave can achieve. However, when waves cross at large angles, wave breaking behaviour changes and no longer limits the height a wave can achieve in the same manner.
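
As a rough intuition for the setup – this is our own toy sketch, not the researchers’ model – two identical wave groups crossing at an angle simply add their crests at the crossing point, and the crossing angle sets the two propagation directions. The nonlinear breaking behaviour that actually matters at large angles is beyond this linear picture.

```python
# Toy illustration (not the study's wave model): linear superposition of two
# identical Gaussian wave groups whose propagation directions differ by a
# chosen crossing angle. At the crossing point the two crests simply add; the
# experiment's key result -- that breaking no longer caps the height at large
# crossing angles -- is a nonlinear effect this sketch ignores.
import numpy as np

def crossing_groups_elevation(x, y, t, crossing_angle_deg,
                              amplitude=1.0, wavelength=20.0,
                              group_width=40.0, phase_speed=5.0):
    """Linear surface elevation of two Gaussian wave groups crossing at an angle."""
    k = 2.0 * np.pi / wavelength          # wavenumber
    omega = k * phase_speed               # toy (non-dispersive) relation
    half = np.deg2rad(crossing_angle_deg) / 2.0
    eta = 0.0
    for s in (+1.0, -1.0):                # the two groups travel at +/- half the angle
        nx, ny = np.cos(s * half), np.sin(s * half)   # unit propagation direction
        along = nx * x + ny * y                        # distance along that direction
        envelope = np.exp(-((along - phase_speed * t) ** 2) / group_width ** 2)
        eta = eta + amplitude * envelope * np.cos(k * along - omega * t)
    return eta

# At the crossing point and the moment of focus, the crests add up:
print(crossing_groups_elevation(0.0, 0.0, 0.0, crossing_angle_deg=120.0))  # ~2x one group
```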

Prof Ton van den Bremer at the University of Oxford said: “Not only does this laboratory observation shed light on how the famous Draupner wave may have occurred, it also highlights the nature and significance of wave breaking in crossing sea conditions. The latter of these two findings has broad implications, illustrating previously unobserved wave breaking behaviour, which differs significantly from current state-of-the-art understanding of ocean wave breaking.”

To the researchers’ amazement, the wave they created bore an uncanny resemblance to ‘The Great Wave off Kanagawa’ – also known as ‘The Great Wave’ – a woodblock print published in the early 1800s by the Japanese artist Katsushika Hokusai. Hokusai’s image depicts an enormous wave threatening three fishing boats as it towers over Mount Fuji, which appears in the background. Hokusai’s wave is believed to depict a freak, or ‘rogue’, wave.

The laboratory-created freak wave also bears a strong resemblance to photographs of freak waves in the ocean.

The researchers hope that this study will lay the groundwork for being able to predict these potentially catastrophic and hugely damaging waves that occur suddenly in the ocean without warning.

Source: Famous freak wave recreated in laboratory mirrors Hokusai’s ‘Great Wave’

TAUS – machine matching for better AI translations

TAUS, the language data network, is an independent and neutral industry organization. We develop communities through a program of events and online user groups and by sharing knowledge, metrics and data that help all stakeholders in the translation industry develop a better service. We provide data services to buyers and providers of language and translation services.

The shared knowledge and data help TAUS members decide on effective localization strategies. The metrics support more efficient processes and the normalization of quality evaluation. The data lead to improved translation automation.

TAUS develops APIs that give members access to services like DQF, the DQF Dashboard and the TAUS Data Market through their own translation platforms and tools. TAUS metrics and data are already built into most of the major translation technologies.

Source: Mission – TAUS

Online casino group leaks information on 108 million bets, including winner personal details

An online casino group has leaked information on over 108 million bets, including details about customers’ personal information, deposits, and withdrawals, ZDNet has learned.

The data leaked from an ElasticSearch server that was left exposed online without a password, Justin Paine, the security researcher who discovered the server, told ZDNet.

ElasticSearch is a portable, high-grade search engine that companies install to improve their web apps’ data indexing and search capabilities. Such servers are usually installed on internal networks and are not meant to be left exposed online, as they usually handle a company’s most sensitive information.

Last week, Paine came across one such ElasticSearch instance that had been left unsecured online with no authentication to protect its sensitive content. From a first look, it was clear to Paine that the server contained data from an online betting portal.
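
To illustrate just how exposed such a server is (the host below is a placeholder, not the actual instance): an unauthenticated ElasticSearch node answers ordinary REST calls, so listing its indices and sampling a document takes only a few lines.

```python
# Illustrative sketch only (hypothetical host/port): an ElasticSearch server
# left open without authentication answers plain REST calls, so listing its
# indices -- and sampling documents -- requires nothing more than HTTP requests.
import json
import urllib.request

BASE = "http://203.0.113.10:9200"   # placeholder address, not the real server

def get_json(path):
    with urllib.request.urlopen(BASE + path, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Index names and document counts often reveal what the cluster holds.
for index in get_json("/_cat/indices?format=json"):
    print(index["index"], index["docs.count"])

# A single sample document is usually enough to see the kind of data exposed.
sample = get_json("/_search?size=1")
print(json.dumps(sample["hits"]["hits"], indent=2)[:500])
```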

Despite being a single server, the ElasticSearch instance handled a huge swathe of information aggregated from multiple web domains, most likely belonging to some sort of affiliate scheme or to a larger company operating multiple betting portals.

After an analysis of the URLs spotted in the server’s data, Paine and ZDNet concluded that all the domains were running online casinos where users could place bets on classic card and slot games, as well as other, less conventional betting games.

Some of the domains that Paine spotted in the leaky server included kahunacasino.com, azur-casino.com, easybet.com, and viproomcasino.net, just to name a few.

After some digging around, Paine found that some of the domains were owned by the same company, while others were owned by companies located in the same building at an address in Limassol, Cyprus, or were operating under the same eGaming license number issued by the government of Curacao (a small island in the Caribbean), suggesting that they were most likely all operated by the same entity.

The user data that leaked from this common ElasticSearch server included a lot of sensitive information, such as real names, home addresses, phone numbers, email addresses, birth dates, site usernames, account balances, IP addresses, browser and OS details, last login information, and a list of played games.

A very small portion of the redacted user data leaked by the server

Furthermore, Paine found roughly 108 million records containing information on current bets, wins, deposits, and withdrawals. Data on deposits and withdrawals also included payment card details.

A very small portion of the redacted transaction data leaked by the server

The good news is that the payment card details indexed in the ElasticSearch server were partially redacted, so they did not expose users’ full financial details.

The bad news is that anyone who found the database would have known the names, home addresses, and phone numbers of players who recently won large sums of money and could have used this information to target users as part of scams or extortion schemes.

ZDNet reached out by email to all the online portals whose data Paine identified in the leaky server. At the time of writing, we have not received a response from any of the support teams we contacted last week, but today the leaky server went offline and is no longer accessible.

Source: Online casino group leaks information on 108 million bets, including user details | ZDNet

Google fined $57 million by French data privacy body for hiding terms and forcing users to accept intrusion or lose access

Google has been hit with a €50 million ($57 million) fine by French data privacy body CNIL (the National Data Protection Commission) for failing to comply with the EU’s General Data Protection Regulation (GDPR).

The CNIL said that it was fining Google for “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization,” according to a press release issued by the organization. The news was first reported by the AFP.

[…]

The crux of the complaints leveled at Google is that it acted illegally by forcing users to accept intrusive terms or lose access to the service. This “forced consent,” it’s argued, runs contrary to the principles set out by the GDPR that users should be allowed to choose whether to allow companies to use their data. In other words, technology companies shouldn’t be allowed to adopt a “take it or leave it” approach to getting users to agree to privacy-intruding terms and conditions.

[…]

The watchdog found two core privacy violations. First, it observed that information about how Google processes data, how long it stores it, and the kinds of information it uses to personalize advertisements is not easy to access. It found that this information was “excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.”

So in effect, the CNIL said there was too much friction for users to find the information they need, requiring up to six separate actions to get to the information. And even when they find the information, it was “not always clear nor comprehensive.” The CNIL stated:

Users are not able to fully understand the extent of the processing operations carried out by Google. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner, and so are the categories of data processed for these various purposes.

Secondly, the CNIL said it found that Google does not “validly” obtain user consent for processing their data for ads personalization. Part of the problem, it said, is that consent is not collected through specific or unambiguous means: users have to click additional buttons to configure their consent, while too many boxes are pre-selected and require the user to opt out rather than opt in. Moreover, the CNIL said, Google doesn’t provide sufficiently granular controls for each data-processing operation.

As provided by the GDPR, consent is ‘unambiguous’ only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance).

What the CNIL is effectively referencing here is dark pattern design, which attempts to encourage users into accepting terms by guiding their choices through the design and layout of the interface. This is something that Facebook has often done too, as it has sought to garner user consent for new features or T&Cs.

Source: Google fined $57 million by French data privacy body | VentureBeat

Torrent Paradise Creates Decentralized ‘Pirate Bay’ With IPFS

The BitTorrent protocol has a decentralized nature but the ecosystem surrounding it has some weak spots. Torrent sites, for example, use centralized search engines which are prone to outages and takedowns. Torrent-Paradise tackles this problem with IPFS, creating a searchable torrent index that’s shared by the people.

IPFS, short for InterPlanetary File System, has been around for a few years now.

While the name sounds alien to most people, it has a growing userbase among the tech-savvy.

In short, IPFS is a decentralized network where users make files available to each other. If a website uses IPFS, it is served by a “swarm” of people, much as files are shared among BitTorrent users.

The advantage of this system is that websites can become completely decentralized. If a website or other resource is hosted with IPFS, it remains accessible as long as the computer of one user who “pinned” it remains online.
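
As a rough illustration of pinning, assuming a local IPFS daemon (such as go-ipfs/Kubo) with its HTTP API on the default port, here is a minimal sketch; the content identifier below is a placeholder, not a real CID.

```python
# Minimal sketch, assuming a local IPFS daemon (go-ipfs/Kubo) is running with
# its HTTP API on the default port 5001. The CID below is a placeholder; pin
# any content you want your node to keep serving to the swarm.
import json
import urllib.request

API = "http://127.0.0.1:5001/api/v0"
CID = "QmPlaceholderContentIdentifier"   # hypothetical content identifier

def api_post(path, query):
    # Kubo's HTTP API expects POST requests, even for read-only commands.
    req = urllib.request.Request(f"{API}{path}?{query}", method="POST")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")

# Pin the content: the node fetches it and keeps a copy, so it stays reachable
# as long as this node (or any other node pinning it) remains online.
print(api_post("/pin/add", f"arg={CID}"))

# List what this node is currently pinning.
print(json.loads(api_post("/pin/ls", "type=recursive")))
```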

The advantages of IPFS are clear. It allows archivists, content creators, researchers, and many others to distribute large volumes of data over the Internet. It’s censorship resistant and not vulnerable to regular hosting outages.

Source: Torrent Paradise Creates Decentralized ‘Pirate Bay’ With IPFS – TorrentFreak