A new 3D-printing technique could render a three-dimensional object in minutes instead of hours—at up to 100 times current speeds. The experimental approach uses a vat of resin and some clever tricks with UV and blue LED lights (no lasers needed) to accelerate the printing process.
The technique looks almost like a time-reversed film loop of an object dissolving in a reservoir of acid. But instead of acid, this reservoir contains a specially designed resin that hardens when exposed to a particular shade of blue light. Crucially, that hardening (the technical term is polymerization) does not take place in the presence of a certain wavelength of UV light.
The resin is also particularly absorbent at the wavelengths of both the blue and UV light. So the intensity of UV or blue light going in translates directly to the depth to which light will penetrate into the resin bath. The brighter the light beam, the further it penetrates and the further its effects (whether inhibiting polymerization in the case of UV light, or causing it in the case of blue light) will be felt in the bath along that particular light path.
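One simplified way to picture that relationship is standard Beer–Lambert absorption (a rough sketch, not the paper's own model): the beam decays exponentially with depth, so the depth at which it still exceeds the threshold needed to trigger, or inhibit, polymerization grows with the brightness of the incoming light.

I(z) = I_0 e^{-\alpha z}  \quad\Rightarrow\quad  z_{eff} = (1/\alpha) \, \ln(I_0 / I_{th})

Here \alpha is the resin's absorption coefficient at the relevant wavelength and I_{th} is the minimum intensity needed for the photochemical effect; a brighter beam (larger I_0) pushes the effective depth z_{eff} further into the bath.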
Timothy Scott, associate professor of chemical engineering at the University of Michigan, says the way to get a 3D-printed object out of this process is to send UV light through a glass-bottomed basin of resin. Then, at the same time, through that same glass window, send patterns of bright and dim blue light.
If this printing process used only the blue light, it would immediately harden the first bit of resin it encounters in the basin—the stuff just inside the glass. And so each successive layer of the object to be printed would need to be scraped or pulled off the window’s surface—a time-consuming and potentially destructive process.
Photo: Evan Dougherty/University of Michigan. A new way to print 3D objects uses two lights to solidify a resin, and can create complex shapes at 100 times the speed of conventional 3D printers.
“We use the [UV] wavelength to prevent the resin from polymerizing against the projection window,” Scott says. “But we can change the intensity of the inhibiting wavelength, that in turn can thicken up…the region that doesn’t polymerize. We can go to hundreds of microns comfortably, approaching or even exceeding a millimeter, so that’s getting quite thick. We can do that across not only the entire region of our bath, but we can do it selectively. By, again, patterning the intensity that we’re projecting into the vat.”
That is why the UV light, perhaps the key innovation of the new research, potentially streamlines the entire light-and-resin 3D-printing process, also called stereolithography.
To be clear, other 3D-stereolithography printing processes and even startup companies are out there in the world. What’s new with the Michigan group’s research (published in Science Advances earlier this month) is the UV light inhibitor that not only prevents the hardened resin from sticking to the window but also can be used in concert with the blue light to sculpt 3D surfaces and contours of hardened resin in the bath.
In a sense, Scott says, the new stereolithography process is really one of the very first truly 3D printing processes—in that it prints not just a series of single 2D layers but rather entire 3D wedges of material in one pass.
“That is straight-up unique, the ability to pattern a volume,” Scott says. “Patterning in 2D is easy, patterning in 3D is nontrivial.”
Now, we introduce our StarCraft II program AlphaStar, the first Artificial Intelligence to defeat a top professional player. In a series of test matches held on 19 December, AlphaStar decisively beat Team Liquid’s Grzegorz “MaNa” Komincz, one of the world’s strongest professional StarCraft players, 5-0, following a successful benchmark match against his team-mate Dario “TLO” Wünsch. The matches took place under professional match conditions on a competitive ladder map and without any game restrictions.
Although there have been significant successes in video games such as Atari, Mario, Quake III Arena Capture the Flag, and Dota 2, until now, AI techniques have struggled to cope with the complexity of StarCraft. The best results were made possible by hand-crafting major elements of the system, imposing significant restrictions on the game rules, giving systems superhuman capabilities, or by playing on simplified maps. Even with these modifications, no system has come anywhere close to rivalling the skill of professional players. In contrast, AlphaStar plays the full game of StarCraft II, using a deep neural network that is trained directly from raw game data by supervised learning and reinforcement learning.
HMRC’s database of Brits’ voiceprints has grown by 2 million since June – but campaign group Big Brother Watch has claimed success as 160,000 people turned the taxman’s requests down.
The Voice ID scheme, which requires taxpayers to say a key phrase that is recorded to create a digital signature, was introduced in January 2017. In the 18 months that followed, HMRC scooped up some 5.1 million people’s voiceprints this way.
Since then, another 2 million records have been collected, according to a Freedom of Information request from Big Brother Watch.
That is despite the group having challenged the lawfulness of the system in June 2018, arguing that users hadn’t been given enough information on the scheme, how to opt in or out, or details on when or how their data would be deleted.
Under the GDPR, there are certain demands on organisations that process biometric data. These require a person to give “explicit consent” that is “freely given, specific, informed and unambiguous”.
Off the back of the complaint, the Information Commissioner’s Office launched an investigation, and Big Brother Watch said the body would soon announce what action it will take.
Meanwhile, HMRC has rejigged the recording so it offers callers a clear way to opt out of the scheme – previously, as perm sec Jon Thompson admitted in September, it was not clear how users could do this.
Big Brother Watch said that this, and the publicity around the VoiceID scheme, has led to a “backlash” as people call on HMRC to delete their Voice IDs. FoI responses show 162,185 people have done so to date.
“It is a great success for us that HMRC has finally allowed taxpayers to delete their voiceprints and that so many thousands of people are reclaiming their rights by getting their Voice IDs deleted,” said the group’s director, Silkie Carlo.
In a demonstration of “computational periscopy”, a US team at Boston University showed they could see details of objects hidden from view by analysing the shadows they cast on a nearby wall.
Vivek Goyal, an electrical engineer at the university, said that while the work had clear implications for surveillance he hoped it would lead to robots that could navigate better and boost the safety of driverless cars.
He said: “I’m not especially excited by surveillance, I don’t want to be doing creepy things, but being able to see that there’s a child on the other side of a parked car, or see a little bit around the corner of an intersection could have a significant impact on safety.”
The problem of how to see round corners has occupied modern researchers for at least a decade. And while scientists have made good progress in the field, the equipment used so far has been highly specialised and expensive.
In the latest feat, Goyal and his team used a standard digital camera and a mid-range laptop. “We didn’t use any sophisticated hardware. This is just an ordinary camera and we are all carrying these around in our pockets,” he said.
The researchers, writing in the journal Nature, describe how they pieced together hidden scenes by pointing the digital camera at the vague shadows they cast on a nearby wall. If the wall had been a mirror the task would have been easy, but a matt wall scatters light in all directions, so the reflected image is nothing but a blur. Goyal said: “In essence, computation can turn a matt wall into a mirror.”
They found that when an object blocked part of the hidden scene, their algorithms could use the combination of light and shade at different points on the wall to reconstruct what lay round the corner. In tests, the program pieced together hidden images of video game characters – including details such as their eyes and mouths – along with coloured strips and the letters “BU”.
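At a very high level (and glossing over the actual physics and calibration in the Nature paper), this kind of reconstruction can be framed as a linear inverse problem: the blurry pattern on the wall is modelled as a known light-transport operator applied to the unknown hidden scene, plus noise, and the algorithm inverts that operator with some regularization. A toy Python sketch of that idea, with a made-up transport matrix standing in for the real occluder model:

import numpy as np

# Toy "computational periscopy" reconstruction: the wall image b is modelled as
# A @ x, where x is the (flattened) hidden scene and A is a light-transport
# matrix set by the scene/occluder/wall geometry. Here A is random, purely as a
# placeholder for the physically derived operator used in the real method.
rng = np.random.default_rng(0)

n_scene = 16 * 16   # hidden scene, 16x16 pixels (flattened)
n_wall = 32 * 32    # observed wall patch, 32x32 pixels (flattened)

A = rng.random((n_wall, n_scene))                      # placeholder transport matrix
x_true = rng.random(n_scene)                           # unknown hidden scene
b = A @ x_true + 0.01 * rng.standard_normal(n_wall)    # noisy wall measurement

# Tikhonov-regularized least squares: argmin_x ||A x - b||^2 + lam ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ b)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))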
Given the relative simplicity of the program and equipment, Goyal believes it could be possible for humans to learn the same trick. In a draft blog written for Nature, he said: “It is even conceivable for humans to be able to learn to see around corners with their own eyes; it does not require anything superhuman.”
If you bled when you brushed your teeth this morning, you might want to get that seen to. We may finally have found the long-elusive cause of Alzheimer’s disease: Porphyromonas gingivalis, the key bacteria in chronic gum disease.
That’s bad, as gum disease affects around a third of all people. But the good news is that a drug that blocks the main toxins of P. gingivalis is entering major clinical trials this year, and research published today shows it might stop and even reverse Alzheimer’s. There could even be a vaccine.
Alzheimer’s is one of the biggest mysteries in medicine. As populations have aged, dementia has skyrocketed to become the fifth biggest cause of death worldwide. Alzheimer’s constitutes some 70 per cent of these cases, and yet we don’t know what causes it.
Bacteria in the brain
The disease often involves the accumulation of proteins called amyloid and tau in the brain, and the leading hypothesis has been that the disease arises from defective control of these two proteins.
Bacteria involved in gum disease and other illnesses have been found after death in the brains of people who had Alzheimer’s, but until now, it hasn’t been clear whether these bacteria caused the disease or simply got in via brain damage caused by the condition.
Gum disease link
Multiple research teams have been investigating P. gingivalis, and have so far found that it invades and inflames brain regions affected by Alzheimer’s; that gum infections can worsen symptoms in mice genetically engineered to have Alzheimer’s; and that it can cause Alzheimer’s-like brain inflammation, neural damage, and amyloid plaques in healthy mice.
“When science converges from multiple independent laboratories like this, it is very compelling,” says Casey Lynch of Cortexyme, a pharmaceutical firm in San Francisco, California.
In a new study, Cortexyme have now reported finding the toxic enzymes – called gingipains – that P. gingivalis uses to feed on human tissue in 96 per cent of the 54 Alzheimer’s brain samples they looked at, and found the bacteria themselves in all three Alzheimer’s brains whose DNA they examined.
The bacteria and their enzymes were found at higher levels in those who had experienced worse cognitive decline and had more amyloid and tau accumulations. The team also found the bacteria in the spinal fluid of living people with Alzheimer’s, suggesting that this technique may provide a long-sought-after method of diagnosing the disease.
The Victoria Police are the primary law enforcement agency of Victoria, Australia. With over 16,000 vehicles stolen in Victoria this past year — at a cost of about $170 million — the police department is experimenting with a variety of technology-driven solutions to crack down on car theft. They call this system BlueNet.
To help prevent fraudulent sales of stolen vehicles, there is already a VicRoads web-based service for checking the status of vehicle registrations. The department has also invested in a stationary license plate scanner — a fixed tripod camera which scans passing traffic to automatically identify stolen vehicles.
Don’t ask me why, but one afternoon I had the desire to prototype a vehicle-mounted license plate scanner that would automatically notify you if a vehicle had been stolen or was unregistered. Understanding that these individual components existed, I wondered how difficult it would be to wire them together.
But after a bit of googling I discovered that the Victoria Police had recently trialled a similar device, and the estimated cost of rollout was somewhere in the vicinity of $86,000,000. One astute commenter pointed out that an $86M cost to fit out 220 vehicles comes in at a rather thirsty $390,909 per vehicle.
Surely we can do a bit better than that.
Existing stationary license plate recognition systems
The Success Criteria
Before getting started, I outlined a few key requirements for product design.
Requirement #1: The image processing must be performed locally
Streaming live video to a central processing warehouse seemed the least efficient approach to solving this problem. Besides the whopping bill for data traffic, you’re also introducing network latency into a process which may already be quite slow.
Although a centralized machine learning algorithm is only going to get more accurate over time, I wanted to learn if a local, on-device implementation would be “good enough”.
Requirement #2: It must work with low quality images
Since I don’t have a Raspberry Pi camera or USB webcam, I’ll be using dashcam footage — it’s readily available and an ideal source of sample data. As an added bonus, dashcam video is representative of the overall quality of footage you’d expect from vehicle-mounted cameras.
Requirement #3: It needs to be built using open source technology
Relying upon proprietary software means you’ll get stung every time you request a change or enhancement — and the stinging will continue for every request made thereafter. Using open source technology is a no-brainer.
My solution
At a high level, my solution takes an image from a dashcam video, pumps it through an open source license plate recognition system installed locally on the device, queries the registration check service, and then returns the results for display.
The data returned to the device installed in the law enforcement vehicle includes the vehicle’s make and model (which it only uses to verify whether the plates have been stolen), the registration status, and any notifications of the vehicle being reported stolen.
If that sounds rather simple, it’s because it really is. For example, the image processing can all be handled by the openalpr library.
This is really all that’s involved to recognize the characters on a license plate:
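A minimal sketch of that step using OpenALPR’s Python bindings (the “au” plate configuration and the config and runtime-data paths shown are the library’s usual defaults and may differ per install):

from openalpr import Alpr

# Load OpenALPR with its Australian plate configuration; adjust paths to your install.
alpr = Alpr("au", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

# Run recognition on a single frame grabbed from the dashcam footage.
results = alpr.recognize_file("dashcam_frame.jpg")

for plate in results["results"]:
    # Each detection carries a best guess plus a confidence score (0-100).
    print(plate["plate"], plate["confidence"])

alpr.unload()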
A Minor Caveat
Public access to the VicRoads APIs is not available, so for this prototype the license plate checks are done via web scraping. That’s generally frowned upon, but this is a proof of concept and I’m not slamming anyone’s servers.
Here’s what the dirtiness of my proof-of-concept scraping looks like:
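A hypothetical sketch of that kind of scrape (the endpoint, form field, and CSS class below are invented placeholders for illustration, not the real VicRoads service):

import requests
from bs4 import BeautifulSoup

# Hypothetical example only: placeholder endpoint and markup, not the real service.
REGO_CHECK_URL = "https://example.gov.au/rego-check"

def check_registration(plate: str) -> str:
    response = requests.post(REGO_CHECK_URL, data={"plate": plate}, timeout=10)
    response.raise_for_status()

    # Scrape the status text out of the returned HTML.
    soup = BeautifulSoup(response.text, "html.parser")
    status = soup.find("div", class_="registration-status")  # placeholder selector
    return status.get_text(strip=True) if status else "UNKNOWN"

print(check_registration("ABC123"))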
Results
I must say I was pleasantly surprised.
I expected the open source license plate recognition to be pretty rubbish. Additionally, the image recognition algorithms are probably not optimised for Australian license plates.
The solution was able to recognise license plates in a wide field of view.
Annotations added for effect. Number plate identified despite reflections and lens distortion.
Although, the solution would occasionally have issues with particular letters.
Incorrect reading of plate, mistook the M for an H
But … the solution would eventually get them correct.
A few frames later, the M is correctly identified and at a higher confidence rating
As you can see in the above two images, processing the image a couple of frames later jumped from a confidence rating of 87% to a hair over 91%.
I’m confident, pardon the pun, that the accuracy could be improved by increasing the sample rate, and then sorting by the highest confidence rating. Alternatively a threshold could be set that only accepts a confidence of greater than 90% before going on to validate the registration number.
Those are very straightforward code-first fixes, and they don’t preclude training the license plate recognition software on a local data set.
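As a rough illustration of that threshold-and-sort idea (not code from the original post; the plate strings and scores are made up):

# Illustrative only: keep the most trustworthy reading from a burst of frames,
# ignoring anything below a confidence threshold.
CONFIDENCE_THRESHOLD = 90.0

def best_plate(frame_results):
    """frame_results: list of (plate_text, confidence) tuples, one per processed frame."""
    confident = [r for r in frame_results if r[1] >= CONFIDENCE_THRESHOLD]
    if not confident:
        return None  # nothing trustworthy enough to send on to the rego check
    return max(confident, key=lambda r: r[1])

readings = [("HS337K", 87.2), ("MS337K", 91.1), ("MS337K", 90.3)]
print(best_plate(readings))  # -> ("MS337K", 91.1)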
The $86,000,000 Question
To be fair, I have absolutely no clue what the $86M figure includes — nor can I speak to the accuracy of my open source tool with no localized training vs. the pilot BlueNet system.
I would expect part of that budget includes the replacement of several legacy databases and software applications to support the high frequency, low latency querying of license plates several times per second, per vehicle.
On the other hand, the cost of ~$391k per vehicle seems pretty rich — especially if the BlueNet isn’t particularly accurate and there are no large scale IT projects to decommission or upgrade dependent systems.
Future Applications
While it’s easy to get caught up in the Orwellian nature of an “always on” network of license plate snitchers, there are many positive applications of this technology. Imagine a passive system scanning fellow motorists for an abductor’s car that automatically alerts authorities and family members to its current location and direction.
Tesla vehicles are already brimming with cameras and sensors and can receive OTA updates — imagine turning these into a fleet of virtual good Samaritans. Uber and Lyft drivers could also be outfitted with these devices to dramatically increase the coverage area.
Using open source technology and existing components, it seems possible to offer a solution that provides a much higher rate of return — for an investment much less than $86M.
The popular payment app Tikkie offers the option of transferring money to other Tikkie users based on their mobile (06) number. As a result, it was possible to retrieve the IBANs of many unsuspecting Tikkie users, creating a risk of identity fraud and phishing.
This was revealed by an investigation by RTL Nieuws. ABN Amro has confirmed the vulnerability and temporarily taken the new feature, Tikkie Pay, offline. “Thanks for the vigilance,” said a spokesperson.
IBAN numbers
Tikkie, which has 4 million users, used its new feature to show everyone in your contact list who had linked their mobile number to Tikkie. You could tap a name, start a transfer of some amount, and cancel the Tikkie just before the payment went through. The description of the transfer would then show the recipient’s IBAN, without that person knowing anything about it.
Why replace your things just because they’re not state-of-the-art? Smartians are cloud-connected motors that breathe new life into the things around you.
The Debian Project has patched a security flaw in its software manager Apt that can be exploited by network snoops to execute commands as root on victims’ boxes as they update or install packages.
The Linux distro’s curators have pushed out a fix to address CVE-2019-3462, a vulnerability uncovered and reported by researcher Max Justicz.
The flaw is related to the way Apt and apt-get handle HTTP redirects when downloading packages. Apt fetches packages over plain-old HTTP, rather than a more secure HTTPS connection, and uses cryptographic signatures to check whether the downloaded contents are legit and haven’t been tampered with.
This unfortunately means a man-in-the-middle (MITM) miscreant who was able to intercept and tamper with a victim’s network connection could potentially inject a redirect into the HTTP headers to change the URL used to fetch the package.
And the hacker would be able to control the hashes used by Apt to check the downloaded package, passing the package manager legit values to masquerade the fetched malware as sanctioned software.
All in all, users can be fed malware that’s run as root during installation, allowing it to commandeer the machine.
[…]
As an added wrinkle, Apt is updated by Apt itself. And seeing as the update mechanism is insecure, folks need to take extra steps to install the security fix securely. Admins will want to first disable redirects (see below) and then go through the usual apt update and upgrade steps.
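The workaround documented in Debian’s advisory at the time was along these lines: fetch and apply the fix with redirect handling switched off.

apt -o Acquire::http::AllowRedirect=false update
apt -o Acquire::http::AllowRedirect=false upgrade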
Google engineers have proposed changes to the open-source Chromium browser that will break content-blocking extensions, including various ad blockers.
Adblock Plus will most likely not be affected, though similar third-party plugins will, for reasons we will explain. The drafted changes will also limit the capabilities available to extension developers, ostensibly for the sake of speed and safety. Chromium forms the central core of Google Chrome, and, soon, Microsoft Edge.
In a note posted Tuesday to the Chromium bug tracker, Raymond Hill, the developer behind uBlock Origin and uMatrix, said the changes contemplated by the Manifest v3 proposal will ruin his ad and content blocking extensions, and take control of content away from users.
Content blockers may be used to hide or black-hole ads, but they have broader applications. They’re predicated on the notion that users, rather than anyone else, should be able to control how their browser presents and interacts with remote resources.
Manifest v3 refers to the specification for browser extension manifest files, which enumerate the resources and capabilities available to browser extensions. Google’s stated rationale for making the proposed changes, cutting off blocking plugins, is to improve security, privacy and performance, and supposedly to enhance user control.
Supermarkets create cheap “magic boxes” filled with food that is nearing its expiry date. You can see where to pick them up in the app. Jumbo NL has started a pilot in 13 shops.
Too Good To Go, an initiative that originated in Denmark, has saved more than 200,000 meals from the bin after one year in the Netherlands. The app of the same name now has more than 250,000 registered users and over 1,000 partners, with coverage in every Dutch province.
On the map or in the list in the app, consumers can see which locations have something tasty waiting for them around closing time. They then order and pay directly in the app.
Yesterday, Jumbo started a pilot with Too Good To Go in 13 stores. The pilot will run for a month and is the first step towards a possible nationwide rollout.
In the Too Good To Go app, users can see which Jumbo stores are offering a Magic Box. They pay for it in the app and can pick up the surprise box in the store within an agreed time slot. The price is always a third of the actual value: a box worth 15 euros costs just 5 euros.
The participants in the pilot are eleven stores in Amsterdam – including the City stores – plus Foodmarkt Amsterdam and a City store in Groningen.
Stores decide for themselves how to put the box together, with availability and variety as the main criteria.
As of today, the city of Wageningen has also been added to the app as a location. To measure the impact of the Too Good To Go app on consumer behaviour, and to determine what the next pieces of the puzzle should be, Too Good To Go is starting a study with Wageningen University & Research into changes in awareness and behaviour around food waste.
Too Good To Go is already active in nine European countries.
Last December, a whopping 3 terabytes of unprotected data from the Oklahoma Securities Commission was uncovered by Greg Pollock, a researcher with cybersecurity firm UpGuard. It amounted to millions of files, many on sensitive FBI investigations, all of which were left wide open on a server with no password, accessible to anyone with an internet connection, Forbes can reveal.
“It represents a compromise of the entire integrity of the Oklahoma department of securities’ network,” said Chris Vickery, head of research at UpGuard, which is revealing its technical findings on Wednesday. “It affects an entire state level agency. … It’s massively noteworthy.”
A breach back to the ’80s
The Oklahoma department regulates all financial securities business happening in the state. It may be little surprise there was leaked information on FBI cases. But the amount and variety of data astonished Vickery and Pollock.
Vickery said the FBI files contained “all sorts of archive enforcement actions” dating back seven years (the earliest file creation date was 2012). The documents included spreadsheets with agent-filled timelines of interviews related to investigations, emails from parties involved in myriad cases and bank transaction histories. There were also copies of letters from subjects, witnesses and other parties involved in FBI investigations.
[…]
Just as concerning, the leak also included email archives stretching back 17 years, thousands of social security numbers and data from the 1980s onwards.
[…]
After Vickery and Pollock disclosed the breach, they informed the commission it had mistakenly left open what’s known as an rsync server. Such servers are typically used to back up large batches of data and, if that information is supposed to be secure, should be protected by a username and password.
There were other signs of poor security within the leaked data. For instance, passwords for computers on the Oklahoma government’s network were also revealed. They were “not complicated,” quipped Chris Vickery, head of research on the UpGuard team. In one of the more absurd choices made by the department, it had stored an encrypted version of one document in the same file folder as a decrypted version. Passwords for remote access to agency computers were also leaked.
This is the latest in a series of incidents involving rsync servers. In December, UpGuard revealed that Level One Robotics, a car manufacturing supply chain company, was exposing information in the same way as the Oklahoma government division. Companies with data exposed in that event included Volkswagen, Chrysler, Ford, Toyota, General Motors and Tesla.
For whatever reason, governments and corporate giants alike still aren’t aware how easy it is for hackers to constantly scan the Web for such leaks. Starting with basics like passwords would help them keep their secrets secure.
Let’s Encrypt allows subscribers to validate domain control using any one of a few different validation methods. For much of the time Let’s Encrypt has been operating, the options were “DNS-01”, “HTTP-01”, and “TLS-SNI-01”. We recently introduced the “TLS-ALPN-01” method. Today we are announcing that we will end all support for the TLS-SNI-01 validation method on February 13, 2019.
In January of 2018 we disabled the TLS-SNI-01 domain validation method for most subscribers due to a vulnerability enabled by some shared hosting infrastructure.
We provided temporary exceptions for renewals and for a small handful of hosting providers in order to smooth the transition to DNS-01 and HTTP-01 validation methods. Most subscribers are now using DNS-01 or HTTP-01.
If you’re still using TLS-SNI-01, please switch to one of the other validation methods as soon as possible. We will also attempt to contact subscribers who are still using TLS-SNI-01, if they provided contact information.
We apologize for any inconvenience but we believe this is the right thing to do for the integrity of the Web PKI.
A team of researchers based at the Universities of Oxford and Edinburgh have recreated for the first time the famous Draupner freak wave measured in the North Sea in 1995.
The Draupner wave was one of the first confirmed observations of a freak wave in the ocean; it was measured on 1 January 1995 at the Draupner oil platform in the North Sea. Freak waves are unexpectedly large in comparison to surrounding waves. They are difficult to predict, often appearing suddenly without warning, and are commonly cited as probable causes of maritime catastrophes such as the sinking of large ships.
The team of researchers set out to reproduce the Draupner wave under laboratory conditions to understand how this freak wave was formed in the ocean. They successfully achieved this reconstruction by creating the wave using two smaller wave groups and varying the crossing angle – the angle at which the two groups travel.
Dr. Mark McAllister at the University of Oxford’s Department of Engineering Science said: “The measurement of the Draupner wave in 1995 was a seminal observation initiating many years of research into the physics of freak waves and shifting their standing from mere folklore to a credible real-world phenomenon. By recreating the Draupner wave in the lab we have moved one step closer to understanding the potential mechanisms of this phenomenon.”
It was the crossing angle between the two smaller groups that proved critical to the successful reconstruction. The researchers found it was only possible to reproduce the freak wave when the crossing angle between the two groups was approximately 120 degrees.
When waves are not crossing, wave breaking limits the height that a wave can achieve. However, when waves cross at large angles, wave breaking behaviour changes and no longer limits the height a wave can achieve in the same manner.
Prof Ton van den Bremer at the University of Oxford said: “Not only does this laboratory observation shed light on how the famous Draupner wave may have occurred, it also highlights the nature and significance of wave breaking in crossing sea conditions. The latter of these two findings has broad implications, illustrating previously unobserved wave breaking behaviour, which differs significantly from current state-of-the-art understanding of ocean wave breaking.”
To the researchers’ amazement, the wave they created bore an uncanny resemblance to “The Great Wave off Kanagawa” – also known as “The Great Wave” – a woodblock print published in the early 1800s by the Japanese artist Katsushika Hokusai. Hokusai’s image depicts an enormous wave threatening three fishing boats as it towers over Mount Fuji, which appears in the background. Hokusai’s wave is believed to depict a freak, or “rogue”, wave.
The laboratory-created freak wave also bears a strong resemblance to photographs of freak waves in the ocean.
The researchers hope that this study will lay the groundwork for being able to predict these potentially catastrophic and hugely damaging waves that occur suddenly in the ocean without warning.
TAUS, the language data network, is an independent and neutral industry organization. We develop communities through a program of events and online user groups and by sharing knowledge, metrics and data that help all stakeholders in the translation industry develop a better service. We provide data services to buyers and providers of language and translation services.
The shared knowledge and data help TAUS members decide on effective localization strategies. The metrics support more efficient processes and the normalization of quality evaluation. The data lead to improved translation automation.
TAUS develops APIs that give members access to services like DQF, the DQF Dashboard and the TAUS Data Market through their own translation platforms and tools. TAUS metrics and data are already built into most of the major translation technologies.
An online casino group has leaked information on over 108 million bets, including details about customers’ personal information, deposits, and withdrawals, ZDNet has learned.
The data leaked from an ElasticSearch server that was left exposed online without a password, Justin Paine, the security researcher who discovered the server, told ZDNet.
ElasticSearch is a portable, high-grade search engine that companies install to improve their web apps’ data indexing and search capabilities. Such servers are usually installed on internal networks and are not meant to be left exposed online, as they usually handle a company’s most sensitive information.
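To see why an exposed instance is such a problem: an ElasticSearch server reachable without authentication answers its standard REST API to anyone, so listing its indices and sampling documents takes only a couple of HTTP requests. A rough Python sketch against a placeholder host:

import requests

# Placeholder host: an exposed ElasticSearch node typically answers on port 9200
# with no authentication at all.
BASE = "http://exposed-server.example:9200"

# List every index on the node, with document counts and on-disk sizes.
print(requests.get(f"{BASE}/_cat/indices?v", timeout=10).text)

# Sample a few documents from one index (the name is a placeholder) to see what it holds.
resp = requests.get(f"{BASE}/some-index/_search", params={"size": 5}, timeout=10)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])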
Last week, Paine came across one such ElasticSearch instance that had been left unsecured online with no authentication to protect its sensitive content. From a first look, it was clear to Paine that the server contained data from an online betting portal.
Despite being one server, the ElasticSearch instance handled a huge swathe of information that was aggregated from multiple web domains, most likely from some sort of affiliate scheme, or a larger company operating multiple betting portals.
After an analysis of the URLs spotted in the server’s data, Paine and ZDNet concluded that all the domains were running online casinos where users could place bets on classic card and slot games, as well as other, non-standard betting games.
Some of the domains that Paine spotted in the leaky server included kahunacasino.com, azur-casino.com, easybet.com, and viproomcasino.net, just to name a few.
After some digging around, Paine found that some of the domains were owned by the same company, while others were owned by companies located in the same building at an address in Limassol, Cyprus, or were operating under the same eGaming license number issued by the government of Curacao – a small island in the Caribbean – suggesting that they were most likely operated by the same entity.
The user data that leaked from this common ElasticSearch server included a lot of sensitive information, such as real names, home addresses, phone numbers, email addresses, birth dates, site usernames, account balances, IP addresses, browser and OS details, last login information, and a list of played games.
A very small portion of the redacted user data leaked by the server
Furthermore, Paine also found roughly 108 million records containing information on current bets, wins, deposits, and withdrawals. Data on deposits and withdrawals also included payment card details.
A very small portion of the redacted transaction data leaked by the server
The good news is that the payment card details indexed in the ElasticSearch server were partially redacted, and they were not exposing the user’s full financial details.
The bad news is that anyone who found the database would have known the names, home addresses, and phone numbers of players who recently won large sums of money and could have used this information to target users as part of scams or extortion schemes.
ZDNet reached out with emails to all the online portals whose data Paine identified in the leaky server. At the time of writing, we have not received any response from any of the support teams we contacted last week, but today, the leaky server went offline and is not accessible anymore.
Google has been hit with a €50 million ($57 million) fine by French data privacy body CNIL (National Data Protection Commission) for failing to comply with the EU’s General Data Protection Regulation (GDPR).
The CNIL said that it was fining Google for “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization,” according to a press release issued by the organization. The news was first reported by the AFP.
[…]
The crux of the complaints leveled at Google is that it acted illegally by forcing users to accept intrusive terms or lose access to the service. This “forced consent,” it’s argued, runs contrary to the principles set out by the GDPR that users should be allowed to choose whether to allow companies to use their data. In other words, technology companies shouldn’t be allowed to adopt a “take it or leave it” approach to getting users to agree to privacy-intruding terms and conditions.
[…]
The watchdog found two core privacy violations. First, it observed that information about how Google processes data, how long it stores it, and the kinds of information it uses to personalize advertisements is not easy to access. It found that this information was “excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.”
So in effect, the CNIL said there was too much friction for users to find the information they need, requiring up to six separate actions to get to the information. And even when they find the information, it was “not always clear nor comprehensive.” The CNIL stated:
Users are not able to fully understand the extent of the processing operations carried out by Google. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner, and so are the categories of data processed for these various purposes.
Secondly, the CNIL said that it found that Google does not “validly” gain user consent for processing their data to use in ads personalization. Part of the problem, it said, is that the consent it collects is not done so through specific or unambiguous means — the options involve users having to click additional buttons to configure their consent, while too many boxes are pre-selected and require the user to opt out rather than opt in. Moreover, Google, the CNIL said, doesn’t provide enough granular controls for each data-processing operation.
As provided by the GDPR, consent is ‘unambiguous’ only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance).
The BitTorrent protocol has a decentralized nature, but the ecosystem surrounding it has some weak spots. Torrent sites, for example, use centralized search engines, which are prone to outages and takedowns. Torrent-Paradise tackles this problem with IPFS, offering a searchable torrent index that’s shared by the people.
IPFS, short for InterPlanetary File System, has been around for a few years now.
While the name sounds alien to most people, it has a growing userbase among the tech-savvy.
In short, IPFS is a decentralized network where users make files available among each other. If a website uses IPFS, it is served by a “swarm” of people, much like BitTorrent users do when a file is shared.
The advantage of this system is that websites can become completely decentralized. If a website or other resource is hosted with IPFS, it remains accessible as long as the computer of one user who “pinned” it remains online.
The advantages of IPFS are clear. It allows archivists, content creators, researchers, and many others to distribute large volumes of data over the Internet. It’s censorship resistant and not vulnerable to regular hosting outages.
One day, hospital patients might be able to ingest tiny robots that deliver drugs directly to diseased tissue, thanks to research being carried out at EPFL and ETH Zurich.
A group of scientists led by Selman Sakar at EPFL and Bradley Nelson at ETH Zurich drew inspiration from bacteria to design smart, highly flexible biocompatible micro-robots. Because these devices are able to swim through fluids and modify their shape when needed, they can pass through narrow blood vessels and intricate systems without compromising on speed or maneuverability. They are made of hydrogel nanocomposites that contain magnetic nanoparticles, allowing them to be controlled via an electromagnetic field.
In an article appearing in Science Advances, the scientists describe a method for programming the robot’s shape so that it can easily travel through fluids that are dense, viscous or moving at rapid speeds.
Embodied intelligence
Fabricating miniaturized robots presents a host of challenges, which the scientists addressed using an origami-based folding method. Their novel locomotion strategy employs embodied intelligence, which is an alternative to the classical computation paradigm that is performed by embedded electronic systems. “Our robots have a special composition and structure that allows them to adapt to the characteristics of the fluid they are moving through. For instance, if they encounter a change in viscosity or osmotic concentration, they modify their shape to maintain their speed and maneuverability without losing control of the direction of motion,” says Sakar.
WPML (or WP MultiLingual) is the most popular WordPress plugin for translating and serving WordPress sites in multiple languages.
According to its website, WPML has over 600,000 paying customers and is one of the very few WordPress plugins that is so reputable that it doesn’t need to advertise itself with a free version on the official WordPress.org plugins repository.
But on Saturday, ET timezone, the plugin faced its first major security incident since its launch in 2007.
The attacker, who the WPML team claims is a former employee, sent out a mass email to all the plugin’s customers. In the email, the attacker claimed he was a security researcher who had reported several vulnerabilities to the WPML team, only to be ignored. The email urged customers to check their sites for possible compromises.
But the WPML team vehemently disputed these claims. Both on Twitter and in a follow-up mass email, the WPML team said the hacker is a former employee who left a backdoor on its official website and used it to gain access to its server and its customer database.
WPML claims the hacker used the email addresses and customer names he took from the website’s database to send the mass email, but that he also used the backdoor to deface its website, leaving the email’s text as a blog post on its site.
The developers said the former employee didn’t get access to financial information, as they don’t store that kind of data, but they didn’t rule out that he could now log into customers’ WPML.org accounts as a result of compromising the site’s database.
The company says it’s now rebuilding its server from scratch to remove the backdoor and resetting all customer account passwords as a precaution.
The WPML team also said the hacker didn’t gain access to the source code of its official plugin and did not push a malicious version to customers’ sites.
The company and its management weren’t available for additional questions regarding the incident. At the time of writing, it is unclear whether they have reported the former employee to the authorities. If the company’s claim is true, there is little chance of the former employee escaping jail time.
One of the successful projects will see albatrosses and petrels benefit from further research using ‘bird-borne’ radar devices. Developed by scientists at the British Antarctic Survey (BAS), the attached radars will measure how often tracked wandering albatrosses interact with legal and illegal fishing vessels in the south Atlantic to map the areas and times when birds of different age and sex are most susceptible to bycatch – becoming caught up in fishing nets.
The project’s results will be shared with stakeholders to better target bycatch observer programmes, monitor compliance with bycatch mitigation and highlight the impact of bycatch on seabirds.
The UK is a signatory to the Agreement on the Conservation of Albatrosses and Petrels (ACAP), part of the Convention on Migratory Species of Wild Animals (CMS). This agreement has been extremely successful in substantially reducing levels of seabird bycatch in a number of important fisheries where rates have been reduced to virtually zero from levels that were historically concerning.
Professor Richard Phillips, leader of the Higher Predators and Conservation group at the British Antarctic Survey (BAS) said:
The British Antarctic Survey is delighted to be awarded this funding from Darwin Plus, which is for a collaboration between BAS and BirdLife International. The project will use a range of technologies – GPS, loggers that record 3-D acceleration and novel radar-detecting tags – to quantify interactions of tracked wandering albatrosses with legal and illegal fishing vessels. The technology will provide much-needed information on the areas and periods of highest bycatch.
Copyright activists just scored a major victory in the ongoing fight over the European Union’s new copyright rules. An upcoming summit to advance the EU’s copyright directive has been canceled, as member states objected to the incoming rules as too restrictive to online creators.
The EU’s forthcoming copyright rules had drawn attention from activists for two measures, designated as Article 11 and Article 13, that would give publishers rights over snippets of news content shared online (the so-called “link tax”) and increase platform liability for user content. Concerns about those two articles led to the initial proposal being voted down by the European parliament in July, but a version with new safeguards was approved the following September. Until recently, experts expected the resulting proposal to be approved by plenary vote in the coming months.
After today, the directive’s future is much less certain. Member states were gathered to approve a new version of the directive drafted by Romania — but eleven countries reportedly opposed the text, many of them citing familiar concerns over the two controversial articles. Crucially, Italy’s new populist government takes a far more skeptical view of the strict copyright proposals. Member states have until the end of February to approve a new version of the text, although it’s unclear what compromise might be reached.
Whatever rules the European Union adopts will have a profound impact on companies doing business online. In particular, Article 13 could greatly expand the legal risks of hosting user content, putting services like Facebook and YouTube in a difficult position. As Cory Doctorow described it to The Verge, “this is just ContentID on steroids, for everything.”
More broadly, Article 13 would expand platforms’ liability for user-uploaded content. “If you’re a platform, then you are liable for the material which appears on your platform,” said professor Martin Kretschmer, who teaches intellectual property law at the University of Glasgow. “That’s the council position as of May, and that has huge problems.”
“Changing the copyright regime without really understanding where the problem is is foolish,” he continued.
Still, today’s vote suggests the ongoing activism against the proposals is having an effect. “Public attention to the copyright reform is having an effect,” wrote Pirate Party representative Julia Reda in a blog post. “Keeping up the pressure in the coming weeks will be more important than ever to make sure that the most dangerous elements of the new copyright proposal will be rejected.”
[…] today most of us have indoor jobs, and when we do go outside, we’ve been taught to protect ourselves from dangerous UV rays, which can cause skin cancer. Sunscreen also blocks our skin from making vitamin D, but that’s OK, says the American Academy of Dermatology, which takes a zero-tolerance stance on sun exposure: “You need to protect your skin from the sun every day, even when it’s cloudy,” it advises on its website. Better to slather on sunblock, we’ve all been told, and compensate with vitamin D pills.
Yet vitamin D supplementation has failed spectacularly in clinical trials. Five years ago, researchers were already warning that it showed zero benefit, and the evidence has only grown stronger. In November, one of the largest and most rigorous trials of the vitamin ever conducted—in which 25,871 participants received high doses for five years—found no impact on cancer, heart disease, or stroke.
How did we get it so wrong? How could people with low vitamin D levels clearly suffer higher rates of so many diseases and yet not be helped by supplementation?
As it turns out, a rogue band of researchers has had an explanation all along. And if they’re right, it means that once again we have been epically misled.
These rebels argue that what made the people with high vitamin D levels so healthy was not the vitamin itself. That was just a marker. Their vitamin D levels were high because they were getting plenty of exposure to the thing that was really responsible for their good health—that big orange ball shining down from above.
Last spring, Marketplace host Charlsie Agro and her twin sister, Carly, bought home kits from AncestryDNA, MyHeritage, 23andMe, FamilyTreeDNA and Living DNA, and mailed samples of their DNA to each company for analysis.
Despite having virtually identical DNA, the twins did not receive matching results from any of the companies.
In most cases, the results from the same company traced each sister’s ancestry to the same parts of the world — albeit by varying percentages.
But the results from California-based 23andMe seemed to suggest each twin had unique twists in their ancestry composition.
According to 23andMe’s findings, Charlsie has nearly 10 per cent less “broadly European” ancestry than Carly. She also has French and German ancestry (2.6 per cent) that her sister doesn’t share.
The identical twins also apparently have different degrees of Eastern European heritage — 28 per cent for Charlsie compared to 24.7 per cent for Carly. And while Carly’s Eastern European ancestry was linked to Poland, the country was listed as “not detected” in Charlsie’s results.
“The fact that they present different results for you and your sister, I find very mystifying,” said Dr. Mark Gerstein, a computational biologist at Yale University.
[…]
AncestryDNA found the twins have predominantly Eastern European ancestry (38 per cent for Carly and 39 per cent for Charlsie).
But the results from MyHeritage trace the majority of their ancestry to the Balkans (60.6 per cent for Carly and 60.7 per cent for Charlsie).
One of the more surprising findings was in Living DNA’s results, which pointed to a small percentage of ancestry from England for Carly, but Scotland and Ireland for Charlsie.
Another twist came courtesy of FamilyTreeDNA, which assigned 13-14 per cent of the twins’ ancestry to the Middle East — significantly more than the other four companies, two of which found no trace at all.
Paul Maier, chief geneticist at FamilyTreeDNA, acknowledges that identifying genetic distinctions in people from different places is a challenge.
“Finding the boundaries is itself kind of a frontiering science, so I would say that makes it kind of a science and an art,” Maier said in a phone interview.
The current DNS is unnecessarily slow and suffers from an inability to deploy new features. To remedy these problems, vendors of DNS software and big public DNS providers are going to remove certain workarounds on February 1st, 2019.
This change affects only sites that operate software which does not follow published standards.
[…]
On or around Feb 1st, 2019, major open source resolver vendors will release updates that implement stricter EDNS handling. Specifically, the following versions introduce this change:
BIND 9.13.3 (development) and 9.14.0 (production)
Knot Resolver already implemented stricter EDNS handling in all current versions
The minimal working setup that will allow your domain to survive the 2019 DNS flag day is one in which none of the plain DNS and EDNS version 0 tests implemented in the ednscomp tool result in a timeout. Please note that this minimal setup is still not standards compliant and will cause other issues sooner or later. For this reason we strongly recommend that you aim for full EDNS compliance (all tests ok) rather than doing just the minimal cleanup, otherwise you will have to face new issues later on.
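As a rough illustration of what those tests probe (not a replacement for the ednscomp tool itself), one can send an authoritative server both a plain DNS query and an EDNS version 0 query and check that neither simply times out. A sketch using the dnspython library, with placeholder names:

import dns.exception
import dns.message
import dns.query

# Placeholders: substitute your zone and one of its authoritative servers.
ZONE = "example.com"
AUTH_SERVER = "192.0.2.53"

def answers(use_edns):
    """Return True if the server answers at all (only timeouts matter here)."""
    query = dns.message.make_query(ZONE, "SOA", use_edns=use_edns)
    try:
        dns.query.udp(query, AUTH_SERVER, timeout=3)
        return True
    except dns.exception.Timeout:
        return False

# After flag day, a timeout on either query is treated as a dead server,
# not as a cue to retry without EDNS.
print("plain DNS answered:", answers(use_edns=False))
print("EDNS0 answered:    ", answers(use_edns=True))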
[…]
Firewalls must not drop DNS packets with EDNS extensions, including unknown extensions. Modern DNS software may deploy new extensions (e.g. DNS cookies to protect from DoS attacks). Firewalls which drop DNS packets with such extensions are making the situation worse for everyone, including worsening DoS attacks and inducing higher latency for DNS traffic.
DNS software developers
The main change is that DNS software from the vendors named above will interpret timeouts as a sign of a network or server problem. Starting February 1st, 2019, there will be no attempt to disable EDNS in reaction to a DNS query timeout.
This effectively means that all DNS servers which do not respond at all to EDNS queries are going to be treated as dead.