I’m a crime-fighter, says FamilyTreeDNA boss after being caught giving folks’ DNA data to FBI

Some would argue he has broken every ethical and moral rule of his profession, but genealogist Bennett Greenspan prefers to see himself as a crime-fighter.

“I spent many, many nights and many, many weekends thinking of what privacy and confidentiality would mean to a genealogist such as me,” the founder and president of FamilyTreeDNA says in a video that appeared online yesterday.

He continues: “I would never do anything to betray the trust of my customers and at the same time I felt it important to enable my customers to crowd source the catching of criminals.”

The video and surrounding press release went out at 10.30pm on Thursday. Funnily enough, just a couple of hours earlier, BuzzFeed offered a very different take on Greenspan’s philanthropy. “One Of The Biggest At-Home DNA Testing Companies Is Working With The FBI,” reads the headline.

Here’s how FamilyTreeDNA works, if you don’t know: among other features, you submit a sample of your DNA to the biz, and it will tell you if you’re related to someone else who has also submitted their genetic blueprint. It’s supposed to find previously unknown relatives, check parentage, and so on.

And so, by crowd sourcing, what Greenspan means is that he has reached an agreement with the FBI to allow the agency to create new profiles on his system using DNA collected from, say, corpses, crime scenes, and suspects. These can then be compared with genetic profiles in the company’s database to locate and track down relatives of suspects and victims, if not the suspects and victims themselves.

[…]

Those profiles have been built by customers who have paid between $79 and $199 to have their genetic material analyzed, in large part to understand their personal history and sometimes to find connections to unknown family members. The service and others like it have become popular with adopted children who wish to locate birth parents but are prevented by law from being given that information.

However, there is a strong expectation that any company storing your most personal genetic information will apply strict confidentiality rules around it. You could argue that handing it over to the Feds doesn’t meet that standard. Greenspan would disagree.

“Greenspan created FamilyTreeDNA to help other family researchers solve problems and break down walls to connect the dots of their family trees,” reads a press release rushed out to head off, in vain, any terrible headlines.

“Without realizing it, he had inadvertently created a platform that, nearly two decades later, would help law enforcement agencies solve violent crimes faster than ever.”

Crime fighting, it seems, overrides all other ethical considerations.

Unfortunately for Greenspan, the rest of his industry doesn’t agree. The Future of Privacy Forum, an organization that maintains a list of consumer DNA testing companies that have signed up to its privacy guidelines, struck FamilyTreeDNA off its list today.

Its VP of policy, John Verdi, told Bloomberg that the deal between FamilyTreeDNA and the FBI was “deeply flawed.” He went on: “It’s out of line with industry best practices, it’s out of line with what leaders in the space do, and it’s out of line with consumer expectations.”

Source: I’m a crime-fighter, says FamilyTreeDNA boss after being caught giving folks’ DNA data to FBI • The Register

Officer jailed for using police database to access personal details of dozens of Tinder dates

A former long-serving police officer has been jailed for six months for illegally accessing the personal details of almost 100 women to determine if they were “suitable” dates.

Adrian Trevor Moore was a 28-year veteran of WA Police and was nominated as police officer of the year in 2011.

The former senior constable pleaded guilty to 180 charges of using a secure police database to access the information of 92 women he had met, or interacted with, on dating websites including Tinder and Plenty of Fish.

A third of the women were checked by Moore multiple times over several years.

Source: Officer jailed for using police database to access personal details of dozens of Tinder dates – ABC News (Australian Broadcasting Corporation)

Well, that’s what you get when you collect loads of personal data in a database.

Unsecured MongoDB databases expose Kremlin’s single username / password backdoor into Russian businesses

A Dutch security researcher has stumbled upon the Kremlin’s backdoor account that the government had been using to access the servers of local and foreign businesses operating in Russia.

The backdoor account was found inside thousands of MongoDB databases that had been left exposed online without a password.

Any hacker who noticed the account could have used it to gain access to sensitive information from thousands of companies operating in Russia.

“The first time I saw these credentials was in the user table of a Russian Lotto website,” Victor Gevers told ZDNet in an interview today. “I had to do some digging to understand that the Kremlin requires remote access to systems that handle financial transactions.”

The researcher says that after his initial finding, he later found the same “admin@kremlin.ru” account on over 2,000 other MongoDB databases that had been left exposed online, all belonging to local and foreign businesses operating in Russia.

Examples include databases belonging to local banks, financial institutions, big telcos, and even Disney Russia.

Kremlin credentials found in the internet-exposed database of a Russian lotto agency (Image: Victor Gevers)

Kremlin credentials found in the internet-exposed database of Disney Russia (Image: Victor Gevers)

Gevers even found this account inside a leaky MongoDB database belonging to Ukraine’s Ministry of Internal Affairs that was holding details about ERDR investigations carried out by the country’s General Prosecutor’s Office into corrupt politicians.

This latter case was very strange because, at the time, the Russian-Ukrainian conflict had already been raging for at least two years.

Kremlin credentials found in the internet-exposed database of a Ukrainian ministry (Image: Victor Gevers)

Gevers, who at the time was the Chairman of the GDI Foundation, is one of the world’s top white-hat hackers. His research didn’t include digging through companies’ logs to see what this account was used for, so it’s currently unknown whether the Russian government used this account only to retrieve financial-related information or whether it actively altered data.

“We have been searching for open MongoDB for years,” Gevers told ZDNet. “When we investigate a MongoDB instance, we try to respect privacy as much as possible by limiting the search for breadcrumbs such as the owner’s email addresses to a minimum.”

“All the systems this password was on were already fully accessible to anyone,” Gevers said. “The MongoDB databases were deployed with default settings. So anyone without authentication had CRUD [Create, Read, Update and Delete] access.”
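To illustrate just how exposed a default MongoDB deployment like this is, here is a minimal sketch of what anyone with the stock mongo shell could do against such an instance. The hostname and collection name below are placeholders for illustration, not systems from Gevers’ research.

# List every database on an internet-exposed MongoDB left with default
# settings, i.e. no authentication at all. The hostname is a placeholder.
mongo --host mongodb.example.ru --port 27017 --eval 'db.adminCommand({ listDatabases: 1 })'

# Look for Kremlin-style accounts in a hypothetical "users" collection.
mongo --host mongodb.example.ru --port 27017 somedb --eval 'db.users.find({ email: /kremlin\.ru$/ }).limit(5)'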

Source: Unsecured MongoDB databases expose Kremlin’s backdoor into Russian businesses | ZDNet

European Commission orders mass recall of creepy, leaky child-tracking Enox smartwatch

The latest weekly report includes German firm Enox’s Safe-KID-One watch, which is marketed to parents as a way of keeping tabs on their little ones – ostensibly to keep them safe – and comes with one-click buttons for speed-dialling family members.

However, the commission said the device does not comply with the Radio Equipment Directive and detailed “serious” risks associated with the device.

“The mobile application accompanying the watch has unencrypted communications with its backend server and the server enables unauthenticated access to data,” the alert said.

As a result, data on location history, phone numbers and device serial number can be found and changed.

“A malicious user can send commands to any watch making it call another number of his choosing, can communicate with the child wearing the device or locate the child through GPS,” the alert warned.

Source: European Commission orders mass recall of creepy, leaky child-tracking smartwatch • The Register

Doctors Zap the Brains of Awake Brain Surgery Patients to Make Them Laugh and Have Fun

A distinct pathway in the white matter part of the brain known as the cingulum bundle can be used to alleviate stress and anxiety during awake brain surgery, according to new research published today in The Journal of Clinical Investigation. When electrically stimulated, this pathway triggers instantaneous laughter in the patient. But unlike previous experiments, this laughter was also accompanied by positive, uplifting feelings. Preliminary research suggests this technique could be used to calm patients during awake brain surgery, with the authors of the new study, led by neuroscientist Kelly Bijanki from Emory University School of Medicine, saying the findings could also lead to innovative new treatments for depression, anxiety, and chronic pain.

Source: Doctors Zap the Brains of Awake Brain Surgery Patients to Make Them Laugh and Have Fun

Nest Secure has an unlisted disabled microphone (Edit: Google statement agrees!)

We received a statement from Google regarding the implication that the Nest Secure alarm system has had an unlisted microphone this whole time. It turns out that yes, the Nest Guard base system (the circular device with a keypad above) does have a built-in microphone that is not listed on the official spec sheet at Nest’s site. The microphone has been in an inactive state since the release of the Nest Secure, according to Google.

This unlisted mic is how the Nest Guard will be able to operate as a pseudo-Google Home with just a software update, as detailed below.

[…]

Once the Google Assistant is enabled, the mic is always on but only listening for the hotwords “Ok Google” or “Hey Google”. Google only stores voice-based queries after it recognizes those hotwords. Voice data and query contents are sent to Google servers for analysis and storage in My Activity.

[…]

Original Article, February 4, 2019 (02:20 PM ET): Owners of the Nest Secure alarm system have been able to use voice commands to control their home security through Google Assistant for a while now. However, to issue those commands, they needed a separate Google Assistant-powered device, like a smartphone or a Google Home smart speaker.

The reason for this limitation has always seemed straightforward: according to the official tech specs, there’s no onboard microphone in the Nest Secure system.

Source: Nest Secure has an unlisted disabled microphone (Edit: Google statement)

That’s pretty damn creepy

Hi, Jack’d: A little PSA for anyone using this dating-hook-up app… Anyone can slurp your private, public snaps • The Register

Dating-slash-hook-up app Jack’d is exposing to the public internet intimate snaps privately swapped between its users, allowing miscreants to download countless X-rated selfies without permission.

The phone application, installed more than 110,000 times on Android devices and also available for iOS, lets primarily gay and bi men chat each other up, exchange private and public pics, and arrange to meet.

Those photos, public and private, can, it appears, be accessed by anyone with a web browser who knows just where to look. As there is no authentication, no need to sign up to the app, and no limits in place, miscreants can therefore download the entire image database for further havoc and potential blackmail.

You may well want to delete your images until this issue is fixed.

We’re told the developers of the application were warned of the security vulnerability three months ago, and yet no fix has been made. We’ve repeatedly tried to contact the programmers to no avail. In the interests of alerting Jack’d users to the fact their highly NSFW pictures are facing the public internet, we’re publishing this story today, although we are withholding details of the flaw to discourage exploitation.

Source: Hi, Jack’d: A little PSA for anyone using this dating-hook-up app… Anyone can slurp your private, public snaps • The Register

Dirty dealing in the $175 billion Amazon Marketplace

Last August, Zac Plansky woke to find that the rifle scopes he was selling on Amazon had received 16 five-star reviews overnight. Usually, that would be a good thing, but the reviews were strange. The scope would normally get a single review a day, and many of these referred to a different scope, as if they’d been cut and pasted from elsewhere. “I didn’t know what was going on, whether it was a glitch or whether somebody was trying to mess with us,” Plansky says.

As a precaution, he reported the reviews to Amazon. Most of them vanished days later — problem solved — and Plansky reimmersed himself in the work of running a six-employee, multimillion-dollar weapons accessory business on Amazon. Then, two weeks later, the trap sprang. “You have manipulated product reviews on our site,” an email from Amazon read. “This is against our policies. As a result, you may no longer sell on Amazon.com, and your listings have been removed from our site.”

A rival had framed Plansky for buying five-star reviews, a high crime in the world of Amazon. The funds in his account were immediately frozen, and his listings were shut down. Getting his store back would take him on a surreal weeks-long journey through Amazon’s bureaucracy, one that began with the click of a button at the bottom of his suspension message that read “appeal decision.”

[…]

For sellers, Amazon is a quasi-state. They rely on its infrastructure — its warehouses, shipping network, financial systems, and portal to millions of customers — and pay taxes in the form of fees. They also live in terror of its rules, which often change and are harshly enforced. A cryptic email like the one Plansky received can send a seller’s business into bankruptcy, with few avenues for appeal.

Sellers are more worried about a case being opened on Amazon than in actual court, says Dave Bryant, an Amazon seller and blogger. Amazon’s judgment is swifter and less predictable, and now that the company controls nearly half of the online retail market in the US, its rulings can instantly determine the success or failure of your business, he says. “Amazon is the judge, the jury, and the executioner.”

Amazon is far from the only tech company that, having annexed a vast sphere of human activity, finds itself in the position of having to govern it. But Amazon is the only platform that has a $175 billion prize pool tempting people to game it, and the company must constantly implement new rules and penalties, which in turn, become tools for new abuses, which require yet more rules to police. The evolution of its moderation system has been hyper-charged. While Mark Zuckerberg mused recently that Facebook might need an analog to the Supreme Court to adjudicate disputes and hear appeals, Amazon already has something like a judicial system — one that is secretive, volatile, and often terrifying.

Amazon’s judgments are so severe that its own rules have become the ultimate weapon in the constant warfare of Marketplace. Sellers devise all manner of intricate schemes to frame their rivals, as Plansky experienced. They impersonate, copy, deceive, threaten, sabotage, and even bribe Amazon employees for information on their competitors.

[…]

Scammers have effectively weaponized Amazon’s anti-counterfeiting program. Attacks have become so widespread that they’ve even pulled in the US Patent and Trademark Office, which recently posted a warning that people were making unauthorized changes through its electronic filing system, likely “part of a scheme to register the marks of others on third-party ‘brand registries.’” Scammers had begun swapping out the email addresses on their rival’s trademark files, which can be done without a password, and using the new email to register their competitor’s brand with Amazon, gaining control of their listings. As Harris encountered, Amazon appears not to check whether a listing belongs to a brand already enrolled in brand registry. Stine has a client who had trademarked their party supply brand and registered it with Amazon, only to have a rival change their trademark file, register with Amazon, and hijack their listing for socks, which had things like “If you can read this, bring coffee” written on the soles.

[…]

There are more subtle methods of sabotage as well. Sellers will sometimes buy Google ads for their competitors for unrelated products — say, a dog food ad linking to a shampoo listing — so that Amazon’s algorithm sees the rate of clicks converting to sales drop and automatically demotes their product. They will go on the black market and purchase or rent seller accounts with special editing privileges and use them to change the color or description of their rival’s products so they get suspended for too many customers complaining about the item being “not as described.” They will exile their competitor’s listings to an unrelated category — say, move a product with a “Best Seller” badge in the office category to lawn care, taking the badge for themselves.

“They took a kids’ toy made for six to 12 year olds and they changed it to a sex toy,” one outraged seller told me. This is a common move, as Amazon hides products in that category unless the customer clicks a button saying they’re over 18. Another seller who had been battling counterfeiters of his childproof locks and outlet covers received a threat in Chinese saying that, while it is hard to build a listing like his, it would be easy to destroy. “Be cautious,” the message warned. Later, he too was banished to sex toys. “It’s suppressed from search results unless you literally search for a ‘sexual child proof door lock,’” he says. (He had no sales.)

Source: Dirty dealing in the $175 billion Amazon Marketplace

An incredible story, well worth reading in its entirety

UAE used cyber super-weapon to spy on iPhones of foes

The cyber tool allowed the small Gulf country to monitor hundreds of targets beginning in 2016, from the Emir of Qatar and a senior Turkish official to a Nobel Peace laureate human-rights activist in Yemen, according to five former operatives and program documents reviewed by Reuters. The sources interviewed by Reuters were not Emirati citizens.

Karma was used by an offensive cyber operations unit in Abu Dhabi comprised of Emirati security officials and former American intelligence operatives working as contractors for the UAE’s intelligence services. The existence of Karma and of the hacking unit, code named Project Raven, haven’t been previously reported. Raven’s activities are detailed in a separate story published by Reuters today.

The ex-Raven operatives described Karma as a tool that could remotely grant access to iPhones simply by uploading phone numbers or email accounts into an automated targeting system. The tool has limits — it doesn’t work on Android devices and doesn’t intercept phone calls. But it was unusually potent because, unlike many exploits, Karma did not require a target to click on a link sent to an iPhone, they said.

Source: Exclusive: UAE used cyber super-weapon to spy on iPhones of foes | Reuters

Furious Apple revokes Facebook’s enterprise app cert after Zuck’s crew abused it to slurp private data

Facebook has yet again vowed to “do better” after it was caught secretly bypassing Apple’s privacy rules to pay adults and teenagers to install a data-slurping iOS app on their phones.

The increasingly worthless promises of the social media giant have fallen on deaf ears however: on Wednesday, Apple revoked the company’s enterprise certificate for its internal non-public apps, and one lawmaker vowed to reintroduce legislation that would make it illegal for Facebook to carry out such “research” in future.

The enterprise cert allows Facebook to sign iOS applications so they can be installed for internal use only, without having to go through the official App Store. It’s useful for intranet applications and in-house software development work.

Facebook, though, used the certificate to sign a market research iPhone application that folks could install on their devices. The app was previously kicked out of the official App Store for breaking Apple’s rules on privacy: Facebook had to use the cert to skirt Cupertino’s ban.

[…]

With its certificate revoked, Facebook employees are reporting that their legitimate internal apps, also signed by the cert, have stopped working. The consumer iOS Facebook app is unaffected.

Trust us, we’re Facebook!

At the heart of the issue is an app for iPhones called “Facebook Research” that the company advertised through third parties. The app is downloaded outside of the normal Apple App Store, and gives Facebook extraordinary access to a user’s phone, allowing the company to see pretty much everything that person does on their device. For that trove of personal data, Facebook paid an unknown number of users aged between 13 and 35 up to $20 a month in e-gifts.

Source: Furious Apple revokes Facebook’s enty app cert after Zuck’s crew abused it to slurp private data • The Register

A person familiar with the situation tells The Verge that early versions of Facebook, Instagram, Messenger, and other pre-release “dogfood” (beta) apps have stopped working, as have other employee apps, like one for transportation. Facebook is treating this as a critical problem internally, we’re told, as the affected apps simply don’t launch on employees’ phones anymore.

https://www.theverge.com/2019/1/30/18203551/apple-facebook-blocked-internal-ios-apps

 

Defanged SystemD exploit code for security holes now out in the wild

In mid-January, Qualys, another security firm, released details about three flaws affecting systemd-journald, a systemd component that handles the collection and storage of log data. Patches for the vulnerabilities – CVE-2018-16864, CVE-2018-16865, and CVE-2018-16866 – have been issued by various Linux distributions.

Exploitation of these code flaws allows an attacker to alter system memory in order to commandeer systemd-journal, which permits privilege escalation to the root account of the system running the software. In other words, malware running on a system, or rogue logged-in users, can abuse these bugs to gain administrator-level access over the whole box, which is not great in uni labs and similar environments.

Nick Gregory, a research scientist at Capsule8, explains in a blog post this week that his firm developed proof-of-concept exploit code for testing and verification. As in testing whether or not computers are at risk, and verifying the patches work.

“There are some interesting aspects that were not covered by Qualys’ initial publication, such as how to communicate with the affected service to reach the vulnerable component, and how to control the computed hash value that is actually used to corrupt memory,” he said.

Manipulated

The exploit script, written in Python 3, targets the 20180808.0.0 release of the ubuntu/bionic64 Vagrant image, and assumes that address space layout randomization (ASLR) is disabled. Typically, ASLR is not switched off in production systems, making this largely an academic exercise.
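If you want a quick sanity check on your own boxes, the following minimal sketch confirms ASLR is on and shows which systemd version is running; it is not a substitute for your distro’s security advisories.

# 2 means full address space layout randomization, the default on most
# modern Linux distributions; the proof-of-concept assumes 0 (disabled).
cat /proc/sys/kernel/randomize_va_space

# Report the running systemd version; compare it against your distro's
# advisory for CVE-2018-16864/16865/16866. Fixes are usually backported,
# so the version number alone is only a rough indicator.
systemctl --version | head -n 1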

The script exploits CVE-2018-16865 via Linux’s alloca() function, which allocates the specified number of bytes of memory space in the stack frame of the caller; it can be used to manipulate the stack pointer.

Basically, by creating a massive number of log entries and appending them to the journal, the attacker can overwrite memory and take control of the vulnerable system.

Source: The D in SystemD stands for Danger, Will Robinson! Defanged exploit code for security holes now out in the wild • The Register

Hackers Are Passing Around a Megaleak of 2.2 Billion Records

Earlier this month, security researcher Troy Hunt identified the first tranche of that mega-dump, named Collection #1 by its anonymous creator, a set of cobbled-together breached databases Hunt said represented 773 million unique usernames and passwords. Now other researchers have obtained and analyzed an additional vast database called Collections #2–5, which amounts to 845 gigabytes of stolen data and 25 billion records in all. After accounting for duplicates, analysts at the Hasso Plattner Institute in Potsdam, Germany, found that the total haul represents close to three times the Collection #1 batch.

“This is the biggest collection of breaches we’ve ever seen,” says Chris Rouland, a cybersecurity researcher and founder of the IoT security firm Phosphorus.io, who pulled Collections #1–5 in recent days from torrented files. He says the collection has already circulated widely among the hacker underground: He could see that the tracker file he downloaded was being “seeded” by more than 130 people who possessed the data dump, and that it had already been downloaded more than 1,000 times. “It’s an unprecedented amount of information and credentials that will eventually get out into the public domain,” Rouland says.
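If you want to know whether one of your own passwords is circulating in dumps like these, Troy Hunt’s Pwned Passwords service offers a k-anonymity lookup in which only the first five characters of the password’s SHA-1 hash ever leave your machine. A minimal sketch follows; the example password is obviously a placeholder.

# Hash the password locally, send only the 5-character hash prefix, then
# search the returned candidate list for the remaining 35 characters.
hash=$(printf '%s' 'hunter2' | sha1sum | awk '{print toupper($1)}')
prefix=${hash:0:5}
suffix=${hash:5}
curl -s "https://api.pwnedpasswords.com/range/${prefix}" | grep -i "^${suffix}:" \
  || echo "not found in known breaches"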

Source: Hackers Are Passing Around a Megaleak of 2.2 Billion Records | WIRED

Criminals Are Tapping into the Phone Network Backbone using known insecure SS7 to Empty Bank Accounts

Sophisticated hackers have long exploited flaws in SS7, a protocol used by telecom companies to coordinate how they route texts and calls around the world. Those who exploit SS7 can potentially track phones on the other side of the planet, and intercept text messages and phone calls without hacking the phone itself.

This activity was typically only within reach of intelligence agencies or surveillance contractors, but now Motherboard has confirmed that this capability is much more widely available in the hands of financially-driven cybercriminal groups, who are using it to empty bank accounts. So-called SS7 attacks against banks are, although still relatively rare, much more prevalent than previously reported. Motherboard has identified a specific bank—the UK’s Metro Bank—that fell victim to such an attack.

The news highlights the gaping holes in the world’s telecommunications infrastructure that the telco industry has known about for years despite ongoing attacks from criminals. The National Cyber Security Centre (NCSC), the defensive arm of the UK’s signals intelligence agency GCHQ, confirmed that SS7 is being used to intercept codes used for banking.

“We are aware of a known telecommunications vulnerability being exploited to target bank accounts by intercepting SMS text messages used as 2-Factor Authentication (2FA),” the NCSC told Motherboard in a statement.

Source: Criminals Are Tapping into the Phone Network Backbone to Empty Bank Accounts – Motherboard

Personal data slurped in Airbus hack – but firm’s industrial smarts could be what crooks are after

Airbus has admitted that a “cyber incident” resulted in unidentified people getting their hands on “professional contact and IT identification details” of some Europe-based employees.

The company said in a brief statement published late last night that the breach is “being thoroughly investigated by Airbus’ experts”. The company has its own infosec business unit, Stormguard.

“Investigations are ongoing to understand if any specific data was targeted,” it continued, adding that it is in contact with the “relevant regulatory authorities”, which for Airbus is France’s CNIL data protection watchdog. We understand no customer data was accessed, while Airbus insists for the moment that there has been no impact on its commercial operations.

Airbus said the target was its Commercial Aircraft business unit, which employs around 10,000 people in the UK alone, split between two sites. The company said that only people in “Europe” were affected.

Source: Personal data slurped in Airbus hack – but firm’s industrial smarts could be what crooks are after • The Register

Facebook pays teens to install VPN that spies on them

Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms. Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Source: Facebook pays teens to install VPN that spies on them | TechCrunch

Final Fantasy VII background graphics upscaled 4x by AI

The Remako HD Graphics Mod is a mod that completely revamps the pre-rendered backgrounds of the classic JRPG Final Fantasy VII. All of the backgrounds now have 4 times the resolution of the original.

Using state of the art AI neural networks, this upscaling tries to emulate the detail the original renders would have had. This helps the new visuals to come as close to a higher resolution re-rendering of the original as possible with current technology.

What does it look like?

Below are two trailers. One is a comparison of the raw images, while the other shows off the mod in action.
If that’s still not enough, then please check out the screenshot gallery here.

Source: FF7 Remako HD Graphics Mod Beta Released

Custom firmware for lights allows you to control them with Home Assistant and other controllers

Sonoff B1, lights and shades

Six months ago I was reviewing the AiThinker AiLight, a great looking light bulb with an embedded ESP8266EX microcontroller, driven by a MY9291 LED driver. Just before summer IteadStudio released its Sonoff B1 [Itead.cc] light bulb, heavily inspired (probably same manufacturer) by the AiLight, at least in the design.

Now that IteadStudio has become popular within the home automation community you can also find the Sonoff B1 on global marketplaces like eBay or AliExpress for around 13€.

A closer look at the B1 uncovers some important differences. But before going deeper into the details let me first say that this post will probably read more like a review than what I usually write. And second, yes: ESPurna supports the Sonoff B1 🙂

An unboxing?

Not quite. I leave that to other people with better skills in the video editing world. Let me just tell you that the “box” is somewhat different from what I expected. You might recall the AiLight box: a simple beige drawer-like box with a “WiFi Light” text and a simple icon. No colors, pictures, specifications,… nothing.

Instead, the Sonoff B1 I received from IteadStudio comes in a colorful box, with the usual pictures and data you can find in retail products.

Inside the box the light bulb is comfortably housed in polyethylene foam, along with a quality control certification and a small “getting started” manual in English and Chinese.

A heat sink?

Don’t think so. The first thing I noticed when I opened the box was that the bulb is very similar to the AiLight; the second was the only visual difference: the metal frame certainly looks like a big heat sink. I almost fear touching it while it’s connected. But how much heat can you generate if the light is rated 6W? The bulb body houses a basic AC/DC power supply (90-250VAC to 12VDC) and is accessible by unscrewing the metal frame (the heat-sink part from the smooth part with the “sonoff” logo).

The AiLight is also 6W and you can safely touch it, even when it has been at full power for a long time. The Sonoff B1 shouldn’t be different. So I lean towards thinking it’s an aesthetic decision. Unless there are some beefy power LEDs inside.

Power LEDs?

Not all of them. Anyway, I think this is the aspect where the B1 most clearly differentiates itself from the AiLight. The latter has 8 cold white power LEDs, as well as 6 red, 4 green and 4 blue power LEDs. The Sonoff B1 also has 8 cold white ones. But then it features 8 warm white power LEDs and 3 5050 RGB LEDs!

I don’t have a luximeter but the difference between the two at full white is hard to spot. The warm white color, though, really makes the difference in favor of the Sonoff bulb. On the other hand, the 3 5050 SMD LEDs are clearly not enough. Even more so: since the RGB LEDs are closer to the center of the round PCB, just around the WiFi antenna, the shadow of the antenna is very noticeable if you are using a colored light.

Hard to tell which one is brighter to the naked eye…

The pic does not do the difference justice. The one on the right is the AiLight with the white power LEDs at full duty. The one on the left is the Sonoff B1 using the warm white power LEDs (you can see the yellowish color on the wall). The cold white LEDs are brighter but, depending on the room, the warm white LEDs could be more suitable.

Both bulbs again, now with the red channel at full duty. No need for words.

3 5050 RGB LEDs, 3 shadows of the antenna

A view without the cap, red LEDs are at 100% duty cycle, white LEDs are only at 10%…

I think the Sonoff B1 could be a better choice than the AiLight for illuminating your living room or bedroom with a warm white light. If you need colorful illumination, discotheque moods or a nice cold white for your kitchen, use the AiLight. Another possible (and interesting) use for the Sonoff B1 would be as a notification light using a traffic-light color code, for instance. Clearly visible but not disturbing colors.

The controller?

Not the same. It is actually an ESP8285. In practice, you can talk to it as if it were an ESP8266 with 1Mb of embedded flash using DOUT flash mode. So that’s my recommended configuration.

The ESP8285 and required components with the 5050 RGB LEDs

As you can see in the pictures, the PCB is actually two PCBs: one for the power LEDs and the other for the microcontroller and supporting components, with the 5050 LEDs on the front and a buck converter (12VDC to 3.3VDC for the ESP8285) and the LED driver on the back. The two PCBs are soldered together and glued to the support underneath.

In the AiLight the LED driver is a MY9291 [datasheet, PDF] by My-Semi. The Sonoff B1 uses another My-Semi driver, the MY9231 [datasheet, PDF]. The MY9291 is a 4-channel LED driver but the MY9231 has just 3 channels… so how is it possible to drive RGB plus white and warm white? Well, these ICs are daisy-chainable, so there are two MY9231 controllers in the Sonoff B1: the first one controlling the white power LEDs and the second the 5050 RGB LEDs.

I did not want to remove the glue under the PCB. But you can glimpse one My-Semi controller through the bottom hole.

ESPurna?

The ESPurna firmware is released as free open software and can be checked out at my Espurna repository on GitHub.

Sure! You can flash the Sonoff B1 following the same procedure as the AiLight. There are 6 pads on the PCB labelled 3V3, RX, TX, GND, GPIO0 and SDA. You will need to wire the first 5 (tin your cable, apply a small drop of solder on the pad and then heat them together). Connect RX to TX, TX to RX, GND to GND, GPIO0 to GND and finally 3V3 to the 3V3 power source of your programmer. It will then enter flash mode (GPIO0 is grounded). You can either flash the bin file from the ESPurna downloads section or build your own image (check the ESPurna wiki for docs).
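For illustration, a typical esptool.py invocation might look like the following; the serial port and image file name are placeholders, and the flash settings match the 1Mb / DOUT configuration recommended above.

# Flash an ESPurna image onto the Sonoff B1 (ESP8285: 1MB flash, DOUT mode).
# /dev/ttyUSB0 and the .bin name are placeholders; adjust them for your setup.
esptool.py --port /dev/ttyUSB0 --baud 115200 write_flash \
    --flash_mode dout --flash_size 1MB 0x00000 espurna-sonoff-b1.bin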

Wired flashing of the Sonoff B1

Since ESPurna version 1.9.0 you can define and control any number of dimming channels, and you can also define the first three to be RGB channels. If you do, the web UI will show you a color picker to select the color.

You can also control it via MQTT. It supports CSS hex notation, comma-separated channel values or color temperature, as well as brightness and status, of course.

# 100% red
mosquitto_pub -t /home/study/light/color/set -m "#FF0000"
# 100% warm white
mosquitto_pub -t /home/study/light/color/set -m "0,0,0,0,255"
# 300 mired color temperature
mosquitto_pub -t /home/study/light/color/set -m "M300"
# 4000 kelvin color temperature
mosquitto_pub -t /home/study/light/color/set -m "K4000"

Of course you can also use the Home Assistant MQTT Light component. The configuration would look like this:

light:
  - platform: mqtt
    name: 'AI Light TEST'
    state_topic: '/home/study/light/relay/0'
    command_topic: '/home/study/light/relay/0/set'
    payload_on: 1
    payload_off: 0
    rgb_state_topic: '/home/study/light/color'
    rgb_command_topic: '/home/study/light/color/set'
    rgb: true
    optimistic: false
    color_temp: true
    color_temp_command_topic: '/home/study/light/mired/set'
    brightness: true
    brightness_command_topic: '/home/study/light/brightness/set'
    brightness_state_topic: '/home/study/light/brightness'
    white_value: true
    white_value_command_topic: '/home/study/light/channel/3/set'
    white_value_state_topic: '/home/study/light/channel/3'

Either way, flashing custom firmware like ESPurna on a 13€ Sonoff B1 [Ebay] device allows you, first, to fully control your device (no connections outside your home network if you don’t want them) and, second, to make it interoperate with other services like Home Assistant, Domoticz, Node-RED or any other MQTT or REST capable service.

After all, I’m talking about Technological Sovereignty.

Source: Sonoff B1, lights and shades – Tinkerman

Don’t Toss That Bulb, It Knows Your Password

As it turns out, giving every gadget you own access to your personal information and Internet connection can lead to unintended consequences. Who knew, right? But if you need yet another example of why trusting your home appliances with your secrets is potentially a bad idea, [Limited Results] is here to make sure you spend the next few hours doubting your recent tech purchases.

In a series of posts on the [Limited Results] blog, low-cost “smart” bulbs are cracked open and investigated to see what kind of knowledge they’ve managed to collect about their owners. Not only was it discovered that bulbs manufactured by Xiaomi, LIFX, and Tuya stored the WiFi SSID and encryption key in plain text, but that recovering said information from the bulbs was actually quite simple. So next time one of those cheapo smart bulbs starts flickering, you might want to take a hammer to it before tossing it in the trash can; you never know where it, and the knowledge it has of your network, might end up.

Regardless of the manufacturer of the bulb, the process to get one of these devices on your network is more or less the same. An application on your smartphone connects to the bulb and provides it with the network SSID and encryption key. The bulb then disconnects from the phone and reconnects to your home network with the new information. It’s a process that at this point we’re all probably familiar with, and there’s nothing inherently wrong with it.

The trouble comes when the bulb needs to store the connection information it was provided. Rather than obfuscating it in some way, the SSID and encryption key are simply stored in plain-text on the bulb’s WiFi module. Recovering that information is just a process of finding the correct traces on the bulb’s PCB (often there are test points which make this very easy), and dumping the chip’s contents to the computer for analysis.

It’s not uncommon for smart bulbs like these to use the ESP8266 or ESP32, and [Limited Results] found that to be the case here. With the wealth of information and software available for these very popular WiFi modules, dumping the firmware binary was no problem. Once the binary was in hand, a little snooping around with a hex editor was all it took to identify the network login information. The firmware dumps also contained information such as the unique hardware IDs used by the “cloud” platforms the bulbs connect to, and in at least one case, the root certificate and RSA private key were found.
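As a rough illustration of the approach (not [Limited Results]’ exact procedure; the serial port, dump size and SSID string below are placeholders), dumping and grepping an ESP8266-based bulb’s flash over its serial pads might look like this:

# Read the bulb's 1MB flash out over serial, then search the raw image for
# the stored Wi-Fi credentials and nearby strings.
esptool.py --port /dev/ttyUSB0 read_flash 0x00000 0x100000 bulb_dump.bin
strings bulb_dump.bin | grep -i -C2 'MyHomeSSID'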

On the plus side, being able to buy cheap smart devices that are running easily hackable modules like the ESP makes it easier for us to create custom firmware for them. Hopefully the community can come up with slightly less suspect software, but really just keeping the things from connecting to anything outside the local network would be a step in the right direction.

Source: Don’t Toss That Bulb, It Knows Your Password | Hackaday

Towards reconstructing intelligible speech from the human auditory cortex

To advance the state-of-the-art in speech neuroprosthesis, we combined the recent advances in deep learning with the latest innovations in speech synthesis technologies to reconstruct closed-set intelligible speech from the human auditory cortex. We investigated the dependence of reconstruction accuracy on linear and nonlinear (deep neural network) regression methods and the acoustic representation that is used as the target of reconstruction, including auditory spectrogram and speech synthesis parameters. In addition, we compared the reconstruction accuracy from low and high neural frequency ranges. Our results show that a deep neural network model that directly estimates the parameters of a speech synthesizer from all neural frequencies achieves the highest subjective and objective scores on a digit recognition task, improving the intelligibility by 65% over the baseline method, which used linear regression to reconstruct the auditory spectrogram.

Source: Towards reconstructing intelligible speech from the human auditory cortex | Scientific Reports

Data Leak in Singapore Exposes HIV Status of 14,000 Locals and Foreign Visitors

Medical records and contact information belonging to thousands of HIV-positive Singaporeans and foreign visitors to the southeast Asian city state have been leaked online, according to an alert issued by the country’s Ministry of Health (MOH).

In a statement on its website, the ministry said the confidential health information of some 14,200 individuals diagnosed with HIV had been exposed.

“The information has been illegally disclosed online,” it said. “We have worked with the relevant parties to disable access to the information.”

Source: Data Leak in Singapore Exposes HIV Status of 14,000 Locals and Foreign Visitors

This is why we don’t like centralised medical databases

Apple: You can’t sue us for slowing down your iPhones because we’re like a contractor in your house

Apple is like a building contractor you hire to redo your kitchen, the tech giant has argued in an attempt to explain why it shouldn’t have to pay customers for slowing down their iPhones.

Addressing a bunch of people trying to sue it for damages, the iGiant’s lawyers told [PDF] a California court this month: “Plaintiffs are like homeowners who have let a building contractor into their homes to upgrade their kitchens, thus giving permission for the contractor to demolish and change parts of the houses.”

They went on: “Any claim that the contractor caused excessive damage in the process sounds in contract, not trespass.”

[…]

In this particular case in the US, the plaintiffs argue that Apple damaged their phones by effectively forcing them to install software updates that were intended to fix the battery issues. They may have “chosen” to install the updates by tapping on the relevant buttons, but they did so after reading misleading statements about what the updates were and what they would do, the lawsuit claims.

Nonsense! says Apple. You invited us into your house. We did some work. Sorry you don’t like the fact that we knocked down the wall to the lounge and installed a new air vent through the ceiling, but that’s just how it is.

[…]

But that’s not the only disturbing image to emerge from this lawsuit. When it was accused of damaging people’s property by ruining their batteries, Apple argued – successfully – in court that consumers can’t reasonably expect their iPhone batteries to last longer than a year, given that its battery warranty runs out after 12 months. That would likely come as news to iPhone owners who don’t typically expect to spend $1,000 on a phone and have it die on them a year later.

Call of Duty

Apple has also argued that it’s not under any obligation to tell people buying its products about how well its batteries and software function. An entire section of the company’s motion to dismiss this latest lawsuit is titled: “Apple had no duty to disclose the facts regarding software capability and battery capacity.”

Of course, the truth is that Apple knows that it screwed up – and screwed up badly. Which is why last year it offered replacement batteries for just $29 rather than the usual $79. Uptake of the “program” was so high that analysts say it has accounted for a significant drop-off in new iPhone purchases.

[…]

Ultimately of course, Apple remains convinced that it’s not really your phone at all: Cupertino has been good enough to allow you to use its amazing technology, and all you had to do was pay it a relatively small amount of money.

We should all be grateful that Apple lets us use our iPhones at all. And if it wants to slow them down, it can damn well slow them down without having to tell you because you wouldn’t understand the reasons why even if it bothered to explain them to you.

Source: Apple: You can’t sue us for slowing down your iPhones because you, er, invited us into, uh, your home… we can explain • The Register

This kind of reasoning beggars belief

Apple temporarily disables group FaceTime to fix a bug that lets you eavesdrop on your contacts

There was chaos on the internet late last night after 9to5Mac discovered a bug in Apple’s FaceTime video calling app that let you hear the other person’s voice even before they answered your call. According to the report, a user running iOS 12.1 could potentially exploit the vulnerability to eavesdrop on others through a group FaceTime call.

What’s more, The Verge noted if the recipient ignored or dismissed the call using the power button, their video feed was streamed to the caller.

Source: Apple temporarily disables group FaceTime to fix a bug that lets you eavesdrop on your contacts

Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones

Most of the data collected by urban planners is messy, complex, and difficult to represent. It looks nothing like the smooth graphs and clean charts of city life in urban simulator games like “SimCity.” A new initiative from Sidewalk Labs, the city-building subsidiary of Google’s parent company Alphabet, has set out to change that.

The program, known as Replica, offers planning agencies the ability to model an entire city’s patterns of movement. Like “SimCity,” Replica’s “user-friendly” tool deploys statistical simulations to give a comprehensive view of how, when, and where people travel in urban areas. It’s an appealing prospect for planners making critical decisions about transportation and land use. In recent months, transportation authorities in Kansas City, Portland, and the Chicago area have signed up to glean its insights. The only catch: They’re not completely sure where the data is coming from.

Typical urban planners rely on processes like surveys and trip counters that are often time-consuming, labor-intensive, and outdated. Replica, instead, uses real-time mobile location data. As Nick Bowden of Sidewalk Labs has explained, “Replica provides a full set of baseline travel measures that are very difficult to gather and maintain today, including the total number of people on a highway or local street network, what mode they’re using (car, transit, bike, or foot), and their trip purpose (commuting to work, going shopping, heading to school).”

To make these measurements, the program gathers and de-identifies the location of cellphone users, which it obtains from unspecified third-party vendors. It then models this anonymized data in simulations — creating a synthetic population that faithfully replicates a city’s real-world patterns but that “obscures the real-world travel habits of individual people,” as Bowden told The Intercept.

The program comes at a time of growing unease with how tech companies use and share our personal data — and raises new questions about Google’s encroachment on the physical world.


Last month, the New York Times revealed how sensitive location data is harvested by third parties from our smartphones — often with weak or nonexistent consent provisions. A Motherboard investigation in early January further demonstrated how cell companies sell our locations to stalkers and bounty hunters willing to pay the price.

For some, the Google sibling’s plans to gather and commodify real-time location data from millions of cellphones adds to these concerns. “The privacy concerns are pretty extreme,” Ben Green, an urban technology expert and author of “The Smart Enough City,” wrote in an email to The Intercept. “Mobile phone location data is extremely sensitive.” These privacy concerns have been far from theoretical. An Associated Press investigation showed that Google’s apps and website track people even after they have disabled the location history on their phones. Quartz found that Google was tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were turned off. The company has also been caught using its Street View vehicles to collect the Wi-Fi location data from phones and computers.

This is why Sidewalk Labs has instituted significant protections to safeguard privacy, before it even begins creating a synthetic population. Any location data that Sidewalk Labs receives is already de-identified (using methods such as aggregation, differential privacy techniques, or outright removal of unique behaviors). Bowden explained that the data obtained by Replica does not include a device’s unique identifiers, which can be used to uncover someone’s unique identity.

However, some urban planners and technologists, while emphasizing the elegance and novelty of the program’s concept, remain skeptical about these privacy protections, asking how Sidewalk Labs defines personally identifiable information. Tamir Israel, a staff lawyer at the Canadian Internet Policy & Public Interest Clinic, warns that re-identification is a rapidly moving target. If Sidewalk Labs has access to people’s unique paths of movement prior to making its synthetic models, wouldn’t it be possible to figure out who they are, based on where they go to sleep or work? “We see a lot of companies erring on the side of collecting it and doing coarse de-identifications, even though, more than any other type of data, location data has been shown to be highly re-identifiable,” he added. “It’s obvious what home people leave and return to every night and what office they stop at every day from 9 to 5 p.m.” A landmark study uncovered the extent to which people could be re-identified from seemingly-anonymous data using just four time-stamped data points of where they’ve previously been.

Source: Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones

Firefox cracks down on creepy web trackers, holds supercookies over fire whilst Chrome kills ad blockers

The Mozilla Foundation has announced its intent to reduce the ability of websites and other online services to track users of its Firefox browser around the internet.

At this stage, Moz’s actions are baby steps. In support of its decision in late 2018 to reduce the amount of tracking it permits, the organisation has now published a tracking policy to tell people what it will block.

Moz said the focus of the policy is to bring the curtain down on tracking techniques that “cannot be meaningfully understood or controlled by users”.

Notoriously intrusive tracking techniques allow users to be followed and profiled around the web. Facebook planting trackers wherever a site has a “Like” button is a good example. A user without a Facebook account can still be tracked as a unique individual as they visit different news sites.

Mozilla’s policy said these “stateful identifiers are often used by third parties to associate browsing across multiple websites with the same user and to build profiles of those users, in violation of the user’s expectation”. So, out they go.
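Firefox users who don’t want to wait for the policy to roll out can already flip the existing tracking-protection preferences themselves. A minimal sketch follows; the profile directory is a placeholder, and these user prefs are a stop-gap rather than the new built-in behaviour Mozilla describes.

# Append two tracking-protection prefs to a Firefox profile's user.js:
# enable Tracking Protection and block cookies from known trackers (value 4).
# The profile directory name below is a placeholder; use your own.
PROFILE_DIR=~/.mozilla/firefox/xxxxxxxx.default
cat >> "$PROFILE_DIR/user.js" <<'EOF'
user_pref("privacy.trackingprotection.enabled", true);
user_pref("network.cookie.cookieBehavior", 4);
EOF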

Source: Mozilla security policy cracks down on creepy web trackers, holds supercookies over fire • The Register

I’m pretty sure which browser you should be using