The Linkielist

Linking ideas with the world

Chelsea Manning jailed for refusing to testify on Wikileaks

Former Army intelligence analyst Chelsea Manning, who served years in prison for leaking one of the largest troves of classified documents in U.S. history, has been sent to jail for refusing to testify before a grand jury investigating Wikileaks.

U.S. District Judge Claude Hilton ordered Manning to jail for contempt of court Friday after a brief hearing in which Manning confirmed she has no intention of testifying. She told the judge she “will accept whatever you bring upon me.”

Manning has said she objects to the secrecy of the grand jury process, and that she already revealed everything she knows at her court-martial.

The judge said she will remain jailed until she testifies or until the grand jury concludes its work.

[…]

Manning anticipated being jailed. In a statement before Friday’s hearing, she said she invoked her First, Fourth and Sixth amendment protections when she appeared before the grand jury in Alexandria on Wednesday. She said she already answered every substantive question during her 2013 court-martial, and is prepared to face the consequences of refusing to answer again.

“In solidarity with many activists facing the odds, I will stand by my principles. I will exhaust every legal remedy available,” she said.

Manning served seven years of a 35-year military sentence for leaking a trove of military and diplomatic documents to the anti-secrecy website before then-President Barack Obama commuted her sentence.

Source: Chelsea Manning jailed for refusing to testify on Wikileaks

Researchers are training image-generating AI with fewer labels by letting the model infer the labels

Generative AI models have a propensity for learning complex data distributions, which is why they’re great at producing human-like speech and convincing images of burgers and faces. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply.

The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper published on the preprint server arXiv.org (“High-Fidelity Image Generation With Fewer Labels”), they describe a “semantic extractor” that can pull out features from training data, along with methods of inferring labels for an entire training set from a small subset of labeled images. These self- and semi-supervised techniques together, they say, can outperform state-of-the-art methods on popular benchmarks like ImageNet.

“In a nutshell, instead of providing hand-annotated ground truth labels for real images to the discriminator, we … provide inferred ones,” the paper’s authors explained.

In one of several unsupervised methods the researchers posit, they first use the aforementioned extractor to compute a feature representation — a learned encoding that captures the salient properties of each raw training image — for a target training dataset. They then perform cluster analysis — i.e., grouping the representations in such a way that those in the same group share more in common than those in other groups. And lastly, they train a GAN — a two-part neural network consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples — using the cluster assignments as inferred labels.
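As a rough illustration of that label-inference step, here is a minimal Python sketch; the random vectors stand in for the extractor's output, and the cluster count is an arbitrary placeholder rather than a setting from the paper:

```python
# Minimal sketch: infer labels by clustering self-supervised features.
# The random "features" stand in for the output of a trained semantic
# extractor; 50 clusters is an arbitrary placeholder.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 128)).astype(np.float32)

# Group the representations; each cluster ID becomes an inferred label.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
inferred_labels = kmeans.fit_predict(features)

# A conditional GAN's discriminator would now receive (image, inferred_label)
# pairs in place of hand-annotated ground truth.
print(np.bincount(inferred_labels)[:10])  # rough view of cluster sizes
```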

Source: Researchers are training image-generating AI with fewer labels | VentureBeat

Google launches TensorFlow Lite 1.0 for mobile and embedded devices

Google today introduced TensorFlow Lite 1.0, its framework for developers deploying AI models on mobile and IoT devices. Improvements include selective registration and quantization during and after training for faster, smaller models. Quantization has led to 4 times compression of some models.

“We are going to fully support it. We’re not going to break things and make sure we guarantee its compatibility. I think a lot of people who deploy this on phones want those guarantees,” TensorFlow engineering director Rajat Monga told VentureBeat in a phone interview.

With Lite, developers train AI models in TensorFlow, then convert them to create Lite models that run on mobile devices. Lite was first introduced at the I/O developer conference in May 2017 and entered developer preview later that year.
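As a sketch of that train-then-convert workflow, here is a minimal example using the tf.lite converter API; the converter interface has shifted across TensorFlow releases, and the toy model and file name below are placeholders:

```python
# Minimal sketch: convert a trained Keras model to a TensorFlow Lite model
# with post-training quantization. The toy model and path are placeholders.
import tensorflow as tf

# Stand-in for a model trained elsewhere.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization produces the kind of size reduction
# (up to 4x on some models) described above.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```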

The TensorFlow Lite team at Google also shared its roadmap for the future today, designed to shrink and speed up AI models for edge deployment, including model acceleration (especially for Android developers using neural nets), a Keras-based connection pruning kit, and additional quantization enhancements.

Other changes on the way:

  • Support for control flow, which is essential to the operation of models like recurrent neural networks
  • CPU performance optimization with Lite models, potentially involving partnerships with other companies
  • Expand coverage of GPU delegate operations and finalize the API to make it generally available

A TensorFlow 2.0 model converter for making Lite models will be made available, so developers can better understand how things went wrong in the conversion process and how to fix them.

TensorFlow Lite is deployed on more than two billion devices today, TensorFlow Lite engineer Raziel Alvarez said onstage at the TensorFlow Dev Summit being held at Google offices in Sunnyvale, California.

TensorFlow Lite increasingly makes TensorFlow Mobile obsolete, except for users who want to use it for training; a solution for them is in the works, Alvarez said.

Source: Google launches TensorFlow Lite 1.0 for mobile and embedded devices | VentureBeat

Leaked Documents Show the U.S. Government Tracking Journalists and Immigration Advocates Through a Secret Database, having them detained at borders

One photojournalist said she was pulled into secondary inspections three times and asked questions about who she saw and photographed in Tijuana shelters. Another photojournalist said she spent 13 hours detained by Mexican authorities when she tried to cross the border into Mexico City. Eventually, she was denied entry into Mexico and sent back to the U.S.

These American photojournalists and attorneys said they suspected the U.S. government was monitoring them closely but until now, they couldn’t prove it.

Now, documents leaked to NBC 7 Investigates show their fears weren’t baseless. In fact, their own government had listed their names in a secret database of targets, where agents collected information on them. Some had alerts placed on their passports, keeping at least three photojournalists and an attorney from entering Mexico to work.

The documents were provided to NBC 7 by a Homeland Security source on the condition of anonymity, given the sensitive nature of what they were divulging.

The source said the documents or screenshots show a SharePoint application that was used by agents from Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), the U.S. Border Patrol, Homeland Security Investigations and some agents from the San Diego sector of the Federal Bureau of Investigation (FBI).

The intelligence gathering efforts were done under the umbrella of “Operation Secure Line,” the operation designated to monitor the migrant caravan, according to the source.

The documents list people who officials think should be targeted for screening at the border.

The individuals listed include ten journalists, seven of whom are U.S. citizens, a U.S. attorney, and 47 people from the U.S. and other countries, labeled as organizers, instigators, or with their roles marked “unknown.” The target list includes advocates from organizations like Border Angels and Pueblo Sin Fronteras.

To view the documents, click here or the link below.

PHOTOS: Leaked Documents Show Government Tracking Journalists, Immigration Advocates

NBC 7 Investigates is blurring the names and photos of individuals who haven’t given us permission to publish their information.

[…]

In addition to flagging the individuals for secondary screenings, the Homeland Security source told NBC 7 that the agents also created dossiers on each person listed.

“We are a criminal investigation agency, we’re not an intelligence agency,” the Homeland Security source told NBC 7 Investigates. “We can’t create dossiers on people and they’re creating dossiers. This is an abuse of the Border Search Authority.”

One dossier, shared with NBC 7, was on Nicole Ramos, the Refugee Director and attorney for Al Otro Lado, a law center for migrants and refugees in Tijuana, Mexico. The dossier included personal details on Ramos, including specific details about the car she drives, her mother’s name, and her work and travel history.

After being shown the documents, Ramos said Al Otro Lado is seeking more information on why she and other attorneys at the law center have been targeted by border officials.

“The document appears to prove what we have assumed for some time, which is that we are on a law enforcement list designed to retaliate against human rights defenders who work with asylum seekers and who are critical of CBP practices that violate the rights of asylum seekers,” Ramos told NBC 7 by email.

In addition to the dossier on Ramos, a list of other dossier files created was shared with NBC 7. Two of the dossier files were labeled with the names of journalists but no further details were available. Those journalists were also listed as targets for secondary screenings.

Customs and Border Protection has the authority to pull anyone into secondary screenings, but the documents show the agency is increasingly targeting journalists, attorneys, and immigration advocates. Former counterterrorism officials say the agency should not be targeting individuals based on their profession.

Source: Leaked Documents Show the U.S. Government Tracking Journalists and Immigration Advocates Through a Secret Database – NBC 7 San Diego

When 2FA means sweet FA privacy: Facebook admits it slurps mobe numbers for more than just profile security

This time, the Silicon Valley giant has been caught red-handed using people’s cellphone numbers, provided exclusively for two-factor authentication, for targeted advertising and search – after it previously insinuated it wouldn’t do that.

Folks handing over their mobile numbers to protect their accounts from takeovers and hijackings thought the contact detail would be used for just that: security. Instead, Facebook is using the numbers to link netizens to other people, and target them with online ads.

For example, if someone you know – let’s call her Sarah – has given her number to Facebook for two-factor authentication purposes, and you allow the Facebook app to access your smartphone’s contacts book, and it sees Sarah’s number in there, it will offer to connect you two up, even though Sarah thought her number was being used for security only, and not for search. This is not a particularly healthy scenario, for instance, if you and Sarah are no longer, or never were, friends in real life, and yet Facebook wants to wire you up anyway.

Following online outcry over the weekend, a Facebook spokesperson told us today: “We appreciate the feedback we’ve received about these settings, and will take it into account.”

Source: When 2FA means sweet FA privacy: Facebook admits it slurps mobe numbers for more than just profile security • The Register

Anyone surprised much?

Welding glass to metal breakthrough could transform manufacturing

Scientists from Heriot-Watt University have welded glass and metal together using an ultrafast laser system, in a breakthrough for the manufacturing industry.

Various optical materials such as quartz, borosilicate glass and even sapphire were all successfully welded to metals like aluminium and titanium using the Heriot-Watt laser system, which provides very short, picosecond pulses of infrared light in tracks along the materials to fuse them together.

The new process could transform the manufacturing industry and have direct applications in the aerospace, defence, optical technology and even healthcare fields.

Professor Duncan Hand, director of the five-university EPSRC Centre for Innovative Manufacturing in Laser-based Production Processes based at Heriot-Watt, said: “Traditionally it has been very difficult to weld together dissimilar materials like glass and metal due to their different thermal properties—the high temperatures and highly different thermal expansions involved cause the glass to shatter.

“Being able to weld glass and metals together will be a huge step forward in manufacturing and design flexibility.

“At the moment, equipment and products that involve glass and metal are often held together by adhesives, which are messy to apply and parts can gradually creep, or move. Outgassing is also an issue—organic chemicals from the adhesive can be gradually released and can lead to reduced product lifetime.

“The process relies on the incredibly short pulses from the laser. These pulses last only a few picoseconds—a picosecond to a second is like a second compared to 30,000 years.

“The parts to be welded are placed in close contact, and the laser is focused through the optical material to provide a very small and highly intense spot at the interface between the two materials—we achieved megawatt peak power over an area just a few microns across.

“This creates a microplasma, like a tiny ball of lightning, inside the material, surrounded by a highly-confined melt region.

“We tested the welds at -50C to 90C and the welds remained intact, so we know they are robust enough to cope with extreme conditions.”

Read more at: https://phys.org/news/2019-03-welding-breakthrough.html#jCp

Source: Welding breakthrough could transform manufacturing

SPOILER alert, literally: Intel CPUs afflicted with simple data-spewing spec-exec vulnerability

Further demonstrating the computational risks of looking into the future, boffins have found another way to abuse speculative execution in Intel CPUs to steal secrets and other data from running applications.

This security shortcoming can be potentially exploited by malicious JavaScript within a web browser tab, or malware running on a system, or rogue logged-in users, to extract passwords, keys, and other data from memory. An attacker therefore requires some kind of foothold in your machine in order to pull this off. The vulnerability, it appears, cannot be easily fixed or mitigated without significant redesign work at the silicon level.

Speculative execution, the practice of allowing processors to perform future work that may or may not be needed while they await the completion of other computations, is what enabled the Spectre vulnerabilities revealed early last year.

In a research paper distributed this month through pre-print service ArXiv, “SPOILER: Speculative Load Hazards Boost Rowhammer and Cache Attacks,” computer scientists at Worcester Polytechnic Institute in the US, and the University of Lübeck in Germany, describe a new way to abuse the performance boost.

The researchers – Saad Islam, Ahmad Moghimi, Ida Bruhns, Moritz Krebbel, Berk Gulmezoglu, Thomas Eisenbarth and Berk Sunar – have found that “a weakness in the address speculation of Intel’s proprietary implementation of the memory subsystem” reveals memory layout data, making other attacks like Rowhammer much easier to carry out.

The researchers also examined Arm and AMD processor cores, but found they did not exhibit similar behavior.

“We have discovered a novel microarchitectural leakage which reveals critical information about physical page mappings to user space processes,” the researchers explain.

“The leakage can be exploited by a limited set of instructions, which is visible in all Intel generations starting from the 1st generation of Intel Core processors, independent of the OS and also works from within virtual machines and sandboxed environments.”

Source: SPOILER alert, literally: Intel CPUs afflicted with simple data-spewing spec-exec vulnerability • The Register

Apple’s Shazam for iOS Sheds 3rd Party SDKs. Keeps pumping your data through on Android.

Shazam, the song identification app Apple bought for $400M, recently released an update to its iOS app that got rid of all 3rd party SDKs the app was using except for one.

The SDKs that were removed include ad networks, analytics trackers, and even open-source utilities. Why, you ask? Because all of those SDKs leak usage data to 3rd parties one way or another, something Apple really really dislikes.

Here are all the SDKs that were uninstalled in the latest update:

AdMob
Bolts
DoubleClick
FB Ads
FB Analytics
FB Login
InMobi
IAS
Moat
MoPub

Right now, the app only has one 3rd party SDK installed, and that’s HockeyApp, Microsoft’s version of TestFlight. It’s unclear why it’s still there, but we don’t expect it to stick around for too long.

Looking across Apple’s entire app portfolio it’s very uncommon to see 3rd party SDKs at all. Exceptions exist. One such example is Apple’s Support app which has the Adobe Analytics SDK installed.

Things Are Different on Android

Since Shazam is also available for Android, we expected to see the same behavior: a mass uninstall of 3rd party SDKs. At first glance that seems to be the case, but not exactly.

Here are all the SDKs that were uninstalled in the last update:

AdColony
AdMob
Amazon Ads
FB Ads
FB Analytics
Gimbal
Google IMA
MoPub

Here are all the SDKs that are still installed in Shazam for Android:

Bolts
FB Analytics
Butter Knife
Crashlytics
Fabric
Firebase
Google Maps
OKHttp
Otto

On Android, Apple seems to be ok with leaking usage data to both Facebook through the Facebook Login SDK and Google through Fabric and Google Maps, indicating Apple hasn’t built out its internal set of tools for Android.

It’s also worth noting that HockeyApp was removed from Shazam for Android more than a year ago.

Want to see which SDKs apps have installed? Check out Explorer, the most comprehensive SDK Intelligence platform for iOS and Android apps.

Source: Shazam for iOS Sheds 3rd Party SDKs | App store Insights from Appfigures

Facebook receives personal health data from apps, even if you don’t have a FB account

Facebook receives highly personal information from apps that track your health and help you find a new home, testing by The Wall Street Journal found. Facebook can receive this data from certain apps even if the user does not have a Facebook account, according to the Journal.

Facebook has already been in hot water concerning issues of consent and user data.

Most recently, a TechCrunch report revealed in January that Facebook paid users as young as teenagers to install an app that would allow the company to collect all phone and web activity. Following the report, Apple revoked some developer privileges from Facebook, saying Facebook violated its terms by distributing the app through a program meant only for employees to test apps prior to release.

The new report said Facebook is able to receive data from a variety of apps. Of the more than 70 popular apps tested by the Journal, at least 11 sent potentially sensitive information to Facebook.

The apps included the period-tracking app Flo Period & Ovulation Tracker, which reportedly shared with Facebook when users were having their periods or when they indicated they were trying to get pregnant. Real estate app Realtor reportedly sent Facebook the listing information viewed by users, and the top heart-rate app on Apple’s iOS, Instant Heart Rate: HR Monitor, sent users’ heart rates to the company, the Journal’s testing found.

The apps reportedly send the data using Facebook’s software-development kit, or SDK, which helps developers integrate certain features into their apps. Facebook’s SDK includes an analytics service that helps app developers understand their users’ trends. The Journal said developers who sent sensitive information to Facebook used “custom app events” to send data like ovulation times and homes that users had marked as favorites on some apps.
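To make “custom app events” concrete, here is a minimal Python sketch of the kind of event record an analytics SDK batches up and reports to its backend; the event name, fields, and helper are hypothetical illustrations, not Facebook’s actual wire format:

```python
# Illustrative only: the shape of a "custom app event" an analytics SDK
# might record and batch. Event names, fields, and send behavior are
# hypothetical, not Facebook's actual wire format.
import json
import time

def make_custom_event(name: str, params: dict) -> dict:
    return {
        "event_name": name,           # chosen freely by the app developer
        "logged_at": int(time.time()),
        "params": params,             # arbitrary app-defined key/values
    }

# The Journal found events carrying data like ovulation times and
# favorited home listings; a developer-defined event could look like:
event = make_custom_event("ovulation_window_logged", {"cycle_day": 14})

batch = json.dumps([event])
print(batch)  # a real SDK would POST a batch like this to its backend
```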

A Facebook spokesperson told CNBC, “Sharing information across apps on your iPhone or Android device is how mobile advertising works and is industry standard practice. The issue is how apps use information for online advertising. We require app developers to be clear with their users about the information they are sharing with us, and we prohibit app developers from sending us sensitive data. We also take steps to detect and remove data that should not be shared with us.”

Source: Facebook receives personal health data from apps: WSJ

W3C approves WebAuthn as the web standard for password-free logins using FIDO2

The World Wide Web Consortium (W3C) today declared that the Web Authentication API (WebAuthn) is now an official web standard. First announced by the W3C and the FIDO Alliance in November 2015, WebAuthn is now an open standard for password-free logins on the web. It is supported by W3C contributors, including Airbnb, Alibaba, Apple, Google, IBM, Intel, Microsoft, Mozilla, PayPal, SoftBank, Tencent, and Yubico.

The specification lets users log into online accounts using biometrics, mobile devices, and/or FIDO security keys. WebAuthn is supported by Android and Windows 10. On the browser side, Google Chrome, Mozilla Firefox, and Microsoft Edge all added support last year. Apple has supported WebAuthn in preview versions of Safari since December.

Killing the password

“Now is the time for web services and businesses to adopt WebAuthn to move beyond vulnerable passwords and help web users improve the security of their online experiences,” W3C CEO Jeff Jaffe said in a statement. “W3C’s Recommendation establishes web-wide interoperability guidance, setting consistent expectations for web users and the sites they visit. W3C is working to implement this best practice on its own site.”

Although the W3C hasn’t adopted its own creation yet, WebAuthn is already implemented on sites such as Dropbox, Facebook, GitHub, Salesforce, Stripe, and Twitter. Now that WebAuthn is an official standard, the hope is that other sites will jump on board as well, leading to more password-free logins across the web.

But it’s not just the web. The FIDO Alliance wants to kill the password everywhere, a goal it has been working on for years and will likely still be working on for years to come.

FIDO2

W3C’s WebAuthn recommendation is a core component of the FIDO Alliance’s FIDO2 set of specifications. FIDO2 is a standard that supports public key cryptography and multifactor authentication — specifically, the Universal Authentication Framework (UAF) and Universal Second Factor (U2F) protocols. To help spur adoption, the FIDO Alliance provides testing tools and a certification program.

FIDO2 attempts to address traditional authentication issues in four ways:

  • Security: FIDO2 cryptographic login credentials are unique across every website; biometrics or other secrets like passwords never leave the user’s device and are never stored on a server. This security model eliminates the risks of phishing, all forms of password theft, and replay attacks.
  • Convenience: Users log in with simple methods such as fingerprint readers, cameras, FIDO security keys, or their personal mobile device.
  • Privacy: Because FIDO keys are unique for each internet site, they cannot be used to track users across sites.
  • Scalability: Websites can enable FIDO2 via an API call across all supported browsers and platforms on billions of devices consumers use every day.
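To make the registration ceremony concrete, here is a minimal server-side sketch using Yubico’s python-fido2 library; the relying-party and user values are placeholders, and class and argument names vary somewhat between library versions:

```python
# Minimal sketch: start a WebAuthn registration ceremony server-side with
# Yubico's python-fido2. Relying party and user values are placeholders,
# and the API differs slightly across library versions.
from fido2.server import Fido2Server
from fido2.webauthn import (
    PublicKeyCredentialRpEntity,
    PublicKeyCredentialUserEntity,
)

rp = PublicKeyCredentialRpEntity(name="Example Site", id="example.com")
server = Fido2Server(rp)

user = PublicKeyCredentialUserEntity(
    id=b"user-handle-1234",
    name="alice@example.com",
    display_name="Alice",
)

# Produces the challenge and options the browser hands to
# navigator.credentials.create(); `state` stays server-side and is used
# to verify the authenticator's response in register_complete().
options, state = server.register_begin(user, user_verification="discouraged")
print(options)
```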

“The Web Authentication component of FIDO2 is now an official web standard from W3C, an important achievement that represents many years of industry collaboration to develop a practical solution for phishing-resistant authentication on the web,” FIDO Alliance executive director Brett McDowell said in a statement. “With this milestone, we’re moving into the next phase of our shared mission to deliver simpler, stronger authentication to everyone using the internet today, and for years to come.”

Source: W3C approves WebAuthn as the web standard for password-free logins

Missing Out On Deep Sleep Causes Alzheimer’s Plaques to Build Up

Getting enough deep sleep might be the key to preventing dementia. In a series of recent experiments on mice, researchers discovered that deep sleep helps the brain clear out potentially toxic waste. The discovery reinforces how critical quality sleep is for brain health and suggests sleep therapies might curb the advance of memory-robbing ailments, like Alzheimer’s disease.

“Alzheimer’s disease is a major problem for the patients, their families and society,” said Maiken Nedergaard, a neurologist at the University of Rochester Medical Center in New York, who led the new research. “Understanding how sleep can improve clearance of amyloid could have major impact on treatment.”

Clearing The Clutter

Cerebrospinal fluid churns through a system of brain tunnels piped in the spaces between brain cells and blood vessels. Scientists call it the glymphatic system. This system circulates nutrients like glucose, the brain’s primary energy source, and washes away potentially toxic waste.

And it may be the reason why animals even need sleep. The system takes out the brain’s trash when we’re asleep, and it shuts down when we’re awake. Nedergaard and her team were curious whether the system works best and clears more waste — like the beta-amyloid plaques that drive Alzheimer’s — when animals are in deep sleep.

To find out, the researchers used six different anesthetics to put mice into deep sleep. Then they tracked cerebrospinal fluid as it flowed into the brain. As the mice slept, the researchers watched the rodents’ brain activity on an electroencephalograph, or EEG, and recorded the animals’ blood pressures and heart and respiratory rates.

Rest And Restore

Mice anesthetized with a combination of two drugs, ketamine and xylazine, showed the strongest deep sleep brain waves and these brain waves predicted CSF flow into the brain, the researchers found. Their findings imply that the glymphatic system is indeed more active during the deepest sleep.

When the researchers analyzed the mice’s vital signs, they were surprised to find the animals anesthetized with the deep sleep drug combo of ketamine and xylazine also had the lowest heart rates, Nedergaard and her team report Wednesday in the journal Science Advances. The discovery means “low heart rate, which is a characteristic of athletes, is also a potent enhancer of glymphatic flow,” Nedergaard said. The results may explain why exercise buffers against poor memory.

The findings also have implications for people undergoing surgery. General anesthesia as well as long-term sedation in the intensive care unit is associated with delirium and difficulty with memory, especially in the elderly.

But most importantly, the research shows quality sleep is vital for brain health. “Focusing on sleep in the early stages of dementia might be able to slow progression of the disease,” Nedergaard said.

Source: Missing Out On Deep Sleep Causes Alzheimer’s Plaques to Build Up – D-brief

Massive Database Leak Gives Us a Window into China’s Digital Surveillance State

Earlier this month, security researcher Victor Gevers found and disclosed an exposed database live-tracking the locations of about 2.6 million residents of Xinjiang, China, offering a window into what a digital surveillance state looks like in the 21st century.

Xinjiang is China’s largest province, and home to China’s Uighurs, a Turkic minority group. Here, the Chinese government has implemented a testbed police state where an estimated 1 million individuals from these minority groups have been arbitrarily detained. Among the detainees are academics, writers, engineers, and relatives of Uighurs in exile. Many Uighurs abroad worry for their missing family members, who they haven’t heard from for several months and, in some cases, over a year.

Although relatively little news gets out of Xinjiang to the rest of the world, we’ve known for over a year that China has been testing facial-recognition tracking and alert systems across Xinjiang and mandating the collection of biometric data—including DNA samples, voice samples, fingerprints, and iris scans—from all residents between the ages of 12 and 65. Reports from the province in 2016 indicated that Xinjiang residents can be questioned over the use of mobile and Internet tools; just having WhatsApp or Skype installed on your phone is classified as “subversive behavior.” Since 2017, the authorities have instructed all Xinjiang mobile phone users to install a spyware app in order to “prevent [them] from accessing terrorist information.”

The prevailing evidence of mass detention centers and newly-erected surveillance systems shows that China has been pouring billions of dollars into physical and digital means of pervasive surveillance in Xinjiang and other regions. But it’s often unclear to what extent these projects operate as real, functional high-tech surveillance, and how much they are primarily intended as a sort of “security theater”: a public display of oppression and control to intimidate and silence dissent.

Now, this security leak shows just how extensively China is tracking its Xinjiang residents: how parts of that system work, and what parts don’t. It demonstrates that the surveillance is real, even as it raises questions about the competence of its operators.

A Brief Window into China’s Digital Police State

Earlier this month, Gevers discovered an insecure MongoDB database filled with records tracking the location and personal information of 2.6 million people located in the Xinjiang Uyghur Autonomous Region. The records include individuals’ national ID number, ethnicity, nationality, phone number, date of birth, home address, employer, and photos.

Over a period of 24 hours, 6.7 million individual GPS coordinates were streamed to and collected by the database, linking individuals to various public camera streams and identification checkpoints associated with location tags such as “hotel,” “mosque,” and “police station.” The GPS coordinates were all located within Xinjiang.

This database is owned by the company SenseNets, a private AI company advertising facial recognition and crowd analysis technologies.

A couple of days later, Gevers reported a second open database tracking the movement of millions of cars and pedestrians. Violations like jaywalking, speeding, and running a red light are detected, triggering the camera to take a photo and ping a WeChat API, presumably to try to tie the event to an identity.

Database Exposed to Anyone with an Internet Connection for Half a Year

China may have a working surveillance program in Xinjiang, but it’s a shockingly insecure security state. Anyone with an Internet connection had access to this massive honeypot of information.

Gevers also found evidence that these servers were previously accessed by other known global entities such as a Bitcoin ransomware actor, who had left behind entries in the database. To top it off, this server was also vulnerable to several known exploits.

In addition to this particular surveillance database, a Chinese cybersecurity firm revealed that at least 468 MongoDB servers had been exposed to the public Internet after Gevers and other security researchers started reporting them. Among these instances: databases containing detailed information about remote access consoles owned by China General Nuclear Power Group, and GPS coordinates of bike rentals.
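For context on how researchers like Gevers find these, a short pymongo sketch shows how little it takes to enumerate an unauthenticated instance; the host below is a placeholder, and this should only ever be pointed at systems you are authorized to test:

```python
# Minimal sketch: check whether a MongoDB instance accepts unauthenticated
# connections and list what it exposes. Host is a placeholder (TEST-NET);
# only probe systems you are authorized to test.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

HOST = "203.0.113.10"

try:
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=3000)
    # With no authentication configured, this call succeeds and reveals
    # every database on the server.
    print("exposed databases:", client.list_database_names())
except PyMongoError as exc:
    print("not accessible:", exc)
```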

A Model Surveillance State for China

China, like many other state actors, may simply be willing to tolerate sloppy engineering if its private contractors can reasonably claim to be delivering the goods. Last year, the government spent an extra $3 billion on security-related construction in Xinjiang, and the New York Times reported that China’s police planned to spend an additional $30 billion on surveillance in the future. Even poorly-executed surveillance is massively expensive, and Beijing is no doubt telling the people of Xinjiang that these investments are being made in the name of their own security. But the truth, revealed only through security failures and careful security research, tells a different story: China’s leaders seem to care little for the privacy, or the freedom, of millions of its citizens.

Source: Massive Database Leak Gives Us a Window into China’s Digital Surveillance State | Electronic Frontier Foundation

Scientists turn CO2 ‘back into coal’ in breakthrough carbon capture experiment

The research team led by RMIT University in Melbourne, Australia, developed a new technique using a liquid metal electrolysis method which efficiently converts CO2 from a gas into solid particles of carbon.

Published in the journal Nature Communications, the authors say their technology offers an alternative pathway for “safely and permanently” removing CO2 from the atmosphere.

Current carbon capture techniques involve turning the gas into a liquid and injecting it underground, but its use is not widespread due to issues around economic viability, and environmental concerns about leaks from the storage site.

The new technique results in solid flakes of carbon, similar to coal, which may be easier to store safely.

To convert CO2, the researchers designed a liquid metal catalyst with specific surface properties that made it extremely efficient at conducting electricity while chemically activating the surface.

The carbon dioxide is dissolved in a beaker filled with an electrolyte liquid along with a small amount of the liquid metal, which is then charged with an electrical current.

The CO2 slowly converts into solid flakes, which are naturally detached from the liquid metal surface, allowing for continuous production.

RMIT researcher Dr Torben Daeneke said: “While we can’t literally turn back time, turning carbon dioxide back into coal and burying it back in the ground is a bit like rewinding the emissions clock.”

“To date, CO2 has only been converted into a solid at extremely high temperatures, making it industrially unviable.

“By using liquid metals as a catalyst, we’ve shown it’s possible to turn the gas back into carbon at room temperature, in a process that’s efficient and scalable.

“While more research needs to be done, it’s a crucial first step to delivering solid storage of carbon.”

Lead author, Dr Dorna Esrafilzadeh said the carbon produced by the technique could also be used as an electrode.

“A side benefit of the process is that the carbon can hold electrical charge, becoming a supercapacitor, so it could potentially be used as a component in future vehicles,” she said.

“The process also produces synthetic fuel as a by-product, which could also have industrial applications.”

Source: Scientists turn CO2 ‘back into coal’ in breakthrough carbon capture experiment | The Independent

Google’s DeepMind can predict wind energy income a day in advance

Wind power has become increasingly popular, but its success is limited by the fact that wind comes and goes as it pleases, making it hard for power grids to count on the renewable energy and less likely to fully embrace it. While we can’t control the wind, Google has an idea for the next best thing: using machine learning to predict it.

Google and DeepMind have started testing machine learning on Google’s own wind turbines, which are part of the company’s renewable energy projects. Beginning last year, they fed weather forecasts and existing turbine data into DeepMind’s machine learning platform, which churned out wind power predictions 36 hours ahead of actual power generation. Google could then make supply commitments to power grids a full day before delivery. That predictability makes it easier and more appealing for energy grids to depend on wind power, and as a result, it boosted the value of Google’s wind energy by roughly 20 percent.
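Neither company has published the model details, so the following is a purely generic Python sketch of the idea (forecast features in, next-day power out), with made-up data and an arbitrary regressor; it is not DeepMind’s system:

```python
# Generic illustration only: predict next-day wind power output from
# weather-forecast features. Data, features, and regressor are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Stand-in training data: [wind speed (m/s), direction (deg), temperature (C)]
X = rng.uniform([0.0, 0.0, -10.0], [25.0, 360.0, 35.0], size=(5_000, 3))
# Toy target: power grows roughly with the cube of wind speed, then caps out.
y = np.minimum(X[:, 0] ** 3, 3_000.0) + rng.normal(0.0, 50.0, size=5_000)

model = GradientBoostingRegressor().fit(X, y)

tomorrow = np.array([[12.0, 220.0, 8.0]])  # tomorrow's forecast, made up
print("predicted output:", model.predict(tomorrow))
```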

Not only does this hint at how machine learning could boost the adoption of wind energy, it’s also an example of machine learning being put to good use — solving critical problems and not just jumping into your text thread to recommend a restaurant when you start talking about tapas. For DeepMind, it’s a high-profile use of its technology and proof that it’s not only useful for beating up professional StarCraft II players.

Source: Google’s DeepMind can predict wind patterns a day in advance

Studies Keep Showing That the Best Way to Stop Piracy Is to Offer Cheaper, Better Alternatives

Study after study continues to show that the best approach to tackling internet piracy is to provide would-be customers with high-quality, low-cost alternatives.

For decades the entertainment industry has waged a scorched-earth assault on internet pirates. Usually this involves either filing mass lawsuits against these users, or in some instances trying to kick them off of the internet entirely. These efforts historically have not proven successful.

Throughout that time, data has consistently showcased how treating such users like irredeemable criminals may not be the smartest approach. For one, studies show that pirates are routinely among the biggest purchasers of legitimate content, and when you provide these users access to above-board options, they’ll usually take you up on the proposition.

That idea was again supported by a new study this week out of New Zealand first spotted by TorrentFreak. The study, paid for by telecom operator Vocus Group, surveyed a thousand New Zealanders last December, and found that while half of those polled say they’ve pirated content at some point in their lives, those numbers have dropped as legal streaming alternatives have flourished.

The study found that 11 percent of New Zealand consumers still obtain copyrighted content via illegal streams, and 10 percent download infringing content via BitTorrent or other platforms. But it also found that users are increasingly likely to obtain that same content via over the air antennas (75 percent) or legitimate streaming services like Netflix (55 percent).

“In short, the reason people are moving away from piracy is that it’s simply more hassle than it’s worth,” Vocus Group NZ executive Taryn Hamilton said in a statement.

Historically, the entertainment industry has attempted to frame pirates as freeloaders exclusively interested in getting everything for free. In reality, it’s wiser to view them as frustrated potential consumers who’d be happy to pay for content if it was more widely available, Hamilton noted.

“The research confirms something many internet pundits have long instinctively believed to be true: piracy isn’t driven by law-breakers, it’s driven by people who can’t easily or affordably get the content they want,” she said.

But it’s far more than just instinct. Studies from around the world consistently come to the same conclusion, says Annemarie Bridy, a University of Idaho law professor specializing in copyright.

Bridy pointed to a number of international, US, and EU studies that all show that users will quickly flock to above-board options when available. Especially given the potential privacy and security risks involved in downloading pirated content from dubious sources.

“This is especially true given that ‘pirate sites’ are now commonly full of malware and other malicious content, making them risky for users,” Bridy said. “It seems like a no-brainer that when you lower barriers to legal content acquisition in the face of rising barriers to illegal content acquisition, users opt for legal content.”

Source: Studies Keep Showing That the Best Way to Stop Piracy Is to Offer Cheaper, Better Alternatives – Motherboard

Ready for another fright? Spectre flaws in today’s computer chips can be exploited to hide, run stealthy malware

Co-authored by three computer science boffins from the University of Colorado, Boulder in the US – Jack Wampler, Ian Martiny, and Eric Wustrow – the paper, “ExSpectre: Hiding Malware in Speculative Execution,” describes a way to compile malicious code into a seemingly innocuous payload binary, so it can be executed through speculative execution without detection.

Speculative execution is a technique in modern processors that’s used to improve performance, alongside out-of-order execution and branch prediction. CPUs will speculate about future instructions and execute them, keeping the results and saving time if they’ve guessed the program path correctly and discarding them if not.

But last year’s Spectre flaws showed that sensitive transient data arising from these forward-looking calculations can be exfiltrated and abused. Now it turns out that this feature of chip architecture can be used to conceal malicious computation in the “speculative world.”

The Boulder-based boffins have devised a way in which a payload program and a trigger program can interact to perform concealed calculations. The payload and trigger program would be installed through commonly used attack vectors (e.g. trojan code, a remote exploit, or phishing) and need to run on the same CPU. The trigger program can also take the form of special input to the payload or a resident application that interacts with the payload program.

“When a separate trigger program runs on the same machine, it mistrains the CPU’s branch predictor, causing the payload program to speculatively execute its malicious payload, which communicates speculative results back to the rest of the payload program to change its real-world behavior,” the paper explains.

The result is stealth malware. It defies detection through current reverse engineering techniques because it executes in a transient environment not accessible to the static or dynamic analysis used by most current security engines. Even if the trigger program is detected and removed, the payload code will keep operating.

There are limits to this technique, however. Among other constraints, the malicious code can only consist of somewhere between one hundred and two hundred instructions. And the rate at which data can be obtained isn’t particularly speedy: the researchers devised a speculative primitive that could decrypt 1KB of data and exfiltrate it at a rate of 5.38 Kbps, assuming 20 redundant iterations to ensure data correctness.

Source: Ready for another fright? Spectre flaws in today’s computer chips can be exploited to hide, run stealthy malware • The Register

Amazon Ring Doorbell allows people to eavesdrop with video and even insert footage

Plaintext transmission of audio/video footage to the Ring application allows for arbitrary surveillance and injection of counterfeit traffic, effectively compromising home security (CVE-2019-9483).

[…]

We moved over to sniffing the application. Here we see a more sensible SIP/TLS approach, with pretty much all notifications, updates and information being passed via HTTPS. However, the actual RTP traffic seems plain!

The data seems sensible, and therefore we might be able to extract it. Using our handy videosnarf utility, we get a viewable MPEG file. This means anyone with access to incoming packets can see the feed! Similarly, we can also extract the audio G711 encoded stream.
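The post doesn’t detail its capture tooling beyond videosnarf, but the underlying point (unencrypted RTP is readable by anyone on the path) takes only a few lines of scapy to sketch; the filter and timeout are placeholders, and this should only be run against your own traffic:

```python
# Rough sketch: dump payloads of unencrypted RTP packets to a file, which
# a tool like videosnarf can then turn into viewable media. Capture only
# traffic you are authorized to inspect.
from scapy.all import sniff, UDP
from scapy.layers.rtp import RTP

out = open("stream.raw", "wb")

def grab(pkt):
    if UDP in pkt and pkt[UDP].payload:
        rtp = RTP(bytes(pkt[UDP].payload))  # parse the UDP payload as RTP
        out.write(bytes(rtp.payload))       # append the raw media payload

sniff(filter="udp", prn=grab, timeout=30)
out.close()
```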

[…]

Capturing the Doorbell feed is already great, but why stop there when we can inject our own? We developed a POC, whereby we first captured real footage in a so-called “recon mode”. Then, in “active mode” we can drop genuine traffic and inject the acquired footage. This hack works smoothly and is undetectable from within the app. At Mobile World Congress 2019, we publicly demonstrated the attack.

Is it really Jesus at the door?

The attack scenarios possible are far too numerous to list, but for example imagine capturing an Amazon delivery and then streaming this feed. It would make for a particularly easy burglary. Spying on the doorbell allows for gathering of sensitive information – household habits, names and details about family members including children, all of which make the target an easy prey for future exploitation. Letting the babysitter in while kids are at home could be a potentially life threatening mistake.

Are you sure about letting this killer clown in?

The main takeaway from this research is that security is only as strong as its weakest link. Encrypting the upstream RTP traffic will not make forgery any harder if the downstream traffic is not secure, and encrypting the downstream SIP transmission does not thwart stream interception. When dealing with such sensitive data like a doorbell, secure transmission is not a feature but a must, as the average user will not be aware of potential tampering.

Important note: Ring has patched this vulnerability in version 3.4.7 of the Ring app (without notifying users in the patch notes!). Please make sure to upgrade to a newer version ASAP, as the affected versions are still backward compatible and vulnerable.

Source: One Ring to rule them all, and in darkness bind them

Renewable energy policies actually work

For most of the industrial era, a nation’s carbon emissions moved in lock step with its economy. Growth meant higher emissions. But over the past decade or so, that has changed. Even as the global economy continued to grow, carbon emissions remained flat or dropped a bit.

It would be simple to ascribe this trend to the explosion in renewable energy, but reality is rarely so simple. Countries like China saw explosive growth in both renewables and fossil-fuel use; Germany and Japan expanded renewables even as they slashed nuclear power; and in the United States, the federal government has been MIA, leading to a chaotic mix of state and local efforts. So it’s worth taking a careful look into what exactly might be causing the drop in emissions.

That’s precisely what an international group of researchers has now done, analyzing what’s gone on in 79 countries, including some that have dropped emissions, and others that have not. The researchers find that renewable energy use is a big factor, but so is reduced energy use overall. And for both of these factors, government policy appears to play a large role.

Who’s losing?

The researchers started by identifying countries that show a “peak and decline” pattern of carbon emissions since the 1990s. They came up with 18, all but one of them in Europe—the exception is the United States. For comparison, they created two different control groups of 30 countries, neither of which has seen emissions decline. One group saw high GDP growth, while the second saw moderate economic growth; in the past, these would have been associated with corresponding changes in emissions.

Within each country, the researchers looked into whether there were government energy policies that could influence the trajectory of emissions. They also examined four items that could drive changes in emissions: total energy use, share of energy provided by fossil fuels, the carbon intensity of the overall energy mix, and efficiency (as measured by energy losses during use).
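The paper’s exact decomposition method isn’t spelled out here, but the flavor of such an analysis can be shown with a simple Kaya-style identity: if emissions are the product of total energy use, the fossil share of that energy, and the carbon intensity of fossil energy, then log-changes in the three factors sum exactly to the log-change in emissions. A toy Python version with placeholder numbers:

```python
# Illustrative Kaya-style decomposition, not the study's actual method.
# Emissions = energy use * fossil share * CO2 intensity of fossil energy,
# so log-changes in the factors sum exactly to the log-change in emissions.
import math

# Placeholder values for a hypothetical country in 2005 and 2015.
energy = {"2005": 100.0, "2015": 93.0}       # total energy use (EJ)
fossil_share = {"2005": 0.85, "2015": 0.78}  # fossil fraction of energy
intensity = {"2005": 70.0, "2015": 69.0}     # tCO2 per TJ of fossil energy

def log_change(series):
    return math.log(series["2015"] / series["2005"])

parts = {
    "energy use": log_change(energy),
    "fossil share": log_change(fossil_share),
    "carbon intensity": log_change(intensity),
}
total = sum(parts.values())
for name, value in parts.items():
    print(f"{name}: {value / total:.0%} of the emissions change")
```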

On average, emissions in the decline group dropped by 2.4 percent over the decade between 2005 and 2015.

Half of this drop came from lowering the percentage of fossil fuels used, with renewables making a large contribution; another 35 percent came from a drop in energy use. But the most significant factor varied from country to country. Austria, Finland, and Sweden saw a drop in the share of fossil fuels within their energy mix. In contrast, a drop in total energy use was the biggest factor for France, Ireland, the Netherlands, Spain, and the United Kingdom. The US was an odd one out, with all four possible factors playing significant roles in causing emissions to drop.

For the two control groups, however, there was a single dominant factor: total energy use accounted for 75 and 80 percent of the change in the low- and high-economic-growth groups, respectively. But there was considerably more variability in the low-economic-growth group. All of the high-growth group saw increased energy use contribute 60 percent of the growth in emissions or more. In contrast, some of the low-growth group actually saw their energy use drop.

Policy-driven change

So why are some countries so successful at dropping their emissions? Part of it is likely to be economic growth. While the countries did experience economic expansion over the study period, the growth was quite low (a bit over 1 percent), which implies that a booming economy could potentially reverse this progress.

But that’s likely to be only part of the answer. By 2015, the countries in the group that saw declining emissions had an average of 35 policies that promoted renewable energy and another 23 that promoted energy efficiency. Both of those numbers are significantly higher than the averages for the control groups. And there’s evidence that these policies are effective. The number of pro-efficiency policies correlated with the drop in energy use, while the number of renewable policies correlated with the drop in the share of fossil fuels.

The control group of rapidly expanding economies did see an effect of renewable energy policies in that the fraction of fossil-fuel use dropped—emissions went up because total energy use expanded faster than renewables could offset it. Similarly, conservation policies correlated with a drop in energy intensity per unit of GDP. So in both those cases, the evidence is consistent with policies keeping matters from being worse than they might have been otherwise.

Overall, the evidence is clearly consistent with the idea that pro-renewable and efficiency policies work, lowering total energy use and the role of fossil fuels in providing that energy. But we haven’t reached the point where they have a large-enough impact that they can consistently offset the emissions associated with economic growth. And even in countries where overall emissions do drop, the effect isn’t large enough to help them reach the sort of deep emissions cuts needed to reach the goals set forth in the Paris Agreement.

The analysis isn’t sufficient to tell us what would need to change in order to see more consistent and dramatic effects. Additional or stronger policies might do the trick, but it’s also possible that they’ll hit a ceiling. In addition, policies not considered here—those promoting carbon capture, for example—might ultimately become critical.

Source: Renewable energy policies actually work | Ars Technica

Stonehenge: Geologists have found exactly where some rocks came from

Five thousand years after people in the British Isles began building Stonehenge, scientists now know precisely where some of the massive rocks came from and how they were unearthed.

A team of 12 geologists and archaeologists from across the United Kingdom unveiled research this month that traces some of the prehistoric monument’s smaller stones to two quarries in western Wales.
The team also found evidence of prehistoric tools, stone wedges and digging activity in those quarries, dating to around 3000 BC, the era when Stonehenge’s first stage was constructed.

It’s rock-solid evidence that humans were involved in moving these “bluestones” to where they sit today, a full 150 miles away, the researchers say.

“It finally puts to rest long-standing arguments over whether the bluestones were moved by human agency or by glacial action,” University of Southampton Archeology Professor Joshua Pollard said in an email.
[…]
Scientists have long known the stones came from the Preseli Hills, but the new research helps disprove claims about the original rock locations made in 1923 by the famous British geologist H.H. Thomas. The correct quarries, called Carn Goedog and Craig Rhos-y-felin, are on the north side of the hills — opposite their long-suspected location, the new findings indicate.

“By going back and looking in detail at the actual samples he studied, we have been able to show that none of his proposals stand up to scrutiny,” Bevins said.

Because the rocks are from the north side of the Preseli Hills, the researchers think it’s more likely the massive stones were dragged over land from Wales to England, rather than transported on river tributaries located near the south side.

It’s also possible the rocks were first used to build a stone circle in the local area before being paraded to the Salisbury plains, according to the article in the journal Antiquity.

Source: Stonehenge: Geologists have found exactly where some rocks came from – CNN

Incredible Experiment Gives Infrared Vision to Mice—and Humans Could Be Next

By injecting nanoparticles into the eyes of mice, scientists gave them the ability to see near-infrared light—a wavelength not normally visible to rodents (or people). It’s an extraordinary achievement, one made even more extraordinary with the realization that a similar technique could be used in humans.

Of all the remarkable things done to mice over the years, this latest achievement, described today in the science journal Cell, is among the most sci-fi.

A research team, led by Tian Xue from the University of Science and Technology of China and Gang Han from the University of Massachusetts Medical School, modified the vision of mice such that they were able to see near-infrared light (NIR), in addition to retaining their natural ability to see normal light. This was done by injecting special nanoparticles into their eyes, with the effect lasting for around 10 weeks and without any serious side effects.

[…]

Drops of fluid containing the tiny particles were injected directly in their eyes, where, using special anchors, they latched on tightly to photoreceptor cells. Photoreceptor cells—the rods and cones—normally absorb the wavelengths of incoming visible light, which the brain interprets as sight. In the experiment, however, the newly introduced nanoparticles upconverted incoming NIR into a visible wavelength, which the mouse brain was then capable of processing as visual information (in this case, they saw NIR as greenish light). The nanoparticles clung on for nearly two months, allowing the mice to see both NIR and visible light with minimal side effects.

[Figure: When infrared light (red) reaches a photoreceptor cell (light green circle), the nanoparticles (pink circles) convert the light into visible green light. Image: Cell]

Essentially, the nanoparticles on the photoreceptor cells served as a transducer, or converter, for infrared light. The longer infrared wavelengths were captured in the retina by the nanoparticles, which then relayed them as shorter wavelengths within the visible light range. The rods and cones—which are built to absorb the shorter wavelengths—were thus able to accept this signal, and then send this upconverted information to the visual cortex for processing. Specifically, the injected particles absorbed NIR around 980 nanometers in wavelength and converted it to light in the area of 535 nanometers. For the mice, this translated to seeing the infrared light as the color green. The result was similar to seeing NIR with night-vision goggles, except that the mice were able to retain their normal view of visible light as well.
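As a quick sanity check on those wavelengths (and on why the particles must pool the energy of more than one photon), photon energy scales as E = hc/λ:

```python
# Quick arithmetic: photon energies for the wavelengths quoted above.
# Upconverting 980 nm light to 535 nm raises the energy per photon, so the
# nanoparticles must combine multiple NIR photons into one visible photon.
H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
EV = 1.602e-19  # joules per electronvolt

def photon_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

nir = photon_ev(980)      # ~1.27 eV
visible = photon_ev(535)  # ~2.32 eV
print(f"980 nm photon: {nir:.2f} eV; 535 nm photon: {visible:.2f} eV")
print(f"NIR photons needed per visible photon: at least {visible / nir:.1f}")
```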

[…]

Looking ahead, Tian and Gang would like to improve the technique with organic-based nanoparticles comprised of FDA-approved compounds, which could result in even brighter infrared vision. They’d also like to tweak the technique to make it more responsive to human biology. Optimistic about where this technology is headed, Tian and Gang have already filed a patent application related to their work.

I can already imagine the television commercials: “Ask your doctor if near-infrared vision is right for you.”

[Cell]

Source: Incredible Experiment Gives Infrared Vision to Mice—and Humans Could Be Next

How artificially brightened clouds could cool down the earth

Clouds, however, naturally reflect the sun (it’s why Venus – a planet with permanent cloud cover – shines so brightly in our night sky). Marine stratocumulus clouds are particularly important, covering around 20% of the Earth’s surface while reflecting 30% of total solar radiation. Stratocumulus clouds also cool the ocean surface directly below. Proposals to make these clouds whiter – or “marine cloud brightening” – are amongst the more serious projects now being considered by various bodies, including the US National Academies of Sciences, Engineering, and Medicine’s new “solar geoengineering” committee.

Stephen Salter, emeritus professor at the University of Edinburgh, has been one of the leading voices of this movement. In the 1970s, when Salter was working on wave and tidal power, he came across studies examining the pollution trails left by shipping. Much like the aeroplane trails we see criss-crossing the sky, satellite imagery had revealed that shipping left similar tracks in the air above the ocean – and the research showed that these trails were also brightening existing clouds.

The pollution particles had introduced “condensation nuclei” (otherwise scarce in the clean sea air) for water vapour to congregate around. Because the pollution particles were smaller than the natural particles, they produced smaller water droplets; and the smaller the water droplet, the whiter and more reflective it is. In 1990, British atmospheric scientist John Latham proposed doing this with benign, natural particles such as sea salt. But he needed an engineer to design a spraying system. So he contacted Stephen Salter.

The pollution trails left by ships on the ocean naturally brighten the clouds above (Credit: Nasa Goddard Space Flight Center)

“I didn’t realise quite how hard it was going to be,” Salter now admits. Seawater, for instance, tends to clog and corrode ordinary spray nozzles, let alone nozzles fine enough to produce particles just 0.8 microns in size. And that’s not to mention the difficulty of modelling the effects on weather and climate. But his latest design, he believes, is ready to build: an unmanned hydrofoil ship, computer-controlled and wind-powered, which pumps an ultra-fine mist of sea salt toward the cloud layer.

“Spraying about 10 cubic metres per second could undo all the [global warming] damage we’ve done to the world up until now,” Salter claims. And, he says, the annual cost would be less than the cost of hosting the annual UN Climate Conference: between $100 million and $200 million each year.
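Taking those figures at face value, a rough plausibility sketch of our own gives a feel for the engineering challenge. (The article does not say whether the 0.8 micron figure refers to the droplets themselves or to the salt particles left after evaporation, so treat this purely as an order-of-magnitude exercise.)

```python
# Order-of-magnitude sketch of Salter's quoted figures: 10 cubic metres of
# seawater per second, sprayed as droplets roughly 0.8 micron across.
# Our assumption, not Salter's spec: the 0.8 micron figure is the droplet size.
import math

FLOW_M3_PER_S = 10.0          # fleet-wide spray rate quoted by Salter
DROPLET_DIAMETER_M = 0.8e-6   # 0.8 micron
FLEET_SIZE = 300              # ships in Salter's proposed fleet

droplet_volume = (math.pi / 6) * DROPLET_DIAMETER_M ** 3  # sphere volume
droplets_per_second = FLOW_M3_PER_S / droplet_volume

print(f"volume per droplet:     {droplet_volume:.2e} m^3")
print(f"droplets/s, fleet-wide: {droplets_per_second:.2e}")               # ~4e19
print(f"droplets/s, per ship:   {droplets_per_second / FLEET_SIZE:.2e}")  # ~1e17
```

Roughly 10^17 droplets per second per ship, on these assumptions, which helps explain why Salter found the nozzle engineering so much harder than he expected.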

Salter calculates that a fleet of 300 of his autonomous ships could reduce global temperatures by 1.5C. He also believes that smaller fleets could be deployed to counteract regional extreme weather events. Hurricane seasons and El Niño, exacerbated by high sea temperatures, could be tamed by targeted cooling via marine cloud brightening. A PhD thesis from the University of Leeds in 2012 stated that cloud brightening could “decrease sea surface temperatures during peak tropical cyclone season… [reducing] the energy available for convection and may reduce intensity of storms”.

Salter boasts that 160 of his ships could “moderate an El Niño event, and a few hundred [would] stop hurricanes”. The same could be done, he says, to protect large coral reefs such as the Great Barrier Reef, and even cool the polar regions to allow sea ice to return.

Hazard warning

So, what’s the catch? Well, there’s a very big catch indeed. The potential side-effects of solar geoengineering on the scale needed to slow hurricanes or cool global temperatures are not well understood. According to various theories, it could prompt droughts, flooding, and catastrophic crop failures; some even fear that the technology could be weaponised (during the Vietnam War, American forces flew thousands of “cloud seeding” missions to flood enemy troop supply lines). Another major concern is that geoengineering could be used as an excuse to slow down emissions reduction, meaning CO2 levels continue to rise and oceans continue to acidify – which, of course, brings its own serious problems.

Stephen Salter believes that a fleet of 300 of his autonomous ships could reduce global temperatures by 1.5C (Credit: James MacNeill)

A rival US academic team – The MCB Project – is less gung-ho than Salter. Kelly Wanser, the principal director of The MCB Project, is based in Silicon Valley. When it launched in 2010 with seed funding from the Gates Foundation, it received a fierce backlash. Media articles talked of “cloud-wrenching cronies” and warned of the potential for “unilateral action on geoengineering”. Since then, Wanser has kept relatively low-key.

Her team’s design is similar to commercial snow-making machines for ski resorts, yet capable of spraying “particles ten thousand times smaller [than snow]… at three trillion particles per second”. The MCB Project hopes to test this near Monterey Bay, California, where marine stratocumulus clouds waft overland. They would start with a single cloud to track its impact.

“One of the strengths of marine cloud brightening is it can be very gradually scaled,” says Wanser. “You [can] get a pretty good grasp of whether and how you are brightening clouds, without doing things that impact climate or weather.”

Such a step-by-step research effort, says Wanser, would take a decade at least. But due to the controversy it attracts, this hasn’t even started yet. Not one cloud has yet been purposefully brightened by academics – although cargo shipping still does this unintentionally, with dirty particles, every single day.

Source: BBC – Future – How artificially brightened clouds could stop climate change

Plain wrong: Millions of utility customers’ passwords stored in plain text by website builder SEDC

In September of 2018, an anonymous independent security researcher (who we’ll call X) noticed that their power company’s website was offering to email—not reset!—lost account passwords to forgetful users. Startled, X fed the online form the utility account number and the last four phone number digits it was asking for. Sure enough, a few minutes later the account password, in plain text, was sitting in X’s inbox.

This was frustrating and insecure, and it shouldn’t have happened at all in 2018. But this turned out to be a flaw common to websites designed by the Atlanta firm SEDC. After finding SEDC’s copyright notices in the footer of the local utility company’s website, X began looking for more customer-facing sites designed by SEDC. X found and confirmed SEDC’s footer—and the same offer to email plain-text passwords—in more than 80 utility company websites.

Those companies service 15 million or so clients (estimated from GIS data and in some cases from PR brags on the utility sites themselves). But the real number of affected Americans could easily be several times that large: SEDC itself claims that more than 250 utility companies use its software.
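For contrast, here is a minimal sketch of the standard way to handle passwords, using only Python's standard library (the function names are ours, purely for illustration, not anything from SEDC's code). A site built like this cannot email anyone a password, because it never stores one; a "forgot password" flow sends a single-use reset token instead.

```python
# Illustrative sketch: store only a salted, slow hash, never the password.
import hashlib
import os
import secrets

ITERATIONS = 600_000  # PBKDF2 work factor; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the password itself is discarded."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(candidate, digest)

def make_reset_token() -> str:
    """What a password-recovery email should carry: a random, short-lived,
    single-use token -- the server could not email the password even if it
    wanted to, since only the hash is stored."""
    return secrets.token_urlsafe(32)
```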

Source: Plain wrong: Millions of utility customers’ passwords stored in plain text | Ars Technica

How to fake PDF signatures

If you open a PDF document and your viewer displays a panel (like you see below) indicating that

  1. the document is signed by invoicing@amazon.de, and
  2. the document has not been modified since the signature was applied,

then you assume that the displayed content is precisely what invoicing@amazon.de has created.

During recent research, we found out that this is not the case for almost all PDF Desktop Viewers and most Online Validation Services.

So what is the problem?

With our attacks, we can use an existing signed document (e.g., amazon.de invoice) and change the content of the document arbitrarily without invalidating the signatures. Thus, we can forge a document signed by invoicing@amazon.de to refund us one trillion dollars.

To detect the attack, you would need to be able to read and understand the PDF format in depth. Most people are probably not capable of such a thing (PDF file example).

To recap: you can take any signed PDF document and create a new document which contains arbitrary content in the name of the signing user, company, ministry or state.
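One warning sign a validator can look for (our illustrative sketch below, not the researchers' tooling) is data appended after the signed byte range: a PDF signature's /ByteRange entry lists exactly which bytes the hash covers, and content added beyond that range in a later "incremental update" can change what the viewer displays without touching the signed bytes. Legitimate incremental updates exist too, so this is a red flag, not proof of forgery, and a real validator must fully parse the file.

```python
# Simplistic red-flag check: does anything follow the signed byte range?
# A /ByteRange [a b c d] means the signature hash covers bytes a..a+b and
# c..c+d of the file; bytes past c+d were never signed.
import re
import sys

def bytes_after_signed_range(path: str) -> int:
    data = open(path, "rb").read()
    ranges = re.findall(
        rb"/ByteRange\s*\[\s*(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s*\]", data)
    if not ranges:
        raise ValueError("no /ByteRange found; document may be unsigned")
    a, b, c, d = (int(n) for n in ranges[-1])  # last (most recent) signature
    return len(data) - (c + d)

if __name__ == "__main__":
    trailing = bytes_after_signed_range(sys.argv[1])
    if trailing > 0:
        print(f"warning: {trailing} bytes after the signed range "
              "(possible post-signature modification)")
    else:
        print("no data beyond the signed byte range")
```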

Source: PDF Signature Spoofing

Samsung is loading McAfee antivirus software on smart TVs – which may be impossible to uninstall

Samsung is adding bloatware to its 2019 TVs because McAfee is paying it to do so. There is arguably no reason for Samsung to offer third-party antivirus software for an operating system it develops in-house.

Partnering with software vendors is fairly common practice for large hardware manufacturers. Laptop makers frequently pre-install bloatware in return for sizable payouts, and smartphone OEMs are no different. Samsung is now installing McAfee antivirus software on its 2019 TV lineup.

Samsung claims it wants to protect users from malware. On the surface that makes sense, but Samsung runs its own Tizen OS on all of its TVs. Instead of adding more junk to a TV, why not just improve the OS? The answer is self-explanatory: Samsung would not receive a payout from McAfee if it did not install the unneeded software.

Officially, here is Samsung’s statement on the matter.

McAfee extended its contract to have McAfee Security for TV technology pre-installed on all Samsung Smart TVs produced in 2019. Along with being the market leader in the Smart TV category worldwide, Samsung is also the first company to pre-install security on these devices, underscoring its commitment to building security in from the start. McAfee Security for TV scans the apps that run on Samsung smart TVs to identify and remove malware.

Downloading and installing apps on most TVs is a tedious process that most users rarely go through. Well-known apps such as Netflix and Hulu come pre-installed on most TVs regardless of brand, making it unnecessary for most users to ever look at what other apps are available.

It may not be a big deal to have extra bloatware on a TV, but it is undesirable and may burn a little more power for no actual benefit. And if someone does take the time to target Tizen with malware, knowing that McAfee is pre-installed, there is little reason to believe they would not take the extra time to make sure their code evades detection.

Source: Samsung is loading McAfee antivirus software on smart TVs – TechSpot

Video: https://www.youtube.com/embed/bKgf5PaBzyg

China bans 23m from buying travel tickets as part of ‘social credit’ system

China has blocked millions of “discredited” travellers from buying plane or train tickets as part of the country’s controversial “social credit” system aimed at improving the behaviour of citizens.

According to the National Public Credit Information Centre, Chinese courts banned would-be travellers from buying flights 17.5 million times by the end of 2018. Citizens placed on black lists for social credit offences were prevented from buying train tickets 5.5 million times. The report released last week said: “Once discredited, limited everywhere”.

The social credit system aims to incentivise “trustworthy” behaviour through penalties as well as rewards. According to a government document about the system dating from 2014, the aim is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”

Social credit offences range from not paying individual taxes or fines to spreading false information and taking drugs. More minor violations include using expired tickets, smoking on a train or not walking a dog on a leash.

[…]

According to the report, other penalties for individuals include being barred from buying insurance, real estate or investment products. Companies on the blacklist are banned from bidding on projects or issuing corporate bonds.

The report said authorities collected more than 14m data points of “untrustworthy conduct” last year, including scams, unpaid loans, false advertising and occupying reserved seats on a train.

Source: China bans 23m from buying travel tickets as part of ‘social credit’ system | World news | The Guardian