
A Misused Microsoft Tool Leaked Data from 47 Organizations

New research shows that misconfigurations of a widely used web tool have led to the leaking of tens of millions of data records.

Microsoft’s Power Apps, a popular development platform, allows organizations to quickly create web apps, replete with public-facing websites and related backend data management. Many governments have used Power Apps to swiftly stand up COVID-19 contact-tracing interfaces, for instance.

However, incorrect configurations of the product can leave large troves of data publicly exposed to the web—which is exactly what has been happening.

Researchers with cybersecurity firm UpGuard recently discovered that as many as 47 different entities—including governments, large companies, and Microsoft itself—had misconfigured their Power Apps to leave data exposed.

The list includes some very large institutions, including the state governments of Maryland and Indiana and public agencies for New York City, such as the MTA. Large private companies, including American Airlines and transportation and logistics firm J.B. Hunt, have also suffered leaks.

UpGuard researchers write that the troves of leaked data have included plenty of sensitive material, including “personal information used for COVID-19 contact tracing, COVID-19 vaccination appointments, social security numbers for job applicants, employee IDs, and millions of names and email addresses.”
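
The core issue was that Power Apps portals could serve list data through anonymous OData feeds. As a rough illustration of how little effort it takes to check a portal, here is a minimal Python sketch; the portal URL and list name are placeholders, not confirmed endpoints from the research, and the /_odata path reflects UpGuard's description of how portals expose list data when table permissions are off:

```python
import requests

# Hypothetical portal and list name, for illustration only. Power Apps
# portals can serve list data under the /_odata path when table
# permissions are not enforced.
PORTAL = "https://example.powerappsportals.com"
LIST_NAME = "contacts"

resp = requests.get(f"{PORTAL}/_odata/{LIST_NAME}", timeout=10)

if resp.ok and resp.headers.get("Content-Type", "").startswith("application/json"):
    records = resp.json().get("value", [])
    print(f"WARNING: {len(records)} records readable without authentication")
else:
    print(f"Feed not anonymously readable (HTTP {resp.status_code})")
```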

[…]

Following UpGuard’s disclosure, Microsoft has changed permissions and default settings related to Power Apps to make the product more secure.

Source: A Misused Microsoft Tool Leaked Data from 47 Organizations

OnlyFans CEO on why site is banning porn: ‘The short answer is banks’

After facing criticism over the app’s recent decision to prohibit sexually explicit content starting in October, OnlyFans CEO Tim Stokely pointed the finger at banks for the policy change.

In an interview with the Financial Times published Tuesday, Stokely singled out a handful of banks for “unfair” treatment, saying they made it “difficult to pay our creators.”

Source: OnlyFans CEO on why site is banning porn: ‘The short answer is banks’ – CNET

Belarus Hackers Seek to Overthrow Government, release huge trove of sensitive data

[…]

The Belarusian Cyber Partisans, as the hackers call themselves, have in recent weeks released portions of a huge data trove they say includes some of the country’s most secret police and government databases. The information contains lists of alleged police informants, personal information about top government officials and spies, video footage gathered from police drones and detention centers and secret recordings of phone calls from a government wiretapping system, according to interviews with the hackers and documents reviewed by Bloomberg News.

[Image: A screenshot of footage the hackers obtained from inside Belarusian detention centers where protesters were held and allegedly beaten. Source: Belarusian Cyber Partisans]

Among the pilfered documents are personal details about Lukashenko’s inner circle and intelligence officers, as well as mortality statistics suggesting that thousands more people in Belarus died from Covid-19 than the government has publicly acknowledged.

In an interview and on social media, the hackers said they also sabotaged more than 240 surveillance cameras in Belarus and are preparing to shut down government computers with malicious software named X-App.

[…]

the data exposed by the Cyber Partisans showed “that officials knew they were targeting innocent people and used extra force with no reason.” As a result, he said, “more people are starting to not believe in propaganda” from state media outlets, which suppressed images of police violence during anti-government demonstrations last year.

[…]

The hackers have teamed up with a group named BYPOL, created by former Belarusian police officers who defected following Lukashenko’s disputed election last year. Mass demonstrations followed the election, and some police officers were accused of torturing and beating hundreds of citizens in a brutal crackdown.

[…]

The wiretapped phone recordings obtained by the hackers revealed that Belarus’s interior ministry was spying on a wide range of people, including police officers—both senior and rank-and-file—as well as officials working with the prosecutor general, according to Azarau. The recordings also offer audio evidence of police commanders ordering violence against protesters, he said.

[…]

Earlier this year, an affiliate of the group obtained physical access to a Belarus government facility and broke into the computer network while inside, the spokesman said. That laid the groundwork for the group to later gain further access, compromising some of the ministry’s most sensitive databases, he said. The stolen material includes the archive of secretly recorded phone conversations, which amounts to between 1 million and 2 million minutes of audio, according to the spokesman.

[…]

The hackers joined together in September 2020, after the disputed election. Their initial actions were small and symbolic, according to screenshots viewed by Bloomberg News. They hacked state news websites and inserted videos showing scenes of police brutality. They compromised a police “most wanted” list, adding the names of Lukashenko and his former interior minister, Yury Karayeu, to the list. And they defaced government websites with the red and white national flags favored by protesters over the official Belarusian red and green flag.

Those initial breaches attracted other hackers to the Cyber Partisans’ cause, and as it has grown, the group has become bolder with the scope of its intrusions. The spokesman said its aims are to protect the sovereignty and independence of Belarus and ultimately to remove Lukashenko from power.

[…]

Names and addresses of government officials and alleged informants obtained by the hackers have been shared with Belarusian websites, including Blackmap.org, that seek to “name and shame” people cooperating with the regime and its efforts to suppress peaceful protests, according to Viačorka and the websites themselves.

[…]

Source: Belarus Hackers Seek to Overthrow Local Government – Bloomberg

Samsung Galaxy Z Fold 3’s camera breaks after unlocking the bootloader

[…]

Samsung already makes it extremely difficult to have root access without tripping the security flags, and now the Korean OEM has introduced yet another roadblock for aftermarket development. In its latest move, Samsung disables the cameras on the Galaxy Z Fold 3 after you unlock the bootloader.

Knox is the security suite on Samsung devices, and any modifications to the device will trip it, void your warranty, and disable Samsung Pay permanently. Now, losing all the Knox-related security features is one thing, but having to deal with a broken camera is a trade-off that many will be unwilling to make. But that’s exactly what you’ll have to deal with if you wish to unlock the bootloader on the Galaxy Z Fold 3.

According to XDA Senior Members 白い熊 and ianmacd, the final confirmation screen during the bootloader unlock process on the Galaxy Z Fold 3 mentions that the operation will cause the camera to be disabled. Upon booting up with an unlocked bootloader, the stock camera app indeed fails to operate, and all camera-related functions cease to function, meaning that you can’t use facial recognition either. Anything that uses any of the cameras will time out after a while and give errors or just remain dark, including third-party camera apps.


It is not clear why Samsung has chosen to follow the path Sony once walked, but the real problem is that many users will probably overlook the warning and unlock the bootloader without knowing about this new restriction. Re-locking the bootloader does make the camera work again, which indicates that this is a software-level obstacle. With root access, it could be possible to detect and modify the responsible parameters the bootloader passes to the OS and bypass this restriction. However, according to ianmacd, Magisk in its default state isn’t enough to circumvent the barrier.

[…]

Source: Samsung Galaxy Z Fold 3’s camera breaks after unlocking the bootloader

Dust-sized supercapacitor packs the same voltage as a AAA battery

By combining miniaturized electronics with some origami-inspired fabrication, scientists in Germany have developed what they say is the smallest microsupercapacitor in existence. Smaller than a speck of dust but with a similar voltage to a AAA battery, the groundbreaking energy storage device is not only safe for use in the human body, but actually makes use of key ingredients in the blood to supercharge its performance.

[…]

These devices are known as biosupercapacitors, and the smallest one developed to date is larger than 3 mm³, but the scientists have made a huge leap forward in how tiny biosupercapacitors can be. The construction starts with a stack of polymeric layers that are sandwiched together with a light-sensitive photo-resist material that acts as the current collector, a separator membrane, and electrodes made from an electrically conductive biocompatible polymer called PEDOT:PSS.

This stack is placed on a wafer-thin surface that is subjected to high mechanical tension, which causes the various layers to detach in a highly controlled fashion and fold up origami-style into a nano-biosupercapacitor with a volume of 0.001 mm³, occupying less space than a grain of dust. These tubular biosupercapacitors are therefore 3,000 times smaller than those developed previously, but with a voltage roughly the same as an AAA battery (albeit with far lower actual current flow).

These tiny devices were then placed in saline, blood plasma and blood, where they demonstrated an ability to successfully store energy. The biosupercapacitor proved particularly effective in blood, where it retained up to 70 percent of its capacity after 16 hours of operation. Another reason blood may be a suitable home for the team’s biosupercapacitor is that the device works with inherent redox enzymatic reactions and living cells in the solution to supercharge its own charge storage reactions, boosting its performance by 40 percent.

[Image: Prof. Dr. Oliver G. Schmidt, who led the development of the novel, tiny, biocompatible supercapacitor. Credit: Jacob Müller]

The team also subjected the device to the forces it might experience in blood vessels, where flow and pressure fluctuate, by placing it in microfluidic channels (kind of like wind-tunnel testing for aerodynamics), where it stood up well. They also chained three of the devices together to successfully power a tiny pH sensor, which could be placed in blood vessels to measure pH and detect abnormalities that could be indicative of disease, such as tumor growth.

[…]

Source: Dust-sized supercapacitor packs the same voltage as a AAA battery

China puts continuous consent at the center of data protection law

[…] The new “Personal Information Protection Law of the People’s Republic of China” comes into effect on November 1st, 2021, and comprises eight chapters and 74 articles.

[…]

The Cyberspace Administration of China (CAC) said, as translated from Mandarin using automated tools:

On the basis of relevant laws, the law further refines and perfects the principles and personal information processing rules to be followed in the protection of personal information, clarifies the boundaries of rights and obligations in personal information processing activities, and improves the work systems and mechanisms for personal information protection.

The document outlines standardized data-handling processes, defines rules on big data and large-scale operations, regulates those processing data, addresses data that flows across borders, and outlines legal enforcement of its provisions. It also clarifies that state agencies are not immune from these measures.

The CAC asserts that consenting to collection of data is at the core of China’s laws and the new legislation requires continual up-to-date fully informed advance consent of the individual. Parties gathering data cannot require excessive information nor refuse products or services if the individual disapproves. The individual whose data is collected can withdraw consent, and death doesn’t end the information collector’s responsibilities or the individual’s rights – it only passes down the right to control the data to the deceased subject’s family.

Information processors must also take “necessary measures to ensure the security of the personal information processed” and are required to set up compliance management systems and internal audits.

To collect sensitive data, like biometrics, religious beliefs, and medical, health and financial accounts, information needs to be necessary, for a specific purpose and protected. Prior to collection, there must be an impact assessment, and the individual should be informed of the collected data’s necessity and impact on personal rights.

Interestingly, the law seeks to prevent companies from using big data to prey on consumers – for example setting transaction prices – or mislead or defraud consumers based on individual characteristics or habits. Furthermore, large-scale network platforms must establish compliance systems, publicly self-report their efforts, and outsource data-protective measures.

And if data flows across borders, the data collectors must establish a specialized agency in China or appoint a representative to be responsible. Organizations are required to offer clarity on how data is protected and its security assessed.

Storing data overseas does not exempt a person or company from compliance with any of the Personal Information Protection Law’s provisions.

In the end, supervision and law enforcement falls to the Cyberspace Administration and relevant departments of the State Council.

[…]

Source: China puts continuous consent at the center of data protection law • The Register

It looks like China has had a good look at the EU Cybersecurity Act and built on it. All this looks very good, and it is of course even better that Chinese governmental agencies are also mandated to follow it, but is it true? With all the governmental AI systems, cameras and facial recognition systems tracking ethnic minorities (such as the Uyghurs) and assigning good-behaviour scores, how will these be affected? Somehow I doubt they will dismantle the pervasive surveillance apparatus they have built. So even if the laws sound excellent, the proof is in the pudding.

You Can Gain Admin Privileges to Any Windows Machine by Plugging in a Razer Mouse

[…]

When you plug in one of these Razer peripherals, Windows will automatically download Razer Synapse, the software that controls certain settings for your mouse or keyboard. Said Razer software has SYSTEM privileges, since it launches from a Windows process with SYSTEM privileges.

But that’s not where the vulnerability comes into play. During installation, the setup wizard asks which folder you’d like to save the software to. When you choose a new location for the folder, you’ll see a “Choose a Folder” prompt. Press Shift and right-click on that, and you can choose “Open PowerShell window here,” which will open a new PowerShell window.

Because this PowerShell window was launched from a process with SYSTEM privileges, the PowerShell window itself now has SYSTEM privileges. In effect, you’ve turned yourself into an admin on the machine, able to perform any command you can think of in the PowerShell window.
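
If you want to convince yourself that the spawned shell really did inherit SYSTEM, a quick check from that PowerShell window might look like the sketch below (assuming Python happens to be installed on the machine; running `whoami` by itself works just as well):

```python
import ctypes
import subprocess

# Run from the PowerShell window spawned by the installer prompt.
# `whoami` prints "nt authority\system" if the shell inherited the
# installer's SYSTEM token.
out = subprocess.run(["whoami"], capture_output=True, text=True)
print("whoami:", out.stdout.strip())

# IsUserAnAdmin() is an old but still-functional Windows elevation check.
print("elevated:", bool(ctypes.windll.shell32.IsUserAnAdmin()))
```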

This vulnerability was first brought to light on Twitter by user jonhat, who tried contacting Razer about it first, to no avail. Razer did eventually follow up, confirming a patch is in the works. Until that patch is available, however, the company is inadvertently selling tools that make it easy to hack millions of computers.

[…]

Source: You Can Gain Admin Privileges to Any Windows Machine by Plugging in a Razer Mouse

Exclusive: Hacker Selling Private Data Allegedly from 70 Million AT&T Customers

A well-known threat actor with a long list of previous breaches is selling private data allegedly collected from 70 million AT&T customers. We analyzed the data and found it to include social security numbers, dates of birth, and other private information. The hacker is asking $1 million for the entire database (direct sell) and has provided RestorePrivacy with exclusive information for this report.

Update: AT&T initially denied the breach in a statement to RestorePrivacy. The hacker has responded by saying, “they will keep denying until I leak everything.”

Hot on the heels of a massive data breach at T-Mobile earlier this week, AT&T now appears to be in the spotlight. A well-known threat actor in the underground hacking scene is claiming to have private data from 70 million AT&T customers. The threat actor goes by the name of ShinyHunters and was also behind previous exploits that affected Microsoft, Tokopedia, Pixlr, Mashable, Minted, and more.

The hacker posted the leak on an underground hacking forum earlier today, along with a sample of the data that we analyzed. The original post is below:

[Image: The original post offering the data for sale on a hacking forum.]

We examined the data for this report and also reached out to the hacker who posted it for sale.

70 million AT&T customers could be at risk

In the original post that we discovered on a hacker forum, the user posted a relatively small sample of the data. We examined the sample and it appears to be authentic based on available public records. Additionally, the user who posted it has a history of major data breaches and exploits, as we’ll examine more below.

While we cannot yet confirm the data is from AT&T customers, everything we examined appears to be valid. Here is the data that is available in this leak:

  • Name
  • Phone number
  • Physical address
  • Email address
  • Social security number
  • Date of birth

Below is a screenshot from the sample of data available:

[Image: A selection of the AT&T user data that is for sale.]

In addition to the data above, the hacker has also accessed encrypted data from customers, including social security numbers and dates of birth. Here is a sample that we examined:

[Image: A sample of the encrypted data from the alleged breach of 70 million AT&T users.]

The data is currently being offered for $1 million USD for a direct sell (or flash sell) and $200,000 for access that is given to others. Assuming it is legit, this would be a very valuable breach, as other threat actors could purchase and use the information to exploit AT&T customers for financial gain.

Source: Exclusive: Hacker Selling Private Data Allegedly from 70 Million AT&T Customers | RestorePrivacy

Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban

The problem with harvesting reams of sensitive data is that it presents a very tempting target for malicious hackers, enemy governments, and other wrongdoers. That hasn’t prevented anyone from collecting and storing all of this data, secure only in the knowledge this security will ultimately be breached.

[…]

The Taliban is getting everything we left behind. It’s not just guns, gear, and aircraft. It’s the massive biometric collections we amassed while serving as armed ambassadors of goodwill. The stuff the US government compiled to track its allies is now a handy repository that will allow the Taliban to hunt down its enemies. Ken Klippenstein and Sara Sirota have more details for The Intercept.

The devices, known as HIIDE, for Handheld Interagency Identity Detection Equipment, were seized last week during the Taliban’s offensive, according to a Joint Special Operations Command official and three former U.S. military personnel, all of whom worried that sensitive data they contain could be used by the Taliban. HIIDE devices contain identifying biometric data such as iris scans and fingerprints, as well as biographical information, and are used to access large centralized databases. It’s unclear how much of the U.S. military’s biometric database on the Afghan population has been compromised.

At first, it might seem that this will only allow the Taliban to high-five each other for making the US government’s shit list. But it wasn’t just used to track terrorists. It was used to track allies.

While billed by the U.S. military as a means of tracking terrorists and other insurgents, biometric data on Afghans who assisted the U.S. was also widely collected and used in identification cards, sources said.

[…]

Source: Sensitive Data On Afghan Allies Collected By The US Military Is Now In The Hands Of The Taliban | Techdirt

Epic lawsuit’s latest claims: Google slipped tons of cash to game devs, Android makers to cement Play store dominance

Epic Games’ objections to Google’s business practices became clearer on Thursday with the release of previously redacted accusations in the gaming giant’s lawsuit against the internet goliath.

Those accusations included details of a Google-run operation dubbed Project Hug that aimed to sling hundreds of millions of dollars at developers to get them to remain within Google Play; and a so-called Premier Device Program that gave device makers extra cash if they ensured users could only get their apps from the Play store, locking out third-party marketplaces and incentivizing manufacturers not to create their own software souks.

[…]

As part of the litigation, Epic made some accusations under seal last month [PDF] because Google’s attorneys designated the allegations confidential, based on Google’s habit of keeping business arrangements secret.

But on Wednesday, Judge James Donato issued an order disagreeing with Google’s rationale and directing the redacted material to be made public.

“Google did not demonstrate how the unredacted complaints might cause it commercial harm, and permitting sealing on the basis of a party’s internal practices would leave the fox guarding the hen house,” the judge wrote [PDF].

The unredacted details, highlighted in a separate redlined filing [PDF] and incorporated into an amended complaint filed on Friday [PDF], suggest Google has gone to great lengths to discourage competing app stores and to keep developers from making waves.

For example, the documents explain how Google employs revenue-sharing and licensing agreements with Android partners (OEMs) to maintain Google Play as the dominant app store. One filing describes “Anti-Fragmentation Agreements” that prevent partners from modifying the Android operating system to offer app downloads in a way that competes with Google Play.

“Google’s documents show that it pushes OEMs into making Google Play the exclusive app store on the OEMs’ devices through a series of coercive carrots and sticks, including by offering significant financial incentives to those that do so, and withholding those benefits from those that do not,” the redlined complaint says.

These agreements allegedly included the Premier Device Program, launched in 2019, to give OEMs financial incentives like 4 per cent, or more, of Google Search revenues and 3-6 per cent of Google Play spending on their devices in return for ensuring Google exclusivity and the lack of apps with APK install rights.

[…]

Google’s highest level execs, it’s claimed, suggested giving Epic Games a deal “worth up to $208m (incremental cost to Google of $147m) over three years” to keep the game maker compliant. And if Epic did not accept, the court filing alleges, “a senior Google executive proposed that Google ‘consider approaching Tencent,’ a company that owns a minority stake in Epic, ‘to either (a) buy Epic shares from Tencent to get more control over Epic,’ or ‘(b) join up with Tencent to buy 100 per cent of Epic.'”

The filing contends that in 2019 Google’s internal estimate was that the company could lose between $1.1bn and $6bn by 2022 if Android app stores operated by Amazon and Samsung gained traction. The Epic Games Store, it’s said, could have cost Google $350m during that period.

[…]

Source: Epic lawsuit’s latest claims: Google slipped tons of cash to game devs, Android makers to cement Play store dominance • The Register

And this kind of nasty pressure is how monopolies strongarm their dominance

Court documents reveal that LG, Motorola, and HMD Global, which makes Nokia phones, are part of the Premier Device Program. Premier devices are effectively mandated to make Google’s services the “defaults for all key functions” for up to 90% of the manufacturer’s Android phones. This includes blocking apps with the ability to install APKs on the device, except for the app stores designed for and managed by the respective original equipment manufacturers (OEMs). In turn, Google promised a higher cut of search revenue earned on the device, raising the rate from 8% to 12%, which is not an insignificant increase. In some instances, Google also agreed to share up to 6% of the “Play spend” revenue from the Play Store, essentially how much money that phone made for Google based on the user’s interactions.

In addition to the other brands mentioned above, Xiaomi, Sony, Sharp, and BBK Electronics, which owns OnePlus, and overseas brands like Oppo and Vivo, were all involved in the program in varying capacities. Google even had contracts with carriers to dissuade them from launching app stores that would compete with Android’s app marketplace—explicitly demonstrating how deep pockets can prevent competition and innovation.

Source: Epic Court Documents Show How Google Pays Competitors to Not Compete – Gizmodo

Distributed Denial of Secrets – the new wikileaks

Distributed Denial of Secrets is a journalistic 501(c)(3) non-profit devoted to enabling the free transmission of data in the public interest.

We aim to avoid political, corporate or personal leanings, to act as a beacon of available information. As a transparency collective, we don’t support any cause, idea or message beyond ensuring that information is available to those who need it most—the people.

You can read more about our collective, and our decision to embrace all sources of information. At its core, however, our mission is simple:

Veritatem cognoscere ruat cælum et pereat mundus (“Know the truth, though the heavens fall and the world perish”)

Source: Distributed Denial of Secrets

Online product displays can shape your buying behavior

[…]

Items that come from the same category as the target product, such as a board game matched with other board games, enhance the chances of the target product’s purchase. In contrast, consumers are less likely to buy the target product if it is mismatched with products from different categories, for example, a board game displayed with kitchen knives.

The study utilized eye-tracking—a sensor technology that makes it possible to know where a person is looking—to examine how different types of displays influenced visual attention. Participants in the study looked at their target product for the same amount of time when it was paired with similar items or with items from different categories; however, shoppers spent more time looking at the mismatched products, even though they were only supposed to be there “for display.”

“What is surprising is that when I asked people how much they liked the target products, their preferences didn’t change between display settings,” Karmarkar said. “The findings show that it is not about how much you like or dislike the item you’re looking at, it’s about your process for buying the item. The surrounding display items don’t seem to change how much attention you give the target product, but they can influence your decision whether to buy it or not.”

Karmarkar, who holds Ph.D.s in marketing and neuroscience, says the findings suggest that seeing similar options on the page reinforces the idea to consumers that they’re making the right kind of decision to purchase an item that fits the category on display.

[…]

Source: Online product displays can shape your buying behavior

Apple’s Not Digging Itself Out of This One: scanning your pictures is dangerous and flawed

Online researchers say they have found flaws in Apple’s new child abuse detection tool that could allow bad actors to target iOS users. However, Apple has denied these claims, arguing that it has intentionally built safeguards against such exploitation.

It’s just the latest bump in the road for the rollout of the company’s new features, which have been roundly criticized by privacy and civil liberties advocates since they were initially announced two weeks ago. Many critics view the updates—which are built to scour iPhones and other iOS products for signs of child sexual abuse material (CSAM)—as a slippery slope towards broader surveillance.

The most recent criticism centers around allegations that Apple’s “NeuralHash” technology—which scans for the bad images—can be exploited and tricked to potentially target users. This started because online researchers dug up and subsequently shared code for NeuralHash as a way to better understand it. One Github user, AsuharietYgvar, claims to have reverse-engineered the scanning tech’s algorithm and published the code to his page. Ygvar wrote in a Reddit post that the algorithm was basically available in iOS 14.3 as obfuscated code and that he had taken the code and rebuilt it in a Python script to assemble a clearer picture of how it worked.

Problematically, within a couple of hours, another researcher said they were able to use the posted code to trick the system into misidentifying an image, creating what is called a “hash collision.”

[…]

However, “hash collisions” involve a situation in which two totally different images produce the same “hash” or signature. In the context of Apple’s new tools, this has the potential to create a false positive, potentially implicating an innocent person for having child porn, critics claim. The false positive could be accidental or intentionally triggered by a malicious actor.
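
For readers unfamiliar with the idea, a collision is trivial to demonstrate once a hash is short enough. The toy below brute-forces two different inputs with the same 16-bit hash; NeuralHash itself is a 96-bit perceptual hash of images, so the truncated SHA-256 here is just a stand-in to make the pigeonhole effect visible in milliseconds:

```python
import hashlib
from itertools import count

def tiny_hash(data: bytes) -> bytes:
    """Toy stand-in for a short hash: a 16-bit truncated SHA-256."""
    return hashlib.sha256(data).digest()[:2]

seen = {}
for i in count():
    msg = f"image-{i}".encode()
    h = tiny_hash(msg)
    if h in seen:
        # Two different inputs, one signature: a hash collision.
        print(f"collision: {seen[h]!r} and {msg!r} both hash to {h.hex()}")
        break
    seen[h] = msg
```

The researchers' point is that deliberately engineering such collisions against a real perceptual hash is far harder than this toy, but demonstrably feasible once the algorithm is public.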

[…]

Most alarmingly, researchers noted that it could easily be co-opted by a government or other powerful entity, which might repurpose its surveillance tech to look for other kinds of content. “Our system could easily be repurposed for surveillance and censorship,” write Mayer and his research partner, Anunay Kulshrestha, in an op-ed in the Washington Post. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”

The researchers were “so disturbed” by their findings that they subsequently declared the system dangerous, and warned that it shouldn’t be adopted by a company or organization until more research could be done to curtail the potential dangers it presented. However, not long afterward, Apple announced its plans to roll out a nearly identical system to over 1.5 billion devices in an effort to scan for CSAM. The op-ed ultimately notes that Apple is “gambling with security, privacy and free speech worldwide” by implementing a similar system in such a hasty, slapdash way.

[…]

Apple’s decision to launch such an invasive technology so swiftly and unthinkingly is a major liability for consumers. The fact that Apple says it has built safety nets around this feature is not comforting at all, he added.

“You can always build safety nets underneath a broken system,” said Green, noting that it doesn’t ultimately fix the problem. “I have a lot of issues with this [new system]. I don’t think it’s something that we should be jumping into—this idea that local files on your device will be scanned.” Green further affirmed the idea that Apple had rushed this experimental system into production, comparing it to an untested airplane whose engines are held together via duct tape. “It’s like Apple has decided we’re all going to go on this airplane and we’re going to fly. Don’t worry [they say], the airplane has parachutes,” he said.

[…]

Source: Apple’s Not Digging Itself Out of This One

Your Credit Score Should Be Based On Your Web History, IMF Says

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: tech companies have greater access to soft information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft data into credit analysis. And they do little to explain how this might work in practice.
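
For a sense of what “machine learning on soft data” means in practice, here is an entirely synthetic toy. The features, labels, and numbers are invented, and nothing here reflects the IMF authors’ actual methodology:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented "soft" signals per borrower: modern browser (0/1),
# mobile-only device (0/1), online purchases per month.
X = np.array([
    [1, 0, 42],
    [0, 1, 310],
    [1, 1, 95],
    [0, 0, 12],
    [1, 0, 201],
    [0, 1, 77],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (made up)

model = LogisticRegression().fit(X, y)
print("estimated default risk:", model.predict_proba(X)[:, 0].round(2))
```

The black-box concern writes itself: whatever weight such a model learns for “browser type” quietly becomes credit policy that no applicant can inspect or contest.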

Source: Your Credit Score Should Be Based On Your Web History, IMF Says – Slashdot

So now the banks want your browsing history. They don’t want to miss out on the surveillance economy.

How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives – disable photo backups. No alternative offered, sorry.

Photos that are sent in messaging apps like WhatsApp or Telegram aren’t scanned by Apple. Still, if you don’t want Apple to do this scanning at all, your only option is to disable iCloud Photos. To do that, open the “Settings” app on your iPhone or iPad, go to the “Photos” section, and disable the “iCloud Photos” feature. From the popup, choose the “Download Photos & Videos” option to download the photos from your iCloud Photos library.

[Image: Screenshot: Khamosh Pathak]

You can also use the iCloud website to download all photos to your computer. Your iPhone will now stop uploading new photos to iCloud, and Apple won’t scan any of your photos now.

Looking for an alternative? There really isn’t one. All major cloud-backup providers have the same scanning feature; they just do it completely in the cloud (while Apple uses a mix of on-device and cloud scanning). If you don’t want this kind of photo scanning, use local backups, NAS, or a backup service that is completely end-to-end encrypted.

Source: How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives

OK, so you stole $600m-plus from us, how about you be our Chief Security Advisor, Poly Network asks thief

The mysterious thief who stole $600m-plus in cryptocurrencies from Poly Network has been offered the role of Chief Security Advisor at the Chinese blockchain biz.

It’s been a rollercoaster ride lately for Poly Network. The outfit builds software that handles the exchange of cryptocurrencies and other assets between various blockchains. Last week, it confirmed a miscreant had drained hundreds of millions of dollars in digital tokens from its platform by exploiting a security weakness in its design.

After Poly Network urged netizens, cryptoexchanges, and miners to reject transactions involving the thief’s wallet addresses, the crook started giving the digital money back – and at least $260m of tokens have been returned. The company said it has maintained communication with the miscreant, who is referred to as Mr White Hat.

“It is important to reiterate that Poly Network has no intention of holding Mr White Hat legally responsible, as we are confident that Mr White Hat will promptly return full control of the assets to Poly Network and its users,” the organization said.

“While there were certain misunderstandings in the beginning due to poor communication channels, we now understand Mr White Hat’s vision for DeFi and the crypto world, which is in line with Poly Network’s ambitions from the very beginning — to provide interoperability for ledgers in Web 3.0.”

First, Poly Network offered him $500,000 in Ethereum as a bug bounty award. He said he wasn’t going to accept the money, though the reward was transferred to his wallet anyway. Now, the company has gone one step further and has offered him the position of Chief Security Advisor.

“We are counting on more experts like Mr White Hat to be involved in the future development of Poly Network since we believe that we share the vision to build a secure and robust distributed system,” it said in a statement. “Also, to extend our thanks and encourage Mr White Hat to continue contributing to security advancement in the blockchain world together with Poly Network, we cordially invite Mr White Hat to be the Chief Security Advisor of Poly Network.”

It’s unclear whether so-called Mr White Hat will accept the job offer or not. Judging by the messages embedded in Ethereum transactions exchanged between both parties, it doesn’t look likely at the moment. He still hasn’t returned $238m, to the best of our knowledge, and said he isn’t ready to hand over the keys to the wallet where the funds are stored. He previously claimed he had attacked Poly Network for fun and to highlight the vulnerability in its programming.

“Dear Poly, glad to see that you are moving things to the right direction! Your essays are very convincing while your actions are showing your distrust, what a funny game…I am not ready to publish the key in this week…,” according to one message he sent.

Source: OK, so you stole $600m-plus from us, how about you be our Chief Security Advisor, Poly Network asks thief • The Register

Zoom to pay $85M for lying about encryption and sending data to Facebook and Google

Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems that led to rampant “Zoombombings.”

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

As we wrote in November, the FTC said that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers. In reality, “Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC said. In real end-to-end encryption, only the users themselves have access to the keys needed to decrypt content.
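
The distinction is easy to see in code. In genuine end-to-end encryption, each participant’s device holds a private key, and the service only ever relays public keys and ciphertext. A minimal sketch using PyNaCl, showing the general shape of E2E rather than Zoom’s actual protocol:

```python
from nacl.public import PrivateKey, Box

# Each participant generates a keypair on their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Each side builds the shared box from its own *private* key and the
# peer's *public* key. The server relays only public keys and
# ciphertext, so it can never decrypt the content.
alice_box = Box(alice_key, bob_key.public_key)
bob_box = Box(bob_key, alice_key.public_key)

ciphertext = alice_box.encrypt(b"meeting audio frame")
print(bob_box.decrypt(ciphertext))  # b'meeting audio frame'
```

Zoom’s pre-2020 setup was the opposite shape: its servers held the meeting keys, which is precisely what the FTC objected to.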

[…]

Source: Zoom to pay $85M for lying about encryption and sending data to Facebook and Google | Ars Technica

Over 83 million Web Cams, Baby Monitor Feeds and other IoT devices using Kalay backend Exposed

A vulnerability is lurking in numerous types of smart devices—including security cameras, DVRs, and even baby monitors—that could allow an attacker to access live video and audio streams over the internet and even take full control of the gadgets remotely. What’s worse, it’s not limited to a single manufacturer; it shows up in a software development kit that permeates more than 83 million devices and handles over a billion connections to the internet each month.

The SDK in question is ThroughTek Kalay, which provides a plug-and-play system for connecting smart devices with their corresponding mobile apps. The Kalay platform brokers the connection between a device and its app, handles authentication, and sends commands and data back and forth. For example, Kalay offers built-in functionality to coordinate between a security camera and an app that can remotely control the camera angle. Researchers from the security firm Mandiant discovered the critical bug at the end of 2020, and they are publicly disclosing it today in conjunction with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency.

“You build Kalay in, and it’s the glue and functionality that these smart devices need,” says Jake Valletta, a director at Mandiant. “An attacker could connect to a device at will, retrieve audio and video, and use the remote API to then do things like trigger a firmware update, change the panning angle of a camera, or reboot the device. And the user doesn’t know that anything is wrong.”

The flaw is in the registration mechanism between devices and their mobile applications. The researchers found that this most basic connection hinges on each device’s “UID,” a unique Kalay identifier. An attacker who learns a device’s UID—which Valletta says could be obtained through a social engineering attack, or by searching for web vulnerabilities of a given manufacturer—and who has some knowledge of the Kalay protocol can reregister the UID and essentially hijack the connection the next time someone attempts to legitimately access the target device. The user will experience a few seconds of lag, but then everything proceeds normally from their perspective.

The attacker, though, can grab special credentials—typically a random, unique username and password—that each manufacturer sets for its devices. With the UID plus this login the attacker can then control the device remotely through Kalay without any other hacking or manipulation. Attackers can also potentially use full control of an embedded device like an IP camera as a jumping-off point to burrow deeper into a target’s network.
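
Stripped to its essence, the flaw is a registration step with no ownership check: last writer wins. The toy in-memory simulation below is invented for illustration (none of these names come from the real Kalay SDK, which is a closed library), but it captures the logic Mandiant describes:

```python
# Toy "broker" mapping each device UID to whichever host last
# registered it. The bug: registration has no ownership check.
broker: dict[str, str] = {}

def register(uid: str, host: str) -> None:
    broker[uid] = host  # last writer wins

def connect(uid: str, credentials: tuple[str, str]) -> None:
    # The client app trusts whatever host the broker returns and
    # sends the device's username/password to it.
    host = broker[uid]
    print(f"client sent credentials {credentials} to {host}")

register("ABCD1234", "camera.local")      # legitimate device registers
register("ABCD1234", "203.0.113.5")       # attacker re-registers the UID
connect("ABCD1234", ("admin", "s3cret"))  # victim's app leaks the login
```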

By exploiting the flaw, an attacker could watch video feeds in real time, potentially viewing sensitive security footage or peeking inside a baby’s crib. They could launch a denial of service attack against cameras or other gadgets by shutting them down. Or they could install malicious firmware on target devices. Additionally, since the attack works by grabbing credentials and then using Kalay as intended to remotely manage embedded devices, victims wouldn’t be able to oust intruders by wiping or resetting their equipment. Hackers could simply relaunch the attack.

“The affected ThroughTek P2P products may be vulnerable to improper access controls,” CISA wrote in its Tuesday advisory. “This vulnerability can allow an attacker to access sensitive information (such as camera feeds) or perform remote code execution. … CISA recommends users take defensive measures to minimize the risk of exploitation of this vulnerability.”

[…]

To defend against exploitation, devices need to be running Kalay version 3.1.10, originally released by ThroughTek in late 2018, or higher. But even the current Kalay SDK version (3.1.5) does not automatically fix the vulnerability. Instead, ThroughTek and Mandiant say that to plug the hole manufacturers must turn on two optional Kalay features: the encrypted communication protocol DTLS and the API authentication mechanism AuthKey.

[…]

“For the past three years, we have been informing our customers to upgrade their SDK,” ThroughTek’s Chen says. “Some old devices lack OTA [over the air update] function which makes the upgrade impossible. In addition, we have customers who don’t want to enable the DTLS because it would slow down the connection establishment speed, therefore are hesitant to upgrade.”

[…]

Source: Millions of Web Camera and Baby Monitor Feeds Are Exposed | WIRED

Firewalls and TCP middleboxes can be weaponized for gigantic DDoS attacks

Authored by computer scientists from the University of Maryland and the University of Colorado Boulder, the research is the first of its kind to describe a method to carry out DDoS reflective amplification attacks via the TCP protocol, previously thought to be unusable for such operations.

Making matters worse, researchers said the amplification factor for these TCP-based attacks is also far larger than for UDP protocols, making TCP protocol abuse one of the most dangerous forms of carrying out a DDoS attack known to date and one very likely to be abused in the future.

[…]

The technique is known as a “DDoS reflective amplification attack.”

This happens when an attacker sends network packets to a third-party server on the internet; the server processes them and creates a much larger response packet, which it then sends to a victim instead of the attacker (thanks to a technique known as IP spoofing).

The technique effectively allows attackers to reflect/bounce and amplify traffic towards a victim via an intermediary point.
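
The attacker’s leverage is the ratio between what they send and what the victim receives. A quick back-of-envelope calculation (the packet sizes here are illustrative, not measurements from the paper):

```python
# Amplification factor = bytes arriving at the victim / bytes the
# attacker spent. Sizes below are illustrative only.
request_bytes = 100       # small spoofed TCP packet sequence
response_bytes = 45_000   # e.g. a multi-packet HTML block page

print(f"amplification: {response_bytes / request_bytes:.0f}x")  # 450x
```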

[…]

The flaw they found was in the design of middleboxes: equipment installed inside large organizations to inspect network traffic.

Middleboxes usually include the likes of firewalls, network address translators (NATs), load balancers, and deep packet inspection (DPI) systems.

The research team said they found that instead of trying to replicate the entire three-way handshake in a TCP connection, they could send a combination of non-standard packet sequences to the middlebox that would trick it into thinking the TCP handshake has finished and allow it to process the connection.

[…]

Under normal circumstances, this wouldn’t be an issue, but if the attacker tried to access a forbidden website, then the middlebox would respond with a “block page,” which would typically be much larger than the initial packet—hence an amplification effect.

Following extensive experiments that began last year, the research team said that the best TCP DDoS vectors appeared to be websites typically blocked by nation-state censorship systems or by enterprise policies.

Attackers would send a malformed sequence of TCP packets to a middlebox (firewall, DPI box, etc.) that looked like an attempt to connect to pornography or gambling sites, and the middlebox would reply with an HTML block page, which, thanks to IP spoofing, it would send to victims who need not even reside on its internal network.
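
The researchers’ own test tools (linked further down) are the authoritative way to check your equipment, but to make the mechanics concrete, here is a hedged Scapy sketch of that kind of probe. A lone PSH+ACK carrying a GET for a censored domain is one of the sequences the paper reports as effective against some middleboxes; the trigger sequences vary per device, the address and domain below are placeholders, the script needs root, and it should only ever be pointed at infrastructure you control:

```python
from scapy.all import IP, TCP, Raw, sr1

# Single PSH+ACK with an HTTP request for a forbidden domain.
probe = (
    IP(dst="198.51.100.7")                      # placeholder: a host you own
    / TCP(sport=4444, dport=80, flags="PA", seq=1, ack=1)
    / Raw(b"GET / HTTP/1.1\r\nHost: forbidden.example\r\n\r\n")
)

reply = sr1(probe, timeout=2)
if reply is not None and reply.haslayer(Raw):
    ratio = len(reply[Raw].load) / len(probe[Raw].load)
    print(f"middlebox answered; payload amplification ~{ratio:.1f}x")
else:
    print("no injected response observed")
```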

[…]

Bock said the research team scanned the entire IPv4 internet address space 35 different times to discover and index middleboxes that would amplify TCP DDoS attacks.

In total, the team said they found 200 million IPv4 addresses corresponding to networking middleboxes that could be abused for attacks.

Most UDP protocols typically have an amplification factor of between 2 and 10, with very few protocols sometimes reaching 100 or more.

“We found hundreds of thousands of IP addresses that offer [TCP] amplification factors greater than 100×,” Bock and his team said, highlighting how a very large number of networking middleboxes could be abused for DDoS attacks far larger than the UDP protocols with the best amplification factors known to date.

Furthermore, the research team also found thousands of IP addresses that had amplification factors in the range of thousands and even up to 100,000,000, a number thought to be inconceivable for such attacks.

[…]

Bock told The Record they contacted several country-level Computer Emergency Readiness Teams (CERT) to coordinate the disclosure of their findings, including CERT teams in China, Egypt, India, Iran, Oman, Qatar, Russia, Saudi Arabia, South Korea, the United Arab Emirates, and the United States, where most censorship systems or middlebox vendors are based.

The team also notified companies in the DDoS mitigation field, which are most likely to see and have to deal with these attacks in the immediate future.

“We also reached out to several middlebox vendors and manufacturers, including Check Point, Cisco, F5, Fortinet, Juniper, Netscout, Palo Alto, SonicWall, and Sucuri,” the team said.

[…]

The research team also plans to release scripts and tools that network administrators can use to test their firewalls, DPI boxes, and other middleboxes and see if their devices are contributing to this problem. These tools will be available later today via this GitHub repository.

[…]

Additional technical details are available in a research paper titled “Weaponizing Middleboxes for TCP Reflected Amplification” [PDF]. The paper was presented today at the USENIX security conference, where it also received the Distinguished Paper Award.

Source: Firewalls and middleboxes can be weaponized for gigantic DDoS attacks – The Record by Recorded Future

The Humanity Globe: World Population Density per 30km^2

This visualization was created in R using the rayrender and rayshader packages to render the 3D image, and ffmpeg to combine the images into a video and add text. You can see close-ups of 6 continents in the following tweet thread:

https://twitter.com/tylermorganwall/status/1427642504082599942

The data source is the GPW-v4 population density dataset, at 15 arc-minute (~30 km) increments:

Data:

https://sedac.ciesin.columbia.edu/data/collection/gpw-v4

Rayshader:

http://www.github.com/tylermorganwall/rayshader

Rayrender:

http://www.github.com/tylermorganwall/rayrender

Here’s a link to the R code used to generate the visualization:

https://gist.github.com/tylermorganwall/3ee1c6e2a5dff19aca7836c05cbbf9ac

Source: The Humanity Globe: World Population Density per 30km^2 [OC] – Reddit Swaglett


Game Dev Turns Down $500k Exploitative Contract, explains why – looks like music industry contracts

Receiving a publishing deal from an indie publisher can be a turning point for an independent developer. But when one-man team Jakefriend was approached with an offer to invest half a million Canadian dollars into his hand-drawn action-adventure game Scrabdackle, he discovered the contract’s terms could see him signing himself into a lifetime of debt, losing all rights to his game, and even paying for it to be completed by others out of his own money.

In a lengthy thread on Twitter, indie developer Jakefriend explained the reasons he had turned down the half-million publishing deal for his Kickstarter-funded project, Scrabdackle. Already having raised CA$44,552 from crowdfunding, the investment could have seen his game released in multiple languages, with full QA testing, and launched simultaneously on PC and Switch. He just had to sign a contract including clauses that could leave him financially responsible for the game’s completion, while receiving no revenue at all, should he breach its terms.

“I turned down a pretty big publishing contract today for about half a million in total investment,” begins Jake’s thread. Without identifying the publisher, he continues, “They genuinely wanted to work with me, but couldn’t see what was exploitative about the terms. I’m not under an NDA, wanna talk about it?”

Over the following 24 tweets, the developer lays out the key issues with the contract, focusing most especially on the proposed revenue share. While the unnamed publisher would eventually offer a 50:50 split of revenues (albeit minus up to 10% for other sundry costs, including—very weirdly—international sales taxes), this wouldn’t happen until 50% of the marketing spend (approximately CA$200,000/US$159,000) and the entirety of his development funds (CA$65,000, Jake confirms to me via Discord) were recouped by sales. That works out to about 24,000 copies of the game, before which its developer would receive precisely 0% of revenue.
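
The arithmetic behind that 24,000-copies figure is worth spelling out. The net revenue per copy isn’t stated in the thread, but it falls out of the numbers given:

```python
# Recoup math from the thread's figures (all CA$).
marketing_share = 200_000   # 50% of the marketing spend
dev_funds = 65_000          # the full development advance
recoup_target = marketing_share + dev_funds

copies_quoted = 24_000      # figure from the thread
print(f"recoup target: CA${recoup_target:,}")
print(f"implied net per copy: CA${recoup_target / copies_quoted:.2f}")
# -> roughly CA$11 net per copy before Jake would see his first cent
```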

Even then, Scrabdackle’s lone developer explains, the contract made clear there would be no payments until a further 30 days after the end of the next quarter, with a further clause that allowed yet another three month delay beyond that. All this with no legal requirement to show him their financial records.

Should Jake want to challenge the sales data for the game, he’d be required to call for an audit, which he’d have to pay for whether there were issues or not. And should it turn out that there were discrepancies, there’d be no financial penalty for the publisher, merely the requirement to pay the missing amount—which he would have to hope would be enough to cover paying for the audit in the first place.

Another section of the contract explained that should there be disagreement about the direction of the game, the publisher could overrule and bring in a third-party developer to make the changes Jake would not, at Jake’s personal expense, with no spending limit on that figure.

But perhaps most surprising was a section declaring that should the developer be found in breach of the contract—something Jake explains is too ambiguously defined—then they would lose all rights to their game, receive no revenue from its sales, have to repay all the money they received, and pay for all further development costs to see the game completed. And here again there was no upper limit on what those costs could be.

It might seem obvious that no one should ever sign a contract containing clauses so ridiculous. To be liable—at the publisher’s whim—for unlimited costs to complete a game while also being required to pay back all funds (likely already spent), for no income from the game’s sales… Who would ever agree to such a thing? Well, as Jake tells me via Discord, an awful lot of independent developers, desperate for some financial support to finish their project. The contract described in his tweets might sound egregious, but the reality is that most publishing contracts offered to indie devs contain some kind of awful terms.

“My close indie dev friends discuss what we’re able to of contracts frequently,” he says, “and the only thing surprising to them about mine is that it hit all the typical red flags instead of typically most of them. We’re all extremely fatigued and disheartened by how mundane an unjust contract offer is. It’s unfair and it’s tiring.”

Jake makes it clear that he doesn’t believe the people who contacted him were being maliciously predatory, but rather they were simply too used to the shitty terms. “I felt genuinely no sense of wanting to give me a bad deal with the scouts and producers I was speaking to, but I have to assume they are aware of the problems and are just used to that being the norm as well.”

Since posting the thread, Jake tells me he’s heard from a lot of other developers who described the terms to which he objected as “sadly all-too-familiar.” At one point the creator of The Witness, Jonathan Blow, replied to the thread saying, “I can guess who the publisher is because I have seen equivalent contracts.” Except Jake’s fairly certain he’d be wrong.

“The problem is so widespread,” Jake explains, “that when you describe the worst of terms, everyone thinks they know who it is, and everyone has a different guess.”

While putting this piece together, I reached out to boutique indie publisher Mike Rose of No More Robots, to see if he had seen anything similar, and indeed who he thought the publisher might be. “Honestly, it could be anyone,” he replied via Discord. “What [Jake] described is very much the norm. All of the big publishers you like, his description is all of their contracts.”

This is very much a point that Jake wants to make clear. In fact, it’s why he didn’t identify the publisher in his thread. Rather than to spare their blushes, or harm his future opportunities, Jake explains that he did it to ensure his experience couldn’t be taken advantage of by other indie publishers. “I don’t want to let others with equally bad practices off the hook,” he tells me. “As soon as I say ‘It was SoAndSo Publishing’, everyone else can say, ‘Wow, can’t believe it, glad we’re not like that,’ and have deniability.”

I also reached out to a few of the larger indie publishers, listing the main points of contention in Jake’s thread, to see if they had any comments. The only company that replied by the time of publication was Devolver, which told me:

“Publishing contracts have dozens of variables involved and a developer should rightfully decline points and clauses that make them feel uncomfortable or taken advantage of in what should be an equitable relationship with their partner—publisher, investor, or otherwise. Rev share and recoupment in particular should be weighed on factors like investment, risk, and opportunity for both parties and ultimately land on something where everyone feels like they are receiving a fair shake on what was put forth on the project. While I have not seen the full contract and context, most of the bullet points you placed here aren’t standard practice for our team.”
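To make the stakes of “rev share and recoupment” concrete, here is a minimal sketch of the arithmetic, using entirely hypothetical numbers; the advance, store cut, and split below are illustrative, not figures from Jake’s contract or any particular publisher’s:

    # Hypothetical recoupment arithmetic; every figure here is made up.
    STORE_CUT = 0.30      # the storefront keeps roughly 30% of gross sales
    ADVANCE = 500_000     # publisher advance that must be recouped first
    DEV_SHARE = 0.40      # developer's share of net revenue after recoupment

    def developer_income(gross_sales: float) -> float:
        """Developer income, assuming the publisher recoups its full
        advance from net revenue before any revenue share kicks in."""
        net = gross_sales * (1 - STORE_CUT)   # revenue after the store's cut
        post_recoup = net - ADVANCE           # the advance comes off the top
        return max(0.0, post_recoup) * DEV_SHARE

    for gross in (500_000, 1_000_000, 2_000_000):
        print(f"Gross ${gross:,}: developer earns ${developer_income(gross):,.0f}")

The threshold effect is the point: under terms like these the developer sees nothing until the advance is fully recouped, which is why clauses that pile unlimited extra costs on top are so dangerous.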

Where does this leave Jake and the future of Scrabdackle? “The Kickstarter funds only barely pay my costs for the next 10 months,” he tells Kotaku. “So there’s no Switch port or marketing budget to speak of. Nonetheless, I feel more motivated than ever going it alone.”

I asked if he would still consider a more reasonable publishing deal at this point. “This was a hobby project that only became something more when popular demand from an incredible and large community rallied for me to build a crowdfunding campaign…A publisher can offer a lot to an indie project, and a good deal is the difference between gamedev being a year-long stint or a long-term career for me, but that’s not worth the pound of flesh I was asked for.”

Source: Game Dev Turns Down Half Million Dollar Exploitative Contract

For the music industry:

Source: Courtney Love does the math

Source: How much do musicians really make from Spotify, iTunes and YouTube?

Source: How Musicians Make Money — Or Don’t at All — in 2018

Source: Kanye’s Contracts Reveal Dark Truths About the Music Industry

Source: Smiles and tears when “slave contract” controls the lives of K-Pop artists.

Source: Youtube’s support for musicians comes with a catch

How to Control Your Android With Just Your Facial Expressions

Android is implementing this option as part of the accessibility feature, Switch Access. Switch Access adds a blue selection window to your display, and lets you use external switches, a keyboard, or the buttons on your Android to move that selection window through the many different items on your screen until you land on the one you want to select.

The big update to Switch Access is to make facial gestures the triggers that move the selection window across your screen. This new feature is part of Android Accessibility Suite’s 12.0.0 beta, which arrives packed into the latest Android 12 beta (beta 4, to be exact). If you aren’t running the beta on your Android device, you won’t be able to take advantage of this cool new feature until Google seeds Android 12 to the general public.

If you want to try it out right now, however, you can simply enroll your device in the Android 12 beta program, then download and install the work-in-progress software to your phone. Follow along on our walkthrough here to set yourself up.
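If you want to confirm the phone actually took the update before digging through settings, a quick check over adb does the job. A minimal sketch, assuming the Android platform tools (adb) are installed and on your PATH, and that USB debugging is enabled on the phone:

    # Minimal sketch: ask the connected device which Android version it runs.
    # Assumes adb (Android platform tools) is on your PATH and that
    # USB debugging is enabled on the phone.
    import subprocess

    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.release"],
        capture_output=True, text=True, check=True,
    )
    version = result.stdout.strip()
    print(f"Android version: {version}")
    if not version.startswith("12"):
        print("Camera Switches needs the Android 12 beta (beta 4 or later).")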

How to set up facial gestures on Android 12

To get started on a device running Android 12 beta 4, head over to Settings > Accessibility > Switch Access, then tap the toggle next to Use Switch Access. You’ll need to grant the feature full control over your device, which involves viewing and controlling the screen, as well as viewing and performing actions. Tap Allow to confirm.

The first time you do this, Android will automatically open the Switch Access setup guide. Here, tap Camera Switch, then tap Next. On the following page, choose between one switch or two switches, the latter of which Android recommends. With one switch, you use the same gesture to begin highlighting items on screen that you do to select a particular item. With two switches, you set one gesture to start highlighting, and a separate one to select.


We’re going to demonstrate the setup using Two switches. On the following page, choose how you’d like Android to scan through the options on a page:

  • Linear scanning (except keyboard): Move between items one at a time. If you’re using a keyboard, however, it will scan by row.
  • Row-column scanning: Scan one row at a time. After the row is selected, move through items in that list.
  • Group selection (advanced): All items will be assigned a color. You perform a face gesture corresponding to the color of the item you want to select. Narrow down the size of the group until you reach your choice.

We’ll choose Linear scanning for this walkthrough. Once you make your selection, choose Next, then choose a gesture to assign to the action Next (which is what tells the blue selection window to move through the screen). You can choose from Open Mouth, Smile, Raise Eyebrows, Look Left, Look Right, and Look Up, and can assign as many of these gestures as you want to a single action. Just know that when you assign a gesture to an action, you won’t be able to use it with another action. When finished, tap Next.


Now, choose a gesture for the action Select (which selects the item the blue selection window is hovering over). You can choose from the same list as before, barring any gestures you assigned to Next. Once you make your choice, you can start using these gestures to continue: your first gesture moves through the options, and your second gesture selects.

Finally, choose a gesture to pause or unpause camera switches. You don’t need to use this feature, but Android recommends you do. Pick your gesture or gestures, then choose Next. Once you do, the setup is done and you can now use your facial gestures to move around Android.

Other face gesture settings and options

Once you finish your setup, you’ll find some additional settings you can go through. Under Face Gesture Settings, you’ll find all the gesture options, as well as their assigned actions. Tap on one to test it out, adjust the gesture size, set the gesture duration, and edit the assignment for the gesture.


Beneath Additional settings for Camera Switches, you’ll find four more options to choose from:

  • Enhanced visual feedback: Show a visual indication of how long you have held a gesture.
  • Enhanced audio feedback: Play a sound when something on the screen changes in response to a gesture.
  • Keep screen on: Keep the screen on while Camera Switches is enabled. Camera Switches cannot unlock the screen if it turns off.
  • Ignore repeated Camera Switch triggers: Choose a duration during which multiple Camera Switch triggers are interpreted as a single trigger (see the sketch below).
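That last option is a classic debounce. As a generic illustration of the idea (this is not Android’s actual implementation, just the concept), it might look like:

    # Generic debounce: triggers arriving within `window` seconds of the
    # last accepted trigger are folded into it and ignored.
    import time
    from typing import Optional

    class Debouncer:
        def __init__(self, window: float = 0.5):
            self.window = window
            self._last = float("-inf")

        def accept(self, now: Optional[float] = None) -> bool:
            now = time.monotonic() if now is None else now
            if now - self._last < self.window:
                return False   # too soon: treated as the same trigger
            self._last = now
            return True        # far enough apart: a new, distinct trigger

    d = Debouncer(window=0.5)
    print(d.accept(0.0))  # True  - first trigger counts
    print(d.accept(0.2))  # False - ignored as a repeat
    print(d.accept(0.9))  # True  - counts as a new trigger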

How to turn off facial gestures (Camera Switches)

If you find that controlling your phone with facial gestures just isn’t for you, don’t worry; it’s easy to turn off the feature. Just head back to Settings > Accessibility > Switch Access, then choose Settings. Tap Camera Switch gestures, then tap the slider next to Use Camera Switches. That will disable the whole feature, while saving your setup. If you want to reenable the feature, just return to this page at any time, and tap the toggle again.

Source: How to Control Your Android With Just Your Facial Expressions

Stop using Zoom, Hamburg’s DPA warns state government – The US does not safeguard EU citizen data

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR) since user data is transferred to the U.S. for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the U.S. (Privacy Shield), finding U.S. surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of U.S.-based digital services because of the data transfer issue, in some instances publicly warning against the use of mainstream U.S. tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from U.S. giants Amazon and Microsoft over the same data transfer concern.

[…]

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing, on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA has taken the step of issuing a formal warning, under Article 58(2)(a) of the GDPR.

[…]

Source: Stop using Zoom, Hamburg’s DPA warns state government | TechCrunch

How to Limit Spotify From Tracking You, Because It Knows Too Much – and sells it

Most Spotify users are likely aware the streaming service tracks their listening activity, search history, playlists, and the songs they like or skip—that’s all part of helping the algorithm figure out what you like, right? However, some users may be less OK with how much other data Spotify and its partners are logging.

According to Spotify’s privacy policy, the company tracks:

  • Your name
  • Email address
  • Phone number
  • Date of birth
  • Gender
  • Street address, country, and other GPS location data
  • Login info
  • Billing info
  • Website cookies
  • IP address
  • Facebook user ID, login information, likes, and other data
  • Device information like accelerometer or gyroscope data, operating system, model, browser, and even some data from other devices on your wifi network

This information helps Spotify tailor song and artist recommendations to your tastes and is used to improve the in-app user experience, sure. However, the company also uses it to attract advertising partners, who can create personalized ads based on your information. And that doesn’t even touch on the third-party cross-site trackers that are eagerly eyeing your Spotify activity too.

Treating people and their data like a consumable resource is scummy, but it’s common practice for most companies and websites these days, and the common response from the general public is typically a shrug (never mind that a survey of US adults revealed we place a high value on our personal data). However, it’s still a security risk. As we’ve seen repeatedly over the years, all it takes is one poorly-secured server or an unusually skilled hacker to compromise the personal data that companies like Spotify hold onto.

And to top things off, almost all of your Spotify profile’s information is public by default—so anyone else with a Spotify account can easily look you up unless you go out of your way to change your settings.

Luckily, you can limit some of the data Spotify and connected third-party apps collect, and can review the personal information the app has stored. Spotify doesn’t offer that many data privacy options, and many of them are spread out across its web, desktop, and mobile apps, but we’ll show you where to find them all and which ones you should enable for the most private Spotify listening experience possible. You know, relatively.

How to change your Spotify account’s privacy settings

The web player is where to start if you want to tune up your Spotify privacy. Almost all of Spotify’s data privacy settings are found there, rather than in the mobile or desktop apps.

We’ll start by cutting down on how much personal data you share with Spotify.

  1. Log in to Spotify’s web player on desktop.
  2. Click your user icon then go to Account > Edit profile.
  3. Remove or edit any personal info that you’re able to.
  4. Uncheck “Share my registration data with Spotify’s content providers for marketing purposes.”
  5. Click “Save Changes.”

Next, let’s limit how Spotify uses your personal data for advertising.

  1. Go to Account > Privacy settings.
  2. Turn off “Process my personal data for tailored ads.” Note that you’ll still get just as many ads—and Spotify will still track you—but your personal data will no longer be used to deliver you targeted ads.
  3. Turn off “Process my Facebook data.” This will stop Spotify from using your Facebook account data to further refine the ads you hear.

Lastly, go to Account > Apps to review all the external apps linked to your Spotify account and see a list of all devices you’re logged in to. Remove any you don’t need or use anymore.

How to review your Spotify account data

You can also see how much of your personal data Spotify has collected. At the bottom of the Privacy Settings page, there’s an option to download your Spotify data for review. While you can’t remove this data from your account, it shows you a selection of personal information, your listening and search history, and other data the company has collected. Click “Request” to begin the process. Note that it can take up to 30 days for Spotify to get your data ready for download.
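If you’d rather inspect some of this programmatically, Spotify’s public Web API exposes a slice of the same profile data. A minimal sketch, assuming you’ve created an app on Spotify’s developer dashboard and obtained an OAuth access token (the token below is a placeholder):

    # Minimal sketch: fetch your Spotify profile via the Web API.
    # Requires an OAuth access token; the user-read-private and
    # user-read-email scopes expose the most profile fields.
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN_HERE"  # placeholder, not a real token

    resp = requests.get(
        "https://api.spotify.com/v1/me",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    profile = resp.json()
    for field in ("id", "display_name", "email", "country", "product"):
        print(f"{field}: {profile.get(field)}")

That’s only the account profile, of course; the downloadable archive described above is far more complete.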

How to hide public playlists and listening activity on Spotify

Your Spotify playlists and listening activity are public by default, but you can quickly turn them off or even block certain listening activity in Spotify’s web and desktop apps. While this doesn’t affect Spotify’s data tracking, it’s still a good idea to keep some info hidden if you’re trying to make Spotify as private as possible.

How to turn off Spotify listening activity

Desktop

  1. Click your profile image and go to Settings > Social.
  2. Turn off “Make my new playlists public.”
  3. Turn off “Share my listening activity on Spotify.”

Mobile

  1. Tap the settings icon in the upper-right of the app.
  2. Scroll down to “Social.”
  3. Disable “Listening Activity.”

How to hide Spotify playlists

Don’t forget to hide previously created playlists, which are made public by default. This can be done from the desktop, web, and mobile apps, or scripted in bulk (see the sketch after the steps below).

Mobile

  1. Open the “Your Library” tab.
  2. Select a playlist.
  3. Tap the three-dot icon in the upper-right of the screen.
  4. Select “Make Secret.”

Desktop app and web player

  1. Open a playlist from the library bar on the left.
  2. Click the three-dot icon by the Playlist’s name.
  3. Select “Make Secret.”
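Flipping playlists one at a time gets tedious if you have dozens. The same Web API can do it in bulk. Here’s a hedged sketch, under the same placeholder-token caveat as above, assuming a token with the playlist-modify-public and playlist-modify-private scopes:

    # Minimal sketch: set every playlist you own to private in one go.
    # Requires an OAuth token with the playlist-modify-public and
    # playlist-modify-private scopes; the token below is a placeholder.
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN_HERE"
    HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    API = "https://api.spotify.com/v1"

    my_id = requests.get(f"{API}/me", headers=HEADERS, timeout=10).json()["id"]

    url = f"{API}/me/playlists?limit=50"
    while url:
        page = requests.get(url, headers=HEADERS, timeout=10).json()
        for pl in page["items"]:
            # Only playlists you own can be flipped; skip ones you follow.
            if pl["owner"]["id"] == my_id and pl["public"]:
                requests.put(
                    f"{API}/playlists/{pl['id']}",
                    headers=HEADERS,
                    json={"public": False},
                    timeout=10,
                ).raise_for_status()
                print(f"Made private: {pl['name']}")
        url = page.get("next")  # follow pagination until exhausted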

How to use Private Listening mode on Spotify

Spotify’s Private Listening mode also hides your listening activity, but you need to enable it manually each time you want to use it.

Mobile

  1. In the app, go to Settings > Social.
  2. Tap “Enable private session.”

Desktop app and web player

There are three ways to enable a Private session on desktop:

  • Click your profile picture then select “Private session.”
  • Or, click the “…” icon in the upper-left and go to File > Private session.
  • Or, go to Settings > Social and toggle “Start a private session to listen anonymously.”

Note that Private sessions only affect what other users see (or don’t see, rather). They don’t stop Spotify from tracking your activity, though as Wired points out, Spotify’s Privacy Policy vaguely implies Private Mode “may not influence” your recommendations, so it’s possible some data isn’t tracked while this mode is turned on. If you want to change how Spotify collects data, the privacy controls outlined in the sections above are the better tool.

How to limit third-party cookie tracking in Spotify

Turning on the privacy settings above will help reduce how much data Spotify tracks and uses for advertising and keep some of your Spotify listening history hidden from other users, but you should also take steps to limit how other apps and websites track your Spotify activity.


The desktop app has built-in cookie blocking controls that can do this:

  1. In the desktop app, click your username in the top right corner.
  2. Go to Settings > Show advanced settings.
  3. Scroll down to “Privacy” and turn on “Block all cookies for this installation of the Spotify desktop app.”
  4. Close and restart the app for the change to take effect.

If you’re on iOS or iPadOS, you can disable app tracking in your device’s settings. Android users have a similar option, though it’s not as aggressive. And if you listen on the Spotify web player, use a browser with strict privacy controls, such as Safari, Firefox, or Brave.

The last resort: Delete your Spotify account

Even with all possible privacy settings turned on and Private Listening sessions enabled at all times, Spotify is still tracking your data. If that is absolutely unacceptable to you, the only real option is to delete your account. This will remove all your Spotify data for good—just make sure you download and back up any data you want to import to other services before you go through with it.

  1. Go to the Contact Spotify Support web page and sign in with your Spotify account.
  2. Select the “Account” section.
  3. Click “I want to close my account” from the list of options.
  4. Scroll down to the bottom of the page and click “Close Account.”
  5. Follow the on-screen prompts, clicking “Continue” each time to move forward.
  6. After the final confirmation, Spotify will send you an email with the cancellation link. Click the “Close My Account” button to verify you want to delete your account (this link is only active for 24 hours).

To be clear, we’re not advocating everyone go out and delete their Spotify accounts over the company’s privacy policy and advertising practices, but it’s always important to know how—and why—the apps and websites we use are tracking us. As we said at the top, even companies with the best intentions can fumble your data, unwittingly delivering it into the wrong hands.

Even if you’re cool with Spotify tracking you and don’t feel like enabling the options we’ve outlined in this guide, take a moment to tune up your account’s privacy with a strong password and two-factor sign-in, and remove any unnecessary info from your profile. These extra steps will help keep you safe if there’s ever an unexpected security breach.

Source: How to Limit Spotify From Tracking You, Because It Knows Too Much

China orders annual security reviews for all critical information infrastructure operators

An announcement by the Cyberspace Administration of China (CAC) said that cyber attacks are currently frequent in the Middle Kingdom, and that the security challenges facing critical information infrastructure are severe. The announcement therefore defines infosec regulations and responsibilities.

The CAC referred to critical infrastructure as “the nerve center of economic and social operations and the top priority of network security”. China’s definition of critical information infrastructure can be found in Article 2 of the State Council’s “Regulations on the Security Protection of Critical Information Infrastructure” and boils down to any system that could suffer significant damage from a cyber attack, and/or have such an attack damage society at large or even national security.

“The regulations clarify that important network facilities and information systems in key industries and fields belong to critical information infrastructure,” wrote the CAC in its announcement (as translated from Mandarin), adding that the state was adopting measures to monitor, defend and handle network risks and intrusions, originating domestically and globally.

The regulations themselves are lengthy and detailed, but the theme is that all Chinese enterprises whose operations depend on networks must conduct annual security reviews, report breaches to the government, and establish teams to monitor security constantly.

Those teams are to develop emergency plans and carry out emergency drills on a regular basis, in accordance with national disaster management plans.

If an incident is ever discovered, reporting and escalation to national authorities is mandatory.

The lengthy document also details a variety of organizational and logistical “clarifications”, while also outlining the state’s ability to adjust identification rules dynamically, how safeguarding measures can be implemented, and legal responsibilities and penalties for negligent parties.

[…]

Source: China orders annual security reviews for all critical information infrastructure operators • The Register

This sounds sensible. The Dutch NCSC has guidelines and an audit checklist recommending the same, but they are not mandatory anywhere, and very few companies actually use the monster checklist, let alone implement it. Nowadays, that kind of neglect is not really acceptable behaviour any more.