Your Credit Score Should Be Based On Your Web History, IMF Says

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft data into credit analysis. And they do little to explain how this might work in practice.
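In spirit, any such model boils down to a probability-of-repayment score with extra features bolted on. Here is a toy sketch of the idea — the feature names and weights are entirely made up for illustration and have nothing to do with the actual paper’s methodology:

```python
import math

# Hypothetical weights, for illustration only. Positive weights nudge the
# predicted repayment probability up; negative weights push it down.
HARD_WEIGHTS = {"debt_to_income": -2.0, "on_time_payment_rate": 3.0}
SOFT_WEIGHTS = {"years_of_search_history": 0.1, "desktop_browser": 0.3}

def repayment_probability(features: dict) -> float:
    """Logistic score over hard credit data plus 'soft' web-history signals."""
    z = -1.0  # arbitrary intercept
    for name, weight in {**HARD_WEIGHTS, **SOFT_WEIGHTS}.items():
        z += weight * features.get(name, 0.0)
    return 1 / (1 + math.exp(-z))

borrower = {
    "debt_to_income": 0.4,
    "on_time_payment_rate": 0.95,
    "years_of_search_history": 5,
    "desktop_browser": 1,
}
score = repayment_probability(borrower)
```

The point the toy makes concrete: with soft features present, the same borrower gets a higher score than on hard data alone — and exactly why is buried in opaque weights, which is the black-box problem in miniature.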

Source: Your Credit Score Should Be Based On Your Web History, IMF Says – Slashdot

So now the banks want your browsing history. They don’t want to miss out on the surveillance economy.

How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives – disable photo backups. No alternative offered, sorry.

Photos that are sent in messaging apps like WhatsApp or Telegram aren’t scanned by Apple. Still, if you don’t want Apple to do this scanning at all, your only option is to disable iCloud Photos. To do that, open the “Settings” app on your iPhone or iPad, go to the “Photos” section, and disable the “iCloud Photos” feature. From the popup, choose the “Download Photos & Videos” option to download the photos from your iCloud Photos library.

Screenshot: Khamosh Pathak

You can also use the iCloud website to download all photos to your computer. Your iPhone will then stop uploading new photos to iCloud, and Apple won’t scan any of your photos.

Looking for an alternative? There really isn’t one. All major cloud-backup providers have the same scanning feature; the difference is that they do it entirely in the cloud, while Apple uses a mix of on-device and cloud scanning. If you don’t want this kind of photo scanning, use local backups, a NAS, or a backup service that is completely end-to-end encrypted.

Source: How to Stop Apple From Scanning Your iPhone Photos Before iOS 15 Arrives

OK, so you stole $600m-plus from us, how about you be our Chief Security Advisor, Poly Network asks thief

The mysterious thief who stole $600m-plus in cryptocurrencies from Poly Network has been offered the role of Chief Security Advisor at the Chinese blockchain biz.

It’s been a rollercoaster ride lately for Poly Network. The outfit builds software that handles the exchange of cryptocurrencies and other assets between various blockchains. Last week, it confirmed a miscreant had drained hundreds of millions of dollars in digital tokens from its platform by exploiting a security weakness in its design.

After Poly Network urged netizens, crypto exchanges, and miners to reject transactions involving the thief’s wallet addresses, the crook started giving the digital money back – and at least $260m of tokens have been returned. The company said it has maintained communication with the miscreant, who is referred to as Mr White Hat.

“It is important to reiterate that Poly Network has no intention of holding Mr White Hat legally responsible, as we are confident that Mr White Hat will promptly return full control of the assets to Poly Network and its users,” the organization said.

“While there were certain misunderstandings in the beginning due to poor communication channels, we now understand Mr White Hat’s vision for Defi and the crypto world, which is in line with Poly Network’s ambitions from the very beginning — to provide interoperability for ledgers in Web 3.0.”

First, Poly Network offered him $500,000 in Ethereum as a bug bounty award. He said he wasn’t going to accept the money, though the reward was transferred to his wallet anyway. Now, the company has gone one step further and has offered him the position of Chief Security Advisor.

“We are counting on more experts like Mr White Hat to be involved in the future development of Poly Network since we believe that we share the vision to build a secure and robust distributed system,” it said in a statement. “Also, to extend our thanks and encourage Mr White Hat to continue contributing to security advancement in the blockchain world together with Poly Network, we cordially invite Mr White Hat to be the Chief Security Advisor of Poly Network.”

It’s unclear whether so-called Mr White Hat will accept the job offer or not. Judging by the messages embedded in Ethereum transactions exchanged between both parties, it doesn’t look likely at the moment. He still hasn’t returned $238m, to the best of our knowledge, and said he isn’t ready to hand over the keys to the wallet where the funds are stored. He previously claimed he had attacked Poly Network for fun and to highlight the vulnerability in its programming.

“Dear Poly, glad to see that you are moving things to the right direction! Your essays are very convincing while your actions are showing your distrust, what a funny game…I am not ready to publish the key in this week…,” according to one message he sent.

Source: OK, so you stole $600m-plus from us, how about you be our Chief Security Advisor, Poly Network asks thief • The Register

Zoom to pay $85M for lying about encryption and sending data to Facebook and Google

Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems that led to rampant “Zoombombings.”

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

As we wrote in November, the FTC said that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers. In reality, “Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC said. In real end-to-end encryption, only the users themselves have access to the keys needed to decrypt content.

[…]

Source: Zoom to pay $85M for lying about encryption and sending data to Facebook and Google | Ars Technica

>83 million Web Cams, Baby Monitor Feeds and other IoT devices using Kalay backend Exposed

A vulnerability is lurking in numerous types of smart devices—including security cameras, DVRs, and even baby monitors—that could allow an attacker to access live video and audio streams over the internet and even take full control of the gadgets remotely. What’s worse, it’s not limited to a single manufacturer; it shows up in a software development kit that permeates more than 83 million devices and over a billion connections to the internet each month.

The SDK in question is ThroughTek Kalay, which provides a plug-and-play system for connecting smart devices with their corresponding mobile apps. The Kalay platform brokers the connection between a device and its app, handles authentication, and sends commands and data back and forth. For example, Kalay offers built-in functionality to coordinate between a security camera and an app that can remotely control the camera angle. Researchers from the security firm Mandiant discovered the critical bug at the end of 2020, and they are publicly disclosing it today in conjunction with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency.

“You build Kalay in, and it’s the glue and functionality that these smart devices need,” says Jake Valletta, a director at Mandiant. “An attacker could connect to a device at will, retrieve audio and video, and use the remote API to then do things like trigger a firmware update, change the panning angle of a camera, or reboot the device. And the user doesn’t know that anything is wrong.”

The flaw is in the registration mechanism between devices and their mobile applications. The researchers found that this most basic connection hinges on each device’s “UID,” a unique Kalay identifier. An attacker who learns a device’s UID—which Valletta says could be obtained through a social engineering attack, or by searching for web vulnerabilities of a given manufacturer—and who has some knowledge of the Kalay protocol can reregister the UID and essentially hijack the connection the next time someone attempts to legitimately access the target device. The user will experience a few seconds of lag, but then everything proceeds normally from their perspective.

The attacker, though, can grab special credentials—typically a random, unique username and password—that each manufacturer sets for its devices. With the UID plus this login the attacker can then control the device remotely through Kalay without any other hacking or manipulation. Attackers can also potentially use full control of an embedded device like an IP camera as a jumping-off point to burrow deeper into a target’s network.

By exploiting the flaw, an attacker could watch video feeds in real time, potentially viewing sensitive security footage or peeking inside a baby’s crib. They could launch a denial of service attack against cameras or other gadgets by shutting them down. Or they could install malicious firmware on target devices. Additionally, since the attack works by grabbing credentials and then using Kalay as intended to remotely manage embedded devices, victims wouldn’t be able to oust intruders by wiping or resetting their equipment. Hackers could simply relaunch the attack.

“The affected ThroughTek P2P products may be vulnerable to improper access controls,” CISA wrote in its Tuesday advisory. “This vulnerability can allow an attacker to access sensitive information (such as camera feeds) or perform remote code execution. … CISA recommends users take defensive measures to minimize the risk of exploitation of this vulnerability.”

[…]

To defend against exploitation, devices need to be running Kalay version 3.1.10, originally released by ThroughTek in late 2018, or higher. But even upgrading to the current Kalay SDK version does not automatically fix the vulnerability. Instead, ThroughTek and Mandiant say that to plug the hole manufacturers must turn on two optional Kalay features: the encrypted communication protocol DTLS and the API authentication mechanism AuthKey.

[…]

“For the past three years, we have been informing our customers to upgrade their SDK,” ThroughTek’s Chen says. “Some old devices lack OTA [over the air update] function which makes the upgrade impossible. In addition, we have customers who don’t want to enable the DTLS because it would slow down the connection establishment speed, therefore are hesitant to upgrade.”

[…]

Source: Millions of Web Camera and Baby Monitor Feeds Are Exposed | WIRED

TCP Firewalls and middleboxes can be weaponized for gigantic DDoS attacks

Authored by computer scientists from the University of Maryland and the University of Colorado Boulder, the research is the first of its kind to describe a method to carry out DDoS reflective amplification attacks via the TCP protocol, previously thought to be unusable for such operations.

Making matters worse, researchers said the amplification factor for these TCP-based attacks is also far larger than that of UDP protocols, making TCP protocol abuse one of the most dangerous forms of carrying out a DDoS attack known to date and very likely to be abused in the future.

[…]

The technique is known as a “DDoS reflective amplification attack.”

This happens when an attacker sends network packets to a third-party server on the internet; the server processes them and creates a much larger response packet, which it then sends to a victim instead of the attacker (thanks to a technique known as IP spoofing).

The technique effectively allows attackers to reflect/bounce and amplify traffic towards a victim via an intermediary point.
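The attacker’s payoff here is simply the ratio of reflected bytes to sent bytes. A minimal sketch — the packet sizes below are illustrative numbers, not figures from the paper:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification: bytes arriving at the victim per byte the attacker sends."""
    return response_bytes / request_bytes

# Illustrative sizes (assumed): a small forged TCP probe vs. a typical HTML block page.
probe = 60          # bytes in the spoofed packet sequence
block_page = 6_000  # bytes in the middlebox's block-page reply
factor = amplification_factor(probe, block_page)  # 100.0
```

A 100× factor means every megabit the attacker can send becomes 100 megabits aimed at the victim, which is why the researchers’ reported factors in the thousands are so alarming.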

[…]

The flaw they found was in the design of middleboxes, which are equipment installed inside large organizations that inspect network traffic.

Middleboxes usually include the likes of firewalls, network address translators (NATs), load balancers, and deep packet inspection (DPI) systems.

The research team said they found that instead of trying to replicate the entire three-way handshake in a TCP connection, they could send a combination of non-standard packet sequences to the middlebox that would trick it into thinking the TCP handshake has finished and allow it to process the connection.

[…]

Under normal circumstances, this wouldn’t be an issue, but if the attacker tried to access a forbidden website, then the middlebox would respond with a “block page,” which would typically be much larger than the initial packet—hence an amplification effect.

Following extensive experiments that began last year, the research team said that the best TCP DDoS vectors appeared to be websites typically blocked by nation-state censorship systems or by enterprise policies.

Attackers would send a malformed sequence of TCP packets, appearing to request a pornography or gambling site, to a middlebox (firewall, DPI box, etc.), and the middlebox would reply with an HTML block page delivered—thanks to IP spoofing—to victims who don’t even reside on its internal network.

[…]

Bock said the research team scanned the entire IPv4 internet address space 35 different times to discover and index middleboxes that would amplify TCP DDoS attacks.

In total, the team said they found 200 million IPv4 addresses corresponding to networking middleboxes that could be abused for attacks.

Most UDP protocols typically have an amplification factor of between 2 and 10, with very few protocols sometimes reaching 100 or more.

“We found hundreds of thousands of IP addresses that offer [TCP] amplification factors greater than 100×,” Bock and his team said, highlighting how a very large number of networking middleboxes could be abused for DDoS attacks far larger than the UDP protocols with the best amplification factors known to date.

Furthermore, the research team also found thousands of IP addresses with amplification factors in the thousands and even up to 100,000,000, a figure previously thought inconceivable for such attacks.

[…]

Bock told The Record they contacted several country-level Computer Emergency Readiness Teams (CERT) to coordinate the disclosure of their findings, including CERT teams in China, Egypt, India, Iran, Oman, Qatar, Russia, Saudi Arabia, South Korea, the United Arab Emirates, and the United States, where most censorship systems or middlebox vendors are based.

The team also notified companies in the DDoS mitigation field, which are most likely to see and have to deal with these attacks in the immediate future.

“We also reached out to several middlebox vendors and manufacturers, including Check Point, Cisco, F5, Fortinet, Juniper, Netscout, Palo Alto, SonicWall, and Sucuri,” the team said.

[…]

The research team also plans to release scripts and tools that network administrators can use to test their firewalls, DPI boxes, and other middleboxes and see if their devices are contributing to this problem. These tools will be available later today via this GitHub repository.

[…]

Additional technical details are available in a research paper titled “Weaponizing Middleboxes for TCP Reflected Amplification” [PDF]. The paper was presented today at the USENIX security conference, where it also received the Distinguished Paper Award.

Source: Firewalls and middleboxes can be weaponized for gigantic DDoS attacks – The Record by Recorded Future

The Humanity Globe: World Population Density per 30km^2

This visualization was created in **R** using the **rayrender** and **rayshader** packages to render the 3D image, and **ffmpeg** to combine the images into a video and add text. You can see close-ups of 6 continents in the following tweet thread:

https://twitter.com/tylermorganwall/status/1427642504082599942

The data source is the GPW-v4 population density dataset, at 15 arc-minute (~30 km) increments:

Data:

https://sedac.ciesin.columbia.edu/data/collection/gpw-v4

Rayshader:

http://www.github.com/tylermorganwall/rayshader

Rayrender:

http://www.github.com/tylermorganwall/rayrender

Here’s a link to the R code used to generate the visualization:

https://gist.github.com/tylermorganwall/3ee1c6e2a5dff19aca7836c05cbbf9ac

Source: The Humanity Globe: World Population Density per 30km^2 [OC] – Reddit Swaglett

Posted in Art

Game Dev Turns Down $500k Exploitative Contract, explains why – looks like music industry contracts

Receiving a publishing deal from an indie publisher can be a turning point for an independent developer. But when one-man team Jakefriend was approached with an offer to invest half a million Canadian dollars into his hand-drawn action-adventure game Scrabdackle, he discovered the contract’s terms could see him signing himself into a lifetime of debt, losing all rights to his game, and even paying for it to be completed by others out of his own money.

In a lengthy thread on Twitter, indie developer Jakefriend explained the reasons he had turned down the half-million publishing deal for his Kickstarter-funded project, Scrabdackle. Already having raised CA$44,552 from crowdfunding, the investment could have seen his game released in multiple languages, with full QA testing, and launched simultaneously on PC and Switch. He just had to sign a contract including clauses that could leave him financially responsible for the game’s completion, while receiving no revenue at all, should he breach its terms.

“I turned down a pretty big publishing contract today for about half a million in total investment,” begins Jake’s thread. Without identifying the publisher, he continues, “They genuinely wanted to work with me, but couldn’t see what was exploitative about the terms. I’m not under an NDA, wanna talk about it?”

Over the following 24 tweets, the developer lays out the key issues with the contract, most especially focusing on the proposed revenue share. While the unnamed publisher would eventually offer a 50:50 split of revenues (albeit minus up to 10% for other sundry costs, including—very weirdly—international sales taxes), this wouldn’t happen until 50% of the marketing spend (approximately CA$200,000/US$159,000) and the entirety of his development funds (CA$65,000 Jake confirms to me via Discord) was recouped by sales. That works out to about 24,000 copies of the game, before which its developer would receive precisely 0% of revenue.
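The recoupment arithmetic checks out with a quick back-of-the-envelope calculation. The per-copy figure below is my assumption — the thread doesn’t state one — chosen as roughly what would reach the recoupment pool after storefront cuts:

```python
import math

# Figures from the thread; net revenue per copy is an assumption.
marketing_recoup = 200_000   # CA$, 50% of the marketing spend
dev_funds = 65_000           # CA$, development funds to be recouped first
net_per_copy = 11.0          # CA$ per copy reaching recoupment (assumed)

breakeven_copies = math.ceil((marketing_recoup + dev_funds) / net_per_copy)
# roughly 24,000 copies sold before the developer sees a single dollar
```

At a lower net-per-copy figure (deeper launch discounts, bigger store cuts), the breakeven point climbs well past 24,000 copies.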

Even then, Scrabdackle’s lone developer explains, the contract made clear there would be no payments until a further 30 days after the end of the next quarter, with a further clause that allowed yet another three month delay beyond that. All this with no legal requirement to show him their financial records.

Should Jake want to challenge the sales data for the game, he’d be required to call for an audit, which he’d have to pay for whether there were issues or not. And should it turn out that there were discrepancies, there’d be no financial penalty for the publisher, merely the requirement to pay the missing amount—which he would have to hope would be enough to cover paying for the audit in the first place.

Another section of the contract explained that should there be disagreement about the direction of the game, the publisher could overrule and bring in a third-party developer to make the changes Jake would not, at Jake’s personal expense. With no spending limit on that figure.

But perhaps most surprising was a section declaring that should the developer be found in breach of the contract—something Jake explains is too ambiguously defined—then they would lose all rights to their game, receive no revenue from its sales, have to repay all the money they received, and pay for all further development costs to see the game completed. And here again there was no upper limit on what those costs could be.

It might seem obvious that no one should ever sign a contract containing clauses just so ridiculous. To be liable—at the publisher’s whim—for unlimited costs to complete a game while also required to pay back all funds (likely already spent), for no income from the game’s sales… Who would ever agree to such a thing? Well, as Jake tells me via Discord, an awful lot of independent developers, desperate for some financial support to finish their project. The contract described in his tweets might sound egregious, but the reality is that most such contracts offer some kind of awful term(s) for indie game devs.

“My close indie dev friends discuss what we’re able to of contracts frequently,” he says, “and the only thing surprising to them about mine is that it hit all the typical red flags instead of typically most of them. We’re all extremely fatigued and disheartened by how mundane an unjust contract offer is. It’s unfair and it’s tiring.”

Jake makes it clear that he doesn’t believe the people who contacted him were being maliciously predatory, but rather they were simply too used to the shitty terms. “I felt genuinely no sense of wanting to give me a bad deal with the scouts and producers I was speaking to, but I have to assume they are aware of the problems and are just used to that being the norm as well.”

Since posting the thread, Jake tells me he’s heard from a lot of other developers who described the terms to which he objected as, “sadly all-too-familiar.” At one point creator of The Witness, Jonathan Blow, replied to the thread saying, “I can guess who the publisher is because I have seen equivalent contracts.” Except Jake’s fairly certain he’d be wrong.

“The problem is so widespread,” Jake explains, “that when you describe the worst of terms, everyone thinks they know who it is and everyone has a different guess.”

While putting this piece together, I reached out to boutique indie publisher Mike Rose of No More Robots, to see if he had seen anything similar, and indeed who he thought the publisher might be. “Honestly, it could be anyone,” he replied via Discord. “What [Jake] described is very much the norm. All of the big publishers you like, his description is all of their contracts.”

This is very much a point that Jake wants to make clear. In fact, it’s why he didn’t identify the publisher in his thread. Rather than to spare their blushes, or harm his future opportunities, Jake explains that he did it to ensure his experience couldn’t be taken advantage of by other indie publishers. “I don’t want to let others with equally bad practices off the hook,” he tells me. “As soon as I say ‘It was SoAndSo Publishing’, everyone else can say, ‘Wow, can’t believe it, glad we’re not like that,’ and have deniability.”

I also reached out to a few of the larger indie publishers, listing the main points of contention in Jake’s thread, to see if they had any comments. The only company that replied by the time of publication was Devolver. I was told,

“Publishing contracts have dozens of variables involved and a developer should rightfully decline points and clauses that make them feel uncomfortable or taken advantage of in what should be an equitable relationship with their partner—publisher, investor, or otherwise. Rev share and recoupment in particular should be weighed on factors like investment, risk, and opportunity for both parties and ultimately land on something where everyone feels like they are receiving a fair shake on what was put forth on the project. While I have not seen the full contract and context, most of the bullet points you placed here aren’t standard practice for our team.”

Where does this leave Jake and the future of Scrabdackle? “The Kickstarter funds only barely pay my costs for the next 10 months,” he tells Kotaku. “So there’s no Switch port or marketing budget to speak of. Nonetheless, I feel more motivated than ever going it alone.”

I asked if he would still consider a more reasonable publishing deal at this point. “This was a hobby project that only became something more when popular demand from an incredible and large community rallied for me to build a crowdfunding campaign…A publisher can offer a lot to an indie project, and a good deal is the difference between gamedev being a year-long stint or a long-term career for me, but that’s not worth the pound of flesh I was asked for.”

Source: Game Dev Turns Down Half Million Dollar Exploitative Contract

For the music industry:

Source: Courtney Love does the math

Source: How much do musicians really make from Spotify, iTunes and YouTube?

Source: How Musicians Make Money — Or Don’t at All — in 2018

Source: Kanye’s Contracts Reveal Dark Truths About the Music Industry

Source: Smiles and tears when “slave contract” controls the lives of K-Pop artists.

Source: Youtube’s support for musicians comes with a catch

How to Control Your Android With Just Your Facial Expressions

Android is implementing this option as part of the accessibility feature, Switch Access. Switch Access adds a blue selection window to your display, and lets you use external switches, a keyboard, or the buttons on your Android to move that selection window through the many different items on your screen until you land on the one you want to select.

The big update to Switch Access is to make facial gestures the triggers that move the selection window across your screen. This new feature is part of Android Accessibility Suite’s 12.0.0 beta, which arrives packed into the latest Android 12 beta (beta 4, to be exact). If you aren’t running the beta on your Android device, you won’t be able to take advantage of this cool new feature until Google seeds Android 12 to the general public.

If you want to try it out right now, however, you can simply enroll your device in the Android 12 beta program, then download and install the work-in-progress software to your phone. Follow along on our walkthrough here to set yourself up.

How to set up facial gestures on Android 12

To get started on a device running Android 12 beta 4, head over to Settings > Accessibility > Switch Access, then tap the toggle next to Use Switch Access. You’ll need to grant the feature full control over your device, which involves viewing and controlling the screen, as well as viewing and performing actions. Tap Allow to confirm.

The first time you do this, Android will automatically open the Switch Access setup guide. Here, tap Camera Switch, then tap Next. On the following page, choose between one switch or two switches, the latter of which Android recommends. With one switch, you use the same gesture to begin highlighting items on screen that you do to select a particular item. With two switches, you set one gesture to start highlighting, and a separate one to select.

Screenshot: Jake Peterson

We’re going to demonstrate the instructions for choosing Two switches. On the following page, choose how you’d like Android to scan through a particular page of options:

  • Linear scanning (except keyboard): Move between items one at a time. If you’re using a keyboard, however, it will scan by row.
  • Row-column scanning: Scan one row at a time. After the row is selected, move through items in that list.
  • Group selection (advanced): All items will be assigned a color. You perform a face gesture corresponding to the color of the item you want to select. Narrow down the size of the group until you reach your choice.

We’ll choose Linear scanning for this walkthrough. Once you make your selection, choose Next, then choose a gesture to assign to the action Next (which is what tells the blue selection window to move through the screen). You can choose from Open Mouth, Smile, Raise Eyebrows, Look Left, Look Right, and Look Up, and can assign as many of these gestures as you want to the one action. Just know that when you assign a gesture to an action, you won’t be able to use it with another action. When finished, tap Next.


Now, choose a gesture for the action Select (which selects an item that the blue selection window is hovering over). You can choose from the same list as before, barring any gestures you assigned to Next. Once you make your choice, you can actually start using these gestures to continue, since you can use your first gesture to move through the options, and your second gesture to select.

Finally, choose a gesture to pause or unpause camera switches. You don’t need to use this feature, but Android recommends you do. Pick your gesture or gestures, then choose Next. Once you do, the setup is done and you can now use your facial gestures to move around Android.

Other face gesture settings and options

Once you finish your setup, you’ll find some additional settings you can go through. Under Face Gesture Settings, you’ll find all the gesture options, as well as their assigned actions. Tap on one to test it out, adjust the gesture size, set the gesture duration, and edit the assignment for the gesture.


Beneath Additional settings for Camera Switches, you’ll find four more options to choose from:

  • Enhanced visual feedback: Show a visual indication of how long you have held a gesture.
  • Enhanced audio feedback: Play a sound when something on the screen changes in response to a gesture.
  • Keep screen on: Keep the screen on while Camera Switches is enabled. Camera Switches cannot unlock the screen if it turns off.
  • Ignore repeated Camera Switch triggers: You can choose a duration of time where the system will interpret multiple Camera Switch triggers as one trigger.

How to turn off facial gestures (Camera Switches)

If you find that controlling your phone with facial gestures just isn’t for you, don’t worry; it’s easy to turn off the feature. Just head back to Settings > Accessibility > Switch Access, then choose Settings. Tap Camera Switch gestures, then tap the slider next to Use Camera Switches. That will disable the whole feature, while saving your setup. If you want to reenable the feature, just return to this page at any time, and tap the toggle again.

Source: How to Control Your Android With Just Your Facial Expressions

Stop using Zoom, Hamburg’s DPA warns state government – The US does not safeguard EU citizen data

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR), since user data is transferred to the U.S. for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the U.S. (Privacy Shield), finding U.S. surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of U.S.-based digital services because of the data transfer issue, in some instances publicly warning against the use of mainstream U.S. tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from U.S. giants Amazon and Microsoft over the same data transfer concern.

[…]

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing, on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA took the step of issuing a formal warning, under Article 58(2)(a) of the GDPR.

[…]

Source: Stop using Zoom, Hamburg’s DPA warns state government | TechCrunch

How to Limit Spotify From Tracking You, Because It Knows Too Much – and sells it

Most Spotify users are likely aware the streaming service tracks their listening activity, search history, playlists, and the songs they like or skip—that’s all part of helping the algorithm figure out what you like, right? However, some users may be less OK with how much other data Spotify and its partners are logging.

According to Spotify’s privacy policy, the company tracks:

  • Your name
  • Email address
  • Phone number
  • Date of birth
  • Gender
  • Street address, country, and other GPS location data
  • Login info
  • Billing info
  • Website cookies
  • IP address
  • Facebook user ID, login information, likes, and other data.
  • Device information like accelerometer or gyroscope data, operating system, model, browser, and even some data from other devices on your wifi network.

This information helps Spotify tailor song and artist recommendations to your tastes and is used to improve the in-app user experience, sure. However, the company also uses it to attract advertising partners, who can create personalized ads based on your information. And that doesn’t even touch on the third-party cross-site trackers that are eagerly eyeing your Spotify activity too.

Treating people and their data like a consumable resource is scummy, but it’s common practice for most companies and websites these days, and the common response from the general public is typically a shrug (never mind that a survey of US adults revealed we place a high value on our personal data). However, it’s still a security risk. As we’ve seen repeatedly over the years, all it takes is one poorly-secured server or an unusually skilled hacker to compromise the personal data that companies like Spotify hold onto.

And to top things off, almost all of your Spotify profile’s information is public by default—so anyone else with a Spotify account can easily look you up unless you go out of your way to change your settings.

Luckily, you can limit some of the data Spotify and connected third-party apps collect, and can review the personal information the app has stored. Spotify doesn’t offer that many data privacy options, and many of them are spread out across its web, desktop, and mobile apps, but we’ll show you where to find them all and which ones you should enable for the most private Spotify listening experience possible. You know, relatively.

How to change your Spotify account’s privacy settings

The web player is where to start if you want to tune up your Spotify privacy. Almost all of Spotify’s data privacy settings are found there, rather than in the mobile or desktop apps.

We’ll start by cutting down on how much personal data you share with Spotify.

Image for article titled How to Limit Spotify From Tracking You, Because It Knows Too Much
Screenshot: Brendan Hesse
  1. Log in to Spotify’s web player on desktop.
  2. Click your user icon then go to Account > Edit profile.
  3. Remove or edit any personal info that you’re able to.
  4. Uncheck “Share my registration data with Spotify’s content providers for marketing purposes.”
  5. Click “Save Changes.”
Image for article titled How to Limit Spotify From Tracking You, Because It Knows Too Much
Screenshot: Brendan Hesse

Next, let’s limit how Spotify uses your personal data for advertising.

  1. Go to Account > Privacy settings.
  2. Turn off “Process my personal data for tailored ads.” Note that you’ll still get just as many ads—and Spotify will still track you—but your personal data will no longer be used to deliver you targeted ads.
  3. Turn off “Process my Facebook data.” This will stop Spotify from using your Facebook account data to further refine the ads you hear.

Lastly, go to Account > Apps to review all the external apps linked to your Spotify account and see a list of all devices you’re logged in to. Remove any you don’t need or use anymore.

How to review your Spotify account data

You can also see how much of your personal data Spotify has collected. At the bottom of the Privacy Settings page, there’s an option to download your Spotify data for review. While you can’t remove this data from your account, it shows you a selection of personal information, your listening and search history, and other data the company has collected. Click “Request” to begin the process. Note that it can take up to 30 days for Spotify to get your data ready for download.

How to hide public playlists and listening activity on Spotify

Your Spotify playlists and listening activity are public by default, but you can quickly turn them off or even block certain listening activity in Spotify’s web and desktop apps. While this doesn’t affect Spotify’s data tracking, it’s still a good idea to keep some info hidden if you’re trying to make Spotify as private as possible.

How to turn off Spotify listening activity

Desktop

Image for article titled How to Limit Spotify From Tracking You, Because It Knows Too Much
Screenshot: Brendan Hesse
  1. Click your profile image and go to Settings > Social
  2. Turn off “Make my new playlists public.”
  3. Turn off “Share my listening activity on Spotify.”

Mobile

Image for article titled How to Limit Spotify From Tracking You, Because It Knows Too Much
Screenshot: Brendan Hesse
  1. Tap the settings icon in the upper-right of the app.
  2. Scroll down to “Social.”
  3. Disable “Listening Activity.”

How to hide Spotify Playlists

Don’t forget to hide previously created playlists, which are made public by default. This can be done from the desktop, web, and mobile apps.

Mobile

  1. Open the “Your Library” tab.
  2. Select a playlist.
  3. Tap the three-dot icon in the upper-right of the screen.
  4. Select “Make Secret.”

Desktop app and web player

  1. Open a playlist from the library bar on the left.
  2. Click the three-dot icon by the Playlist’s name.
  3. Select “Make Secret.”

How to use Private Listening mode on Spotify

Spotify’s Private Listening mode also hides your listening activity, but you need to enable it manually each time you want to use it.

Mobile

  1. In the app, go to Settings > Social.
  2. Tap “Enable private session.”

Desktop app and web player

There are three ways to enable a Private session on desktop:

  • Click your profile picture then select “Private session.”
  • Or, click the “…” icon in the upper-left and go to File > Private session.
  • Or, go to Settings > Social and toggle “Start a private session to listen anonymously.”

Note that Private sessions only affect what other users see (or don’t see, rather). They don’t stop Spotify from tracking your activity—though as Wired points out, Spotify’s Privacy Policy vaguely implies Private Mode “may not influence” your recommendations, so it’s possible some data isn’t tracked while this mode is turned on. It’s better to use the privacy controls outlined in the sections above if you want to change how Spotify collects data.

How to limit third-party cookie tracking in Spotify

Turning on the privacy settings above will help reduce how much data Spotify tracks and uses for advertising and keep some of your Spotify listening history hidden from other users, but you should also take steps to limit how other apps and websites track your Spotify activity.

Image for article titled How to Limit Spotify From Tracking You, Because It Knows Too Much
Screenshot: Brendan Hesse

The desktop app has built-in cookie blocking controls that can do this:

  1. In the desktop app, click your username in the top right corner.
  2. Go to Settings > Show advanced settings.
  3. Scroll down to “Privacy” and turn on “Block all cookies for this installation of the Spotify desktop app.”
  4. Close and restart the app for the change to take effect.

iOS and iPadOS users can disable app tracking in their device’s settings. Android users have a similar option, though it’s not as aggressive. And for those listening on the Spotify web player, use a browser with strict privacy controls like Safari, Firefox, or Brave.

The last resort: Delete your Spotify account

Even with all possible privacy settings turned on and Private Listening sessions enabled at all times, Spotify is still tracking your data. If that is absolutely unacceptable to you, the only real option is to delete your account. This will remove all your Spotify data for good—just make sure you download and back up any data you want to import to other services before you go through with it.

  1. Go to the Contact Spotify Support web page and sign in with your Spotify account.
  2. Select the “Account” section.
  3. Click “I want to close my account” from the list of options.
  4. Scroll down to the bottom of the page and click “Close Account.”
  5. Follow the on-screen prompts, clicking “Continue” each time to move forward.
  6. After the final confirmation, Spotify will send you an email with the cancellation link. Click the “Close My Account” button to verify you want to delete your account (this link is only active for 24 hours).

To be clear, we’re not advocating everyone go out and delete their Spotify accounts over the company’s privacy policy and advertising practices, but it’s always important to know how—and why—the apps and websites we use are tracking us. As we said at the top, even companies with the best intentions can fumble your data, unwittingly delivering it into the wrong hands.

Even if you’re cool with Spotify tracking you and don’t feel like enabling the options we’ve outlined in this guide, take a moment to tune up your account’s privacy with a strong password and two-factor sign-in, and remove any unnecessary info from your profile. These extra steps will help keep you safe if there’s ever an unexpected security breach.

Source: How to Limit Spotify From Tracking You, Because It Knows Too Much

China orders annual security reviews for all critical information infrastructure operators

An announcement by the Cyberspace Administration of China (CAC) said that cyber attacks are currently frequent in the Middle Kingdom, and the security challenges facing critical information infrastructure are severe. The announcement therefore defines infosec regulations and responsibilities.

The CAC referred to critical infrastructure as “the nerve center of economic and social operations and the top priority of network security”. China’s definition of critical information infrastructure can be found in Article 2 of the State Council’s “Regulations on the Security Protection of Critical Information Infrastructure” and boils down to any system that could suffer significant damage from a cyber attack, and/or have such an attack damage society at large or even national security.

“The regulations clarify that important network facilities and information systems in key industries and fields belong to critical information infrastructure,” wrote the CAC in its announcement (as translated from Mandarin), adding that the state was adopting measures to monitor, defend and handle network risks and intrusions, originating domestically and globally.

The regulations themselves are lengthy and detailed, but the theme is that all Chinese enterprises whose operations depend on networks must conduct annual security reviews, report breaches to the government, and establish teams to monitor security constantly.

Those teams get to develop emergency plans and carry out emergency drills on a regular basis, in accordance with disaster management national plans.

If an incident is ever discovered, reporting and escalation to national authorities is mandatory.

The lengthy document also details a variety of organizational and logistical “clarifications”, while also outlining the state’s ability to adjust identification rules dynamically, how safeguarding measures can be implemented, and legal responsibilities and penalties for negligent parties.

[…]

Source: China orders annual security reviews for all critical information infrastructure operators • The Register

This sounds sensible. The Dutch NCSC has guidelines and an audit checklist recommending this, but it isn’t mandatory anywhere, and very few companies actually use the monster checklist, let alone implement it. Nowadays that’s not really acceptable behaviour any more.

MIT developed a low-cost prosthetic hand that can help amputees feel again

In a joint project with Shanghai Jiao Tong University, the school designed a neuroprosthetic that costs about $500 in components. It’s an inflatable hand made from an elastomer called EcoFlex and looks a bit like Baymax from Big Hero 6.

The device foregoes electric motors in favor of a pneumatic system that inflates and bends its balloon-like digits. The hand can assume various grasps that allow an amputee to subsequently do things like pet a cat, pour a carton of milk or even pick up a cupcake. The device translates how its wearer wants to use it through a software program that “decodes” the EMG signals the brain sends to an injured limb.

The prosthetic weighs about half a pound and can even restore some sense of feeling for its user. It does this with a series of pressure sensors: when the wearer touches or squeezes an object, the sensors send an electric signal to a specific position on their amputated arm. Another advantage of the arm is that it doesn’t take long to learn how to use. After about 15 minutes, two volunteers found they could write with a pen and stack checkers.

“This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” said Professor Xuanhe Zhao, one of the engineers who worked on the project. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”

[…]

Source: MIT developed a low-cost prosthetic hand that can help amputees feel again | Engadget

Facebook says Russia-linked ad agency tried to smear Covid vaccines

Facebook said Tuesday that it has removed hundreds of accounts linked to a mysterious advertising agency operating out of Russia that sought to pay social media influencers to smear Covid-19 vaccines made by Pfizer and AstraZeneca.

A network of 65 Facebook accounts and 243 Instagram accounts was traced back to Fazze, an advertising and marketing firm working in Russia on behalf of an unknown client.

The network used fake accounts to spread misleading claims that disparaged the safety of the Pfizer and AstraZeneca vaccines. One claimed AstraZeneca’s shot would turn a person into a chimpanzee. The fake accounts targeted audiences in India, Latin America and, to a lesser extent, the U.S., using several social media platforms including Facebook and Instagram.

[…]

The Fazze network also contacted social media influencers in several countries with offers to pay them for reposting the misleading content. That ploy backfired when influencers in Germany and France exposed the network’s offer.

[…]

Fazze’s effort did not get much traction online, with some posts failing to get even a single response. But, while the campaign may have fizzled, it’s noteworthy because of its effort to enlist social media influencers, according to Nathaniel Gleicher, Facebook’s head of security policy.

“Although it was sloppy and didn’t have very good reach, it was an elaborate setup,” Gleicher said on a conference call announcing Tuesday’s actions.

[…]

Facebook investigators say some influencers did post the material, but later deleted it when stories about Fazze’s work began to emerge.

French YouTuber Léo Grasset was among those contacted by Fazze. He told The Associated Press in May that he was asked to post a 45- to 60-second video on Instagram, TikTok or YouTube criticizing the mortality rate of the Pfizer vaccine.

When Grasset asked Fazze to identify their client, the firm declined. Grasset refused the offer and went public with his concerns.

The offer from Fazze urged influencers not to mention that they were being paid, and also suggested they criticize the media’s reporting on vaccines.

[…]

Source: Facebook says Russia-linked ad agency tried to smear Covid vaccines

‘Easy money’: How international scam artists pulled off an epic theft of Covid benefits

[…]

Russian mobsters, Chinese hackers and Nigerian scammers have used stolen identities to plunder tens of billions of dollars in Covid benefits, spiriting the money overseas in a massive transfer of wealth from U.S. taxpayers, officials and experts say. And they say it is still happening.

Among the ripest targets for the cybertheft have been jobless programs. The federal government cannot say for sure how much of the more than $900 billion in pandemic-related unemployment relief has been stolen, but credible estimates range from $87 billion to $400 billion — at least half of which went to foreign criminals, law enforcement officials say.

Those staggering sums dwarf, even on the low end, what the federal government spends every year on intelligence collection, food stamps or K-12 education.

“This is perhaps the single biggest organized fraud heist we’ve ever seen,” said security researcher Armen Najarian of the firm RSA, who tracked a Nigerian fraud ring as it allegedly siphoned millions of dollars out of more than a dozen states.

Jeremy Sheridan, who directs the office of investigations at the Secret Service, called it “the largest fraud scheme that I’ve ever encountered.”

“Due to the volume and pace at which these funds were made available and a lot of the requirements that were lifted in order to release them, criminals seized on that opportunity and were very, very successful — and continue to be successful,” he said.

While the enormous scope of Covid relief fraud has been clear for some time, scant attention has been paid to the role of organized foreign criminal groups, who move taxpayer money overseas via laundering schemes involving payment apps and “money mules,” law enforcement officials said.

“This is like letting people just walk right into Fort Knox and take the gold, and nobody even asked any questions,” said Blake Hall, the CEO of ID.me, which has contracts with 27 states to verify identities.

Officials and analysts say both domestic and foreign fraudsters took advantage of an already weak system of unemployment verification maintained by the states, which has been flagged for years by federal watchdogs. Adding to the vulnerability, states made it easier to apply for Covid benefits online during the pandemic, and officials felt pressure to expedite processing. The federal government also rolled out new benefits for contractors and gig workers that required no employer verification.

In that environment, crooks were easily able to impersonate jobless Americans using stolen identity information for sale in bulk in the dark corners of the internet. The data — birthdates, Social Security numbers, addresses and other private information — have accumulated online for years through huge data breaches, including hacks of Yahoo, LinkedIn, Facebook, Marriott and Experian.

At home, prison inmates and drug gangs got in on the action. But experts say the best-organized efforts came from abroad, with criminals from nearly every country swooping in to steal on an industrial scale.

[…]

Under the Pandemic Unemployment Assistance program for gig workers and contractors, people could apply for retroactive relief, claiming months of joblessness with no employer verification possible. In some cases, that meant checks or debit cards worth $20,000, Hall said.

“Organized crime has never had an opportunity where any American’s identity could be converted into $20,000, and it became their Super Bowl,” he said. “And these states were not equipped to do identity verification, certainly not remote identity verification. And in the first few months and still today, organized crime has just made these states a target.”

[…]

The investigative journalism site ProPublica calculated last month that from March to December 2020, the number of jobless claims added up to about two-thirds of the country’s labor force, when the actual unemployment rate was 23 percent. Although some people lose jobs more than once in a given year, that alone could not account for the vast disparity.

The thievery continues. Maryland, for example, in June detected more than half a million potentially fraudulent unemployment claims in May and June alone. Most of the attempts were blocked, but experts say that nationwide, many are still getting through.

The Biden administration has acknowledged the problem and blamed it on the Trump administration.

[…]

In a memo in February, the inspector general reported that as of December, 22 of 54 state and territorial workforce agencies were still not following its repeated recommendation to join a national data exchange to check Social Security numbers. And in July, the inspector general reported that the national association of state workforce agencies had not been sharing fraud data as required by federal regulations.

Twenty states failed to perform all the required database identity checks, and 44 states did not perform all recommended ones, the inspector general found.

“The states have been chronically underfunded for years — they’re running 1980s technology,” Hall said.

[…]

The FBI has opened about 2,000 investigations, Greenberg said, but it has recovered just $100 million. The Secret Service, which focuses on cyber and economic crimes, has clawed back $1.3 billion. But the vast majority of the pilfered funds are gone for good, experts say, including tens of billions of dollars sent out of the country through money-moving applications such as Cash.app.

[…]

One of the few examples in which analysts have pointed the finger at a specific foreign group involves a Nigerian fraud ring dubbed Scattered Canary by security researchers. The group had been committing cyberfraud for years when the pandemic benefits presented a ripe target, Najarian said.

[…]

Scattered Canary took advantage of a quirk in Google’s system. Gmail does not recognize dots in email addresses — John.Doe@gmail.com and JohnDoe@gmail.com are routed to the same account. But state unemployment systems treated them as distinct email addresses.

Exploiting that trait, the group was able to create dozens of fraudulent state unemployment accounts that funneled benefits to the same email address, according to research by Najarian and others at Agari.
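
The normalization gap is easy to demonstrate. Here is a minimal sketch in Python of the canonicalization a claims system could have applied before deduplicating applicants (the function name is ours, not from any state system):

```python
def canonical_gmail(address: str) -> str:
    """Collapse an address to the form Gmail actually routes on:
    case-insensitive, with dots in the local part ignored.
    Sketch for deduplication; the function name is illustrative."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"

# Two "different" claimant emails collapse to one canonical mailbox:
print(canonical_gmail("John.Doe@gmail.com"))  # johndoe@gmail.com
print(canonical_gmail("JohnDoe@gmail.com"))   # johndoe@gmail.com
```

Had the state systems compared addresses in this canonical form rather than as raw strings, the dozens of dot-variant accounts would have collided into one.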

In April and May of 2020, Scattered Canary filed at least 174 fraudulent claims for unemployment benefits with the state of Washington, Agari found — each claim eligible to receive up to $790 a week, for a total of $20,540 over 26 weeks. With the addition of the $600-per-week Covid supplement, the maximum potential loss was $4.7 million for those claims alone, Agari found.

[…]

Source: ‘Easy money’: How international scam artists pulled off an epic theft of Covid benefits

Secret terrorist watchlist with 2 million records exposed online

In July this year, Security Discovery researcher Bob Diachenko came across a plethora of JSON records in an exposed Elasticsearch cluster that piqued his interest.

The 1.9 million-strong recordset contained sensitive information on people, including their names, country citizenship, gender, date of birth, passport details, and no-fly status.

The exposed server was indexed by search engines Censys and ZoomEye, indicating Diachenko may not have been the only person to come across the list:

exposed watchlist records
An excerpt from exposed watchlist records (Bob Diachenko)

The researcher told BleepingComputer that given the nature of the exposed fields (e.g. passport details and “no_fly_indicator”) it appeared to be a no-fly or a similar terrorist watchlist.

Additionally, the researcher noticed some elusive fields, such as “tag,” “nomination type,” and “selectee indicator,” that he couldn’t immediately interpret.

“That was the only valid guess given the nature of data plus there was a specific field named ‘TSC_ID’,” Diachenko told BleepingComputer, which hinted to him the source of the recordset could be the Terrorist Screening Center (TSC).

[…]

Source: Secret terrorist watchlist with 2 million records exposed online

If there are 2 million names on that list, isn’t the definition of ‘terrorist’ maybe a little bit broad?

T-Mobile Confirms It Was Hacked, lost full subscriber info for USA

T-Mobile confirmed hackers gained access to the telecom giant’s systems in an announcement published Monday.

The move comes after Motherboard reported that T-Mobile was investigating a post on an underground forum offering for sale Social Security Numbers and other private data. The forum post at the time didn’t name T-Mobile, but the seller told Motherboard the data came from T-Mobile servers.

[…]

Source: T-Mobile Confirms It Was Hacked

Debian 11 “bullseye” released

After 2 years, 1 month, and 9 days of development, the Debian project is proud to present its new stable version 11 (code name bullseye), which will be supported for the next 5 years thanks to the combined work of the Debian Security team and the Debian Long Term Support team.

Debian 11 bullseye ships with several desktop applications and environments. Amongst others it now includes the desktop environments:

  • Gnome 3.38,
  • KDE Plasma 5.20,
  • LXDE 11,
  • LXQt 0.16,
  • MATE 1.24,
  • Xfce 4.16.

This release contains over 11,294 new packages for a total count of 59,551 packages, along with a significant reduction of over 9,519 packages which were marked as obsolete and removed. 42,821 packages were updated and 5,434 packages remained unchanged.

bullseye becomes our first release to provide a Linux kernel with support for the exFAT filesystem and defaults to using it for mounting exFAT filesystems. Consequently it is no longer required to use the filesystem-in-userspace implementation provided via the exfat-fuse package. Tools for creating and checking an exFAT filesystem are provided in the exfatprogs package.

Most modern printers are able to use driverless printing and scanning without the need for vendor specific (often non-free) drivers. bullseye brings forward a new package, ipp-usb, which uses the vendor neutral IPP-over-USB protocol supported by many modern printers. This allows a USB device to be treated as a network device. The official SANE driverless backend is provided by sane-escl in libsane1, which uses the eSCL protocol.

[…]

Source: Debian — News — Debian 11 “bullseye” released

Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely

[…]

an AI on your phone will scan all those you have sent and will send to iPhotos. It will generate fingerprints that purportedly identify pictures, even if highly modified, that will be checked against fingerprints of known CSAM material. Too many of these – there’s a threshold – and Apple’s systems will let Apple staff investigate. They won’t get the pictures, but rather a voucher containing a version of the picture. But that’s not the picture, OK? If it all looks too dodgy, Apple will inform the authorities
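
Stripped of the cryptography, that fingerprint-plus-threshold scheme reduces to something like the following sketch. All names and numbers here are illustrative assumptions (this is not Apple's NeuralHash, and the real threshold is not public); the point is only that matching is fuzzy and reporting is gated on a count:

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

def account_flagged(image_hashes, known_hashes, max_distance=4, threshold=3):
    """Count images whose fingerprint lies within `max_distance` bits of
    any known fingerprint; flag the account only past `threshold` matches.
    Illustrative sketch; names and numbers are assumptions."""
    matches = sum(
        1 for h in image_hashes
        if any(hamming(h, k) <= max_distance for k in known_hashes)
    )
    return matches >= threshold
```

The "even if highly modified" property comes from the fuzzy distance check: a recompressed or cropped image yields a fingerprint a few bits away from the original, which still counts as a match.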

[…]

In a blog post “Recognizing People in Photos Through Private On-Device Machine Learning” last month, Apple plumped itself up and strutted its funky stuff on how good its new person recognition process is. Obscured, oddly lit, accessorised, madly angled and other bizarrely presented faces are no problemo, squire.

By dint of extreme cleverness and lots of on-chip AI, Apple says it can efficiently recognise everyone in a gallery of photos. It even has a Hawking-grade equation, just to show how serious it is, as proof that “finally, we rescale the obtained features by s and use it as logit to compute the softmax cross-entropy loss based on the equation below.” Go, look. It’s awfully science-y.

The post is 3,500 words long, complex, and a very detailed paper on computer vision, one of the two tags Apple has given it. The other tag, Privacy, can be entirely summarised in six words: it’s on-device, therefore it’s private. No equation.

That would be more comforting if Apple hadn’t said days later how on-device analysis is going to be a key component in informing law enforcement agencies about things they disapprove of. Put the two together, and there’s a whole new and much darker angle to the fact, sold as a major consumer benefit, that Apple has been cramming in as much AI as it can so it can look at pictures as you take them and after you’ve stored them.

We’ve all been worried about how mobile phones are stuffed with sensors that can watch what we watch, hear what we hear, track where we go and note what we do. The evolving world of personal data privacy is based around these not being stored in the vast vaults of big data, keeping them from being grist to the mill of manipulating our digital personas.

But what happens if the phone itself grinds that corn? It may never share a single photograph without your permission, but what if it can look at that photograph and generate precise metadata about what, who, how, when, and where it depicts?

This is an aspect of edge computing that is ahead of the regulators, even those of the EU who want to heavily control things like facial recognition. By the time any such regulation is produced, countless millions of devices will be using it to ostensibly provide safe, private, friendly on-device services that make taking and keeping photographs so much more convenient and fun.

It’s going to be very hard to turn that off, and very easy to argue for exemptions that weaken the regs to the point of pointlessness. Especially if the police and security services lobby hard as well, which they will as soon as they realise that this defeats end-to-end encryption without even touching end-to-end encryption.

So yes, Apple’s anti-CSAM model is capable of being used without impacting the privacy of the innocent, if it is run exactly as version 1.0 is described. It is also capable of working with the advances elsewhere in technology to break that privacy utterly, without setting off the tripwires of personal protection we’re putting in place right now.

[…]

Source: Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely • The Register

Ethereum gets rid of miners and electricity costs in 2022 update

Ethereum is making big changes. Perhaps the most important is the jettisoning of the “miners” who track and validate transactions on the world’s most-used blockchain network. Miners are the heart of a system known as proof of work. It was pioneered by Bitcoin and adopted by Ethereum, and has come under increasing criticism for its environmental impact: Bitcoin miners now use as much electricity as some small nations. Along with being greener and faster, proponents say the switch, now planned to be phased in by early 2022, will illustrate another difference between Ethereum and Bitcoin: A willingness to change, and to see the network as a product of community as much as code.

[…]

the system’s electricity usage is now enormous: Researchers at Cambridge University say that the Bitcoin network’s annual electric bill often exceeds that of countries such as Chile and Bangladesh. This has led to calls from environmentally conscious investors, including cryptocurrency booster Elon Musk and others, to shun Bitcoin and Ethereum and any coins that use proof of work. It’s also led to a growing dominance by huge, centralized mining farms that’s antithetical to a system that was designed to be decentralized, since a blockchain could in theory be rewritten by a party that controlled a majority of mining power.

[…]

The idea behind proof of stake is that the blockchain can be secured more simply if you give a group of people carrot-and-stick incentives to collaborate in checking and crosschecking transactions. It works like this:

* Anyone who puts up, or stakes, 32 Ether can take part. (Ether, the coin used to operate the Ethereum system, reached values of over $4,000 in May.)

* People in that pool are chosen at random to be “validators” of a batch of transactions, a role that requires them to order the transactions and propose the resulting block to the network.

* Validators share that new chunk of blockchain with a group of members of the pool who are chosen to be “attestors.” A minimum of 128 attestors is required for any given block procedure.

* The attestors review the validator’s work and either accept it or reject it. If it’s accepted, both the validators and the attestors are given free Ether.
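The selection-and-attestation loop above can be sketched in a few lines (a toy illustration with invented names and simplified rules, not Ethereum's actual consensus code):

```python
import random

# Toy sketch of the stake-weighted selection and attestation steps above.
# Names, stakes and thresholds are illustrative only.
MIN_STAKE = 32       # Ether required to join the validator pool
MIN_ATTESTORS = 128  # attestors needed per block, per the article

stakers = {"alice": 32, "bob": 64, "carol": 32}

def pick_validator(pool):
    """Choose a validator at random, weighted by the size of each stake."""
    eligible = [name for name, stake in pool.items() if stake >= MIN_STAKE]
    weights = [pool[name] for name in eligible]
    return random.choices(eligible, weights=weights, k=1)[0]

def block_accepted(votes):
    """A proposed block stands only if enough attestors approve it."""
    return len(votes) >= MIN_ATTESTORS and sum(votes) > len(votes) // 2
```

With 64 Ether staked, bob is twice as likely as alice or carol to be picked for any given block, and a block attested by fewer than 128 pool members is rejected outright.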

5. What are the system’s advantages?

It’s thought that switching to proof of stake would cut Ethereum’s energy use, estimated at 45,000 gigawatt-hours, by 99.9%. Like any other venture depending on cloud computing, its carbon footprint would then be only that of its servers. It is also expected to increase the network speed. That’s important for Ethereum, which has ambitions of becoming a platform for a vast range of financial and commercial transactions. Currently, Ethereum handles about 30 transactions per second. With sharding, Vitalik Buterin, the inventor of Ethereum, thinks that could go to 100,000 per second.
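Taking the article's figures at face value, the promised 99.9% cut is easy to sanity-check:

```python
# Simple arithmetic on the figures quoted in the article: a 99.9% cut
# to an estimated 45,000 GWh annual energy draw.
annual_gwh = 45_000
cut = 0.999

remaining_gwh = annual_gwh * (1 - cut)
print(f"~{remaining_gwh:.0f} GWh/year remaining after the switch")
```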

6. What are its downsides?

In a proof of stake system, it would be harder than in a proof of work system for a group to gain control of the process, but it would still be possible: The more Ether a person or group stakes, the better the chance of being chosen as a validator or attestor. Economic disincentives have been put in place to dissuade behavior that is bad for the network. A validator that tries to manipulate the process could lose part of the 32 Ether they have staked, for example. Wilson Withiam, a senior research analyst at Messari, a crypto research firm, who specializes in blockchain protocols, said the problem lies at the heart of the challenge of decentralized systems. “This is one of the most important questions going forward,” he said. “How do you help democratize the staking system?”

7. How else is Ethereum changing?

The most recent change was called the London hard fork, which went into effect in early August. The biggest change to the Ethereum blockchain since 2015, the London hard fork included a fee overhaul called EIP 1559. The new mechanism destroys (“burns”) part of the fee paid in every transaction, reducing the supply of Ether and creating the possibility that Ethereum could become deflationary. As of mid-August, 3.2 Ether per minute were being destroyed because of EIP 1559, according to tracking website ultrasound.money. That could put upward pressure on the price of Ether going forward. Another change in the works is called sharding, which will divide the Ethereum network into 64 parallel shard chains. Transactions within a shard would be processed separately, and the results would then be reconciled with a main network linked to all the other shards, making the overall network much faster.
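A back-of-the-envelope annualisation of the quoted burn rate (mid-August's 3.2 Ether per minute, assumed constant, which it certainly won't be):

```python
# Annualise the EIP 1559 burn rate cited in the article.
burn_per_minute = 3.2            # Ether destroyed per minute, mid-August
minutes_per_year = 60 * 24 * 365

burned_per_year = burn_per_minute * minutes_per_year
print(f"{burned_per_year:,.0f} Ether burned per year at that rate")
```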

[…]

Source: Bye-Bye, Miners! How Ethereum’s Big Change Will Work – Bloomberg

Lamborghini Countach LPI800-4 Hybrid v12

The Lamborghini Countach LPI800-4 is a futuristic limited edition that pays homage to the original, recreated for the 21st century. Head of design at Lamborghini, Mitja Borkert, took cues from the various iterations of the Countach to inspire his latest creation. The Countach’s distinctive wedge-shaped silhouette has been retained, with a single line from the nose to the tail, a design trait that runs through all V12 Lambos.

The final outline references the first LP500 and LP400 production versions. The face was inspired by the Quattrovalvole edition and the wheel arches have a hexagonal theme. There is no fixed rear wing as seen in later designs of the Countach. The distinctive NACA air intakes are cut into the side and doors of the Countach LPI800-4. Access for occupants is via the famous scissor doors, first introduced on the Countach and a Lamborghini V12 signature.

Under the slatted engine cover is, naturally, a V12 engine that can rev to almost 9 000 r/min. The 6,5-litre engine is naturally aspirated, but it does have an electrical boost component integrated into the transmission that is powered by a supercapacitor. Total system power output is rated at 600 kW. Like all modern V12 Lambos, power is directed to all four wheels. Lamborghini says the Countach can blitz the 0-100 km/h run in just 2,8 seconds, complete the 0-200 km/h dash in 8,6 seconds and reach a top speed of 355 km/h.

Source: Lamborghini Countach LPI800-4 Debuts [w/video] – Double Apex

Absolutely gorgeous!

Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons

[…]Rockstar Games has previously had its own run-in with its modding community, banning modders who attempted to shift GTA5’s online gameplay to dedicated servers that would allow mods to be used, since Rockstar’s servers don’t allow mods. What it’s now doing in issuing copyright notices against modders who have been forklifting older Rockstar assets into newer GTA games, however, is totally different.

Grand Theft Auto publisher Take-Two has issued copyright takedown notices for several mods on LibertyCity.net, according to a post from the site. The mods either inserted content from older Rockstar games into newer ones, or combined content from similar Rockstar games into one larger game. The mods included material from Grand Theft Auto 3, San Andreas, Vice City, Manhunt, and Bully.

This has been a legally active year for Take-Two, starting with takedown notices for reverse-engineered versions of GTA3 and Vice City. Those projects were later restored. Since then, Take-Two has issued takedowns for mods that move content from older Grand Theft Auto games into GTA5, as well as mods that combine older games from the GTA3 generation into one. That led to a group of modders preemptively taking down their 14-year-old mod for San Andreas in case they were next on Take-Two’s list.

All of this is partially notable because it’s new. Like many games released for the PC, the GTA series has enjoyed a healthy modding community. And Rockstar, previously, has largely left this modding community alone. Which is generally smart, as mods such as the ones the community produces are fantastic ways to both keep a game fresh as it ages and lure in new players to the original game by enticing them with mods that meet their particular interests. I’ll never forget a Doom mod that replaced all of the original MIDI soundtrack files with MIDI versions of ’90s alternative grunge music. That mod caused me to play Doom all over again from start to finish.

But now Rockstar Games has flipped the script and is busily taking these fan mods down. Why? Well, no one is certain, but likely for the most obvious reason of all.

One reason a company might become more concerned with this kind of copyright infringement is that it’s planning to release a similar product and wants to be sure that its claim to the material can’t be challenged. It’s speculative at this point, but that tracks with the rumors we heard earlier this year that Take-Two is working on remakes of the PS2 Grand Theft Auto games.

In other words, Rockstar appears to be completely happy to reap all the benefits from the modding community right up until the moment it thinks it can make more money with re-releases, at which point the company cries “Copyright!” The company may well be within its rights to operate that way, but why in the world would the modding community ever work on Rockstar games again?

Source: Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons | Techdirt

Senators ask Amazon how it will use palm print data from its stores

If you’re concerned that Amazon might misuse palm print data from its One service, you’re not alone. TechCrunch reports that Senators Amy Klobuchar, Bill Cassidy and Jon Ossoff have sent a letter to new Amazon chief Andy Jassy asking him to explain how the company might expand use of One’s palm print system beyond stores like Amazon Go and Whole Foods. They’re also worried the biometric payment data might be used for more than payments, such as for ads and tracking.

The politicians are concerned that Amazon One reportedly uploads palm print data to the cloud, creating “unique” security issues. The move also casts doubt on Amazon’s “respect” for user privacy, the senators said.

In addition to asking about expansion plans, the senators wanted Jassy to outline the number of third-party One clients, the privacy protections for those clients and their customers and the size of the One user base. The trio gave Amazon until August 26th to provide an answer.

[…]

The company has offered $10 in credit to potential One users, raising questions about its eagerness to collect palm print data. This also isn’t the first time Amazon has clashed with government

[…]

Amazon declined to comment, but pointed to an earlier blog post where it said One palm images were never stored on-device and were sent encrypted to a “highly secure” cloud space devoted just to One content.

Source: Senators ask Amazon how it will use palm print data from its stores (updated) | Engadget

Basically, storing all these palm prints in the cloud is an incredibly insecure way to keep biometric data that people can’t ever change, short of burning their palms off.

Poly Network Offers $500k Reward to Hacker Who Stole $611 Million and then returned it

A cryptocurrency platform that was hacked and had hundreds of millions of dollars stolen from it has now offered the thief a “reward” of $500,000 after the criminal returned almost all of the money.

A few days ago a hacker exploited a vulnerability in the blockchain technology of decentralized finance (DeFi) platform Poly Network, pilfering a whopping $611 million in various tokens—the crypto equivalent of a gargantuan bank robbery. It is thought to be the largest robbery of its kind in DeFi history.

The company subsequently posted an absurd open letter to the thief that began “Dear Hacker” and proceeded to beg for its money back while also insinuating that the criminal would ultimately be caught by police.

Amazingly, this tactic seemed to work—and the hacker (or hackers) began returning the crypto. As of Friday, almost the entirety of the massive haul had been returned to blockchain accounts controlled by the company, though a sizable $33 million in Tether coin still remains frozen in an account solely controlled by the thief.

After this, Poly weirdly started calling the hacker “Mr. White Hat”—essentially dubbing them a virtuous penetration tester rather than a disruptive criminal. Even more strange, on Friday Poly Network confirmed to Reuters that it had offered $500,000 to the cybercriminal, dubbing it a “bug bounty.”

Bug bounties are programs wherein a company will pay cyber-pros to find holes in its IT defenses. However, such programs are typically commissioned by companies and addressed by well-known infosec professionals, not conducted unprompted and ad-hoc by rogue, anonymous hackers. Similarly, I’ve never heard of a penetration tester stealing hundreds of millions of dollars from a company as part of their test.

Nonetheless, Poly Network apparently told the hacker: “Since, we (Poly Network) believe your action is white hat behavior, we plan to offer you a $500,000 bug bounty after you complete the refund fully. Also we assure you that you will not be accountable for this incident.” We reached out to the company to try to independently confirm these reports.

The hacker reportedly refused to take the crypto platform up on its offer, opting instead to post a series of public messages in one of the crypto wallets that was used to return funds. Dubbed “Q & A sessions,” the posts purport to explain why the heist took place. The self-interviews were shared over social media by Tom Robinson, co-founder of crypto-tracking firm Elliptic. In one of them, the hacker explains:

Q: WHY HACKING?
A: FOR FUN 🙂

Q: WHY POLY NETWORK?
A: CROSS CHAIN HACKING IS HOT

Q: WHY TRANSFERRING TOKENS
A: TO KEEP IT SAFE.

In another post, the hacker purportedly proclaimed, “I’m not interested in money!” and said, “I would like to give them tips on how to secure their networks,” apparently referencing the blockchain provider.

So, yeah, what do we think here, folks? Is the hacker:

  • A) a good samaritan who stole the better part of a billion dollars to teach a crypto company a lesson?
  • B) a spineless weasel who realized they were in tremendous levels of shit and decided to engineer a way out of their criminal deed?

The answer is unclear at the moment, but gee, does it make for quality entertainment. Tune in next week for a new episode of Misadventures in De-Fi Cybersecurity. Thrilling stuff, no?

Source: Poly Network Offers Reward to Hacker Who Stole $611 Million

Engineers make critical advance in quantum computer design

The engineers discovered a new technique they say will be capable of controlling millions of spin qubits—the basic units of information in a silicon quantum processor.

Until now, quantum computer engineers and scientists have worked with a proof-of-concept model of quantum processors by demonstrating the control of only a handful of qubits.

[…]

“Up until this point, controlling electron spin qubits relied on us delivering microwave magnetic fields by putting a current through a wire right beside the qubits,” Dr. Pla says.

“This poses some real challenges if we want to scale up to the millions of qubits that a quantum computer will need to solve globally significant problems, such as the design of new vaccines.

“First off, the magnetic fields drop off really quickly with distance, so we can only control those qubits closest to the wire. That means we would need to add more and more wires as we brought in more and more qubits, which would take up a lot of real estate on the chip.”

And since the chip must operate at freezing cold temperatures, below -270°C, Dr. Pla says introducing more wires would generate way too much heat in the chip, interfering with the reliability of the qubits.

[…]

Rather than having thousands of control wires on the same thumbnail-sized silicon chip that also needs to contain millions of qubits, the team looked at the feasibility of generating a magnetic field from above the chip that could manipulate all of the qubits simultaneously.

[…]

Dr. Pla and the team introduced a new component directly above the silicon chip—a crystal prism called a dielectric resonator. When microwaves are directed into the resonator, it focuses the wavelength of the microwaves down to a much smaller size.

“The dielectric resonator shrinks the wavelength down below one millimeter, so we now have a very efficient conversion of microwave power into the magnetic field that controls the spins of all the qubits.

“There are two key innovations here. The first is that we don’t have to put in a lot of power to get a strong driving field for the qubits, which crucially means we don’t generate much heat. The second is that the field is very uniform across the chip, so that millions of qubits all experience the same level of control.”
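The physics of that shrinkage is straightforward: inside a dielectric, a wave's length is divided by the square root of the material's relative permittivity. The frequency and permittivity below are illustrative assumptions, not figures from the UNSW work:

```python
# Wavelength of a microwave inside a dielectric resonator.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz, eps_r):
    """Wavelength in millimetres in a dielectric of relative permittivity eps_r."""
    return C / (freq_hz * eps_r ** 0.5) * 1000.0

# A 12 GHz microwave is about 25 mm long in vacuum; in a crystal with a
# relative permittivity of 1000, it shrinks below one millimetre.
print(wavelength_mm(12e9, 1.0))
print(wavelength_mm(12e9, 1000.0))
```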

[…]

Source: Engineers make critical advance in quantum computer design