Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely

[…]

an AI on your phone will scan all the pictures you have sent and will send to iCloud Photos. It will generate fingerprints that purportedly identify pictures, even if highly modified, which will be checked against fingerprints of known CSAM material. Too many of these – there’s a threshold – and Apple’s systems will let Apple staff investigate. They won’t get the pictures, but rather a voucher containing a version of the picture. But that’s not the picture, OK? If it all looks too dodgy, Apple will inform the authorities
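Mechanically, what's being described is fuzzy fingerprint matching with a reporting threshold. A toy sketch of that flow follows – emphatically not Apple's NeuralHash: a simple average hash stands in for the real, modification-robust fingerprint, and the threshold and helper names are invented for illustration:

```python
# Toy sketch of threshold-based perceptual-hash matching.
# NOT Apple's NeuralHash: average hashing is a crude stand-in for
# the real learned fingerprint. All names here are hypothetical.
from PIL import Image

MAX_DISTANCE = 5   # Hamming distance treated as "same picture"
THRESHOLD = 30     # hypothetical match count before human review

def average_hash(path: str, size: int = 8) -> int:
    """64-bit fingerprint: 1 where a pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def count_matches(photo_paths, known_fingerprints) -> int:
    """Count photos that land close to any known fingerprint."""
    return sum(
        any(hamming(average_hash(p), f) <= MAX_DISTANCE
            for f in known_fingerprints)
        for p in photo_paths
    )

# Only past the threshold would the human-review step kick in:
# flagged = count_matches(user_photos, known_hashes) >= THRESHOLD
```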

[…]

In a blog post “Recognizing People in Photos Through Private On-Device Machine Learning” last month, Apple plumped itself up and strutted its funky stuff on how good its new person recognition process is. Obscured, oddly lit, accessorised, madly angled and other bizarrely presented faces are no problemo, squire.

By dint of extreme cleverness and lots of on-chip AI, Apple says it can efficiently recognise everyone in a gallery of photos. It even has a Hawking-grade equation, just to show how serious it is, as proof that “finally, we rescale the obtained features by s and use it as a logit to compute the softmax cross-entropy loss based on the equation below.” Go, look. It’s awfully science-y.
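For the curious, that quoted sentence pins down what the equation must look like: a softmax cross-entropy over rescaled features. A plausible reconstruction (my notation, not Apple's: $s$ is the rescaling factor, $f_j$ the feature score for identity $j$, and $y$ the true identity):

$$\mathcal{L} = -\log \frac{e^{\,s f_y}}{\sum_{j} e^{\,s f_j}}$$

The scale $s$ simply sharpens the softmax, pushing the network toward more confident, better-separated identity clusters.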

The post is 3,500 words long, complex, and a very detailed paper on computer vision, one of the two tags Apple has given it. The other tag, Privacy, can be entirely summarised in six words: it’s on-device, therefore it’s private. No equation.

That would be more comforting if Apple hadn’t said days later how on-device analysis is going to be a key component in informing law enforcement agencies about things they disapprove of. Put the two together, and there’s a whole new and much darker angle to the fact, sold as a major consumer benefit, that Apple has been cramming in as much AI as it can so it can look at pictures as you take them and after you’ve stored them.

We’ve all been worried about how mobile phones are stuffed with sensors that can watch what we watch, hear what we hear, track where we go and note what we do. The evolving world of personal data privacy is based around these not being stored in the vast vaults of big data, keeping them from being grist to the mill of manipulating our digital personas.

But what happens if the phone itself grinds that corn? It may never share a single photograph without your permission, but what if it can look at that photograph and generate precise metadata about what, who, how, when, and where it depicts?

This is an aspect of edge computing that is ahead of the regulators, even those of the EU who want to heavily control things like facial recognition. By the time any such regulation is produced, countless millions of devices will be using it to ostensibly provide safe, private, friendly on-device services that make taking and keeping photographs so much more convenient and fun.

It’s going to be very hard to turn that off, and very easy to argue for exemptions that weaken the regs to the point of pointlessness. Especially if the police and security services lobby hard as well, which they will as soon as they realise that this defeats end-to-end encryption without even touching end-to-end encryption.

So yes, Apple’s anti-CSAM model is capable of being used without impacting the privacy of the innocent, if it is run exactly as version 1.0 is described. It is also capable of working with the advances elsewhere in technology to break that privacy utterly, without setting off the tripwires of personal protection we’re putting in place right now.

[…]

Source: Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely • The Register

Ethereum gets rid of miners and electricity costs in 2022 update

Ethereum is making big changes. Perhaps the most important is the jettisoning of the “miners” who track and validate transactions on the world’s most-used blockchain network. Miners are the heart of a system known as proof of work. It was pioneered by Bitcoin and adopted by Ethereum, and has come under increasing criticism for its environmental impact: Bitcoin miners now use as much electricity as some small nations. Along with being greener and faster, proponents say the switch, now planned to be phased in by early 2022, will illustrate another difference between Ethereum and Bitcoin: A willingness to change, and to see the network as a product of community as much as code.

[…]

the system’s electricity usage is now enormous: Researchers at Cambridge University say that the Bitcoin network’s annual electric bill often exceeds that of countries such as Chile and Bangladesh. This has led to calls from environmentally conscious investors, including cryptocurrency booster Elon Musk and others, to shun Bitcoin and Ethereum and any coins that use proof of work. It’s also led to a growing dominance by huge, centralized mining farms that’s antithetical to a system that was designed to be decentralized, since a blockchain could in theory be rewritten by a party that controlled a majority of mining power.

[…]

The idea behind proof of stake is that the blockchain can be secured more simply if you give a group of people carrot-and-stick incentives to collaborate in checking and crosschecking transactions. It works like this (a toy simulation follows the list):

* Anyone who puts up, or stakes, 32 Ether can take part. (Ether, the coin used to operate the Ethereum system, reached values of over $4,000 in May.)

* People in that pool are chosen at random to be “validators” of a batch of transactions, a role that requires them to order the transactions and propose the resulting block to the network.

* Validators share that new chunk of blockchain with a group of members of the pool who are chosen to be “attestors.” A minimum of 128 attestors is required for any given block.

* The attestors review the validator’s work and either accept it or reject it. If it’s accepted, both the validators and the attestors are given free Ether.
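As a toy simulation of that loop (illustrative only – real Ethereum consensus is far more involved, and every number below other than the 32 Ether stake and the 128-attestor minimum is invented):

```python
# Toy proof-of-stake round: stake 32 Ether to join, one random
# validator proposes a block, 128 attestors check it, and all
# participants are rewarded. Reward size is invented.
import random

STAKE = 32            # Ether required to join the pool
MIN_ATTESTORS = 128   # attestors required per block
REWARD = 0.01         # hypothetical per-block reward, in Ether

def run_round(pool, transactions):
    eligible = [name for name, bal in pool.items() if bal >= STAKE]
    validator = random.choice(eligible)
    attestors = random.sample(
        [n for n in eligible if n != validator], MIN_ATTESTORS)

    block = sorted(transactions)       # validator orders the batch
    approvals = len(attestors)         # stand-in: everyone attests honestly

    if approvals >= MIN_ATTESTORS:     # block accepted
        for name in [validator, *attestors]:
            pool[name] += REWARD       # the "free Ether"
    else:
        pool[validator] -= STAKE / 2   # slashing, crudely modelled

pool = {f"staker{i}": 32.0 for i in range(200)}
run_round(pool, ["tx-c", "tx-a", "tx-b"])
```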

5. What are the system’s advantages?

It’s thought that switching to proof of stake would cut Ethereum’s energy use, estimated at 45,000 gigawatt-hours, by 99.9%. Like any other venture depending on cloud computing, its carbon footprint would then only be that of its servers. It is also expected to increase the network speed. That’s important for Ethereum, which has ambitions of becoming a platform for a vast range of financial and commercial transactions. Currently, Ethereum handles about 30 transactions per second. With sharding, Vitalik Buterin, the inventor of Ethereum, thinks that could go to 100,000 per second.
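Taking those figures at face value, the saving is easy to check: 45,000 GWh × (1 − 0.999) = 45 GWh, a thousand-fold reduction.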

6. What are its downsides?

In a proof of stake system, it would be harder than in a proof of work system for a group to gain control of the process, but it would still be possible: The more Ether a person or group stakes, the better the chance of being chosen as a validator or attestor. Economic disincentives have been put in place to dissuade behavior that is bad for the network. A validator that tries to manipulate the process could lose part of the 32 Ether they have staked, for example. Wilson Withiam, a senior research analyst at Messari, a crypto research firm, who specializes in blockchain protocols, said the problem lies at the heart of the challenge of decentralized systems. “This is one of the most important questions going forward,” he said. “How do you help democratize the staking system?”

7. How else is Ethereum changing?

The most recent change was called the London hard fork, which went into effect in early August. The biggest change to the Ethereum blockchain since 2015, the London hard fork included a fee overhaul called EIP 1559, which burns a portion of the fee paid with every transaction. That reduces the supply of Ether, creating the possibility that Ethereum could become deflationary. As of mid-August, 3.2 Ether per minute were being destroyed because of EIP 1559, according to tracking website ultrasound.money. That could put upward pressure on the price of Ether going forward. Another change in the works is called sharding, which will divide the Ethereum network into 64 parallel chains, or shards. Transactions within a shard would be processed separately, and the results would then be reconciled with a main network linked to all the other shards, making the overall network much faster.
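At the quoted burn rate, that works out to 3.2 × 60 × 24 ≈ 4,600 Ether destroyed per day.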

[…]

Source: Bye-Bye, Miners! How Ethereum’s Big Change Will Work – Bloomberg

Lamborghini Countach LPI800-4 Hybrid v12

The Lamborghini Countach LPI800-4 is a futuristic limited edition that pays homage to the original, recreated for the 21st century. Head of design at Lamborghini, Mitja Borkert, took cues from the various iterations of the Countach to inspire his latest creation. The Countach’s distinctive wedge-shaped silhouette has been retained, with a single line from the nose to the tail, a design trait that runs through all V12 Lambos.

The final outline references the first LP500 and LP400 production versions. The face was inspired by the Quattrovalvole edition and the wheel arches have a hexagonal theme. There is no fixed rear wing as seen in later designs of the Countach. The distinctive NACA air intakes are cut into the side and doors of the Countach LPI800-4. Access for occupants is via the famous scissor doors, first introduced on the Countach and a Lamborghini V12 signature.

Under the slatted engine cover is, naturally, a V12 engine that can rev to almost 9 000 r/min. The 6,5-litre engine is naturally aspirated but it does have an electrical boost component integrated into the transmission that is powered by a supercapacitor. Total system power output is rated at 600 kW. Like all the modern V12 Lambos, power is directed to all four wheels. Lamborghini says the Countach can blitz the 0-100 km/h run in just 2,8 seconds, complete the 0-200 km/h dash in 8,6 seconds, and reach a top speed of 355 km/h.

Source: Lamborghini Countach LPI800-4 Debuts [w/video] – Double Apex

Absolutely gorgeous!

Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons

[…]Rockstar Games has previously had its own run-in with its modding community, banning modders who attempted to shift GTA5’s online gameplay to dedicated servers that would allow mods to be used, since Rockstar’s servers don’t allow mods. What it’s now doing in issuing copyright notices on modders who have been forklifting older Rockstar assets into newer GTA games, however, is totally different.

Grand Theft Auto publisher Take-Two has issued copyright takedown notices for several mods on LibertyCity.net, according to a post from the site. The mods either inserted content from older Rockstar games into newer ones, or combined content from similar Rockstar games into one larger game. The mods included material from Grand Theft Auto 3, San Andreas, Vice City, Manhunt, and Bully.

This has been a legally active year for Take-Two, starting with takedown notices for reverse-engineered versions of GTA3 and Vice City. Those projects were later restored. Since then, Take-Two has issued takedowns for mods that move content from older Grand Theft Auto games into GTA5, as well as mods that combine older games from the GTA3 generation into one. That led to a group of modders preemptively taking down their 14-year-old mod for San Andreas in case they were next on Take-Two’s list.

All of this is partially notable because it’s new. Like many games released for the PC, the GTA series has enjoyed a healthy modding community. And Rockstar, previously, has largely left this modding community alone. Which is generally smart, as mods such as the ones the community produces are fantastic ways to both keep a game fresh as it ages and lure in new players to the original game by enticing them with mods that meet their particular interests. I’ll never forget a Doom mod that replaced all of the original MIDI soundtrack files with MIDI versions of ’90s alternative grunge music. That mod caused me to play Doom all over again from start to finish.

But now Rockstar Games has flipped the script and is busily taking these fan mods down. Why? Well, no one is certain, but likely for the most obvious reason of all.

One reason a company might become more concerned with this kind of copyright infringement is that it’s planning to release a similar product and wants to be sure that its claim to the material can’t be challenged. It’s speculative at this point, but that tracks with the rumors we heard earlier this year that Take-Two is working on remakes of the PS2 Grand Theft Auto games.

In other words, Rockstar appears to be completely happy to reap all the benefits from the modding community right up until the moment it thinks it can make more money with re-releases, at which point the company cries “Copyright!” The company may well be within its rights to operate that way, but why in the world would the modding community ever work on Rockstar games again?

Source: Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons | Techdirt

Senators ask Amazon how it will use palm print data from its stores

If you’re concerned that Amazon might misuse palm print data from its One service, you’re not alone. TechCrunch reports that Senators Amy Klobuchar, Bill Cassidy and Jon Ossoff have sent a letter to new Amazon chief Andy Jassy asking him to explain how the company might expand use of One’s palm print system beyond stores like Amazon Go and Whole Foods. They’re also worried the biometric payment data might be used for more than payments, such as for ads and tracking.

The politicians are concerned that Amazon One reportedly uploads palm print data to the cloud, creating “unique” security issues. The move also casts doubt on Amazon’s “respect” for user privacy, the senators said.

In addition to asking about expansion plans, the senators wanted Jassy to outline the number of third-party One clients, the privacy protections for those clients and their customers and the size of the One user base. The trio gave Amazon until August 26th to provide an answer.

[…]

The company has offered $10 in credit to potential One users, raising questions about its eagerness to collect palm print data. This also isn’t the first time Amazon has clashed with government

[…]

Amazon declined to comment, but pointed to an earlier blog post where it said One palm images were never stored on-device and were sent encrypted to a “highly secure” cloud space devoted just to One content.

Source: Senators ask Amazon how it will use palm print data from its stores (updated) | Engadget

Basically, keeping all these palm prints in the cloud is an incredibly insecure way to store biometric data that people can’t ever change, short of burning their palms off.

Poly Network Offers $500k Reward to Hacker Who Stole $611 Million and then returned it

A cryptocurrency platform that was hacked and had hundreds of millions of dollars stolen from it has now offered the thief a “reward” of $500,000 after the criminal returned almost all of the money.

A few days ago a hacker exploited a vulnerability in the blockchain technology of decentralized finance (DeFi) platform Poly Network, pilfering a whopping $611 million in various tokens—the crypto equivalent of a gargantuan bank robbery. It is thought to be the largest robbery of its kind in DeFi history.

The company subsequently posted an absurd open letter to the thief that began “Dear Hacker” and proceeded to beg for its money back while also insinuating that the criminal would ultimately be caught by police.

Amazingly, this tactic seemed to work—and the hacker (or hackers) began returning the crypto. As of Friday, almost the entirety of the massive haul had been returned to blockchain accounts controlled by the company, though a sizable $33 million in Tether coin still remains frozen in an account solely controlled by the thief.

After this, Poly weirdly started calling the hacker “Mr. White Hat”—essentially dubbing them a virtuous penetration tester rather than a disruptive criminal. Even more strange, on Friday Poly Network confirmed to Reuters that it had offered $500,000 to the cybercriminal, dubbing it a “bug bounty.”

Bug bounties are programs wherein a company will pay cyber-pros to find holes in its IT defenses. However, such programs are typically commissioned by companies and addressed by well-known infosec professionals, not conducted unprompted and ad-hoc by rogue, anonymous hackers. Similarly, I’ve never heard of a penetration tester stealing hundreds of millions of dollars from a company as part of their test.

Nonetheless, Poly Network apparently told the hacker: “Since, we (Poly Network) believe your action is white hat behavior, we plan to offer you a $500,000 bug bounty after you complete the refund fully. Also we assure you that you will not be accountable for this incident.” We reached out to the company to try to independently confirm these reports.

The hacker reportedly refused to take the crypto platform up on its offer, opting instead to post a series of public messages in one of the crypto wallets that was used to return funds. Dubbed “Q & A sessions,” the posts purport to explain why the heist took place. The self-interviews were shared over social media by Tom Robinson, co-founder of crypto-tracking firm Elliptic. In one of them, the hacker explains:

Q: WHY HACKING?
A: FOR FUN 🙂

Q: WHY POLY NETWORK?
A: CROSS CHAIN HACKING IS HOT

Q: WHY TRANSFERRING TOKENS
A: TO KEEP IT SAFE.

In another post, the hacker purportedly proclaimed, “I’m not interested in money!” and said, “I would like to give them tips on how to secure their networks,” apparently referencing the blockchain provider.

So, yeah, what do we think here, folks? Is the hacker:

  • A) a good samaritan who stole the better part of a billion dollars to teach a crypto company a lesson?
  • B) a spineless weasel who realized they were in tremendous levels of shit and decided to engineer a way out of their criminal deed?

The answer is unclear at the moment, but gee, does it make for quality entertainment. Tune in next week for a new episode of Misadventures in De-Fi Cybersecurity. Thrilling stuff, no?

Source: Poly Network Offers Reward to Hacker Who Stole $611 Million

Engineers make critical advance in quantum computer design

They discovered a new technique they say will be capable of controlling millions of spin qubits—the basic units of information in a silicon quantum processor.

Until now, quantum computer engineers and scientists have worked with a proof-of-concept model of quantum processors by demonstrating the control of only a handful of qubits.

[…]

“Up until this point, controlling electron spin qubits relied on us delivering microwave magnetic fields by putting a current through a wire right beside the qubit,” Dr. Pla says.

“This poses some real challenges if we want to scale up to the millions of qubits that a quantum computer will need to solve globally significant problems, such as the design of new vaccines.

“First off, the magnetic fields drop off really quickly with distance, so we can only control those qubits closest to the wire. That means we would need to add more and more wires as we brought in more and more qubits, which would take up a lot of real estate on the chip.”

And since the chip must operate at freezing cold temperatures, below -270°C, Dr. Pla says introducing more wires would generate way too much heat in the chip, interfering with the reliability of the qubits.

[…]

Rather than having thousands of control wires on the same thumbnail-sized silicon chip that also needs to contain millions of qubits, the team looked at the feasibility of generating a magnetic field from above the chip that could manipulate all of the qubits simultaneously.

[…]

Dr. Pla and the team introduced a new component directly above the silicon chip—a crystal prism called a dielectric resonator. When microwaves are directed into the resonator, it focuses the wavelength of the microwaves down to a much smaller size.

“The dielectric resonator shrinks the wavelength down below one millimeter, so we now have a very efficient conversion of microwave power into the magnetic field that controls the spins of all the qubits.

“There are two key innovations here. The first is that we don’t have to put in a lot of power to get a strong driving field for the qubits, which crucially means we don’t generate much heat. The second is that the field is very uniform across the chip, so that millions of qubits all experience the same level of control.”
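As a rough sanity check on the “below one millimeter” claim (the frequency and permittivity here are illustrative guesses, not figures from the article), the wavelength inside a dielectric shrinks with the square root of its relative permittivity:

$$\lambda_d = \frac{c}{f\sqrt{\varepsilon_r}} \approx \frac{3\times 10^8\ \mathrm{m/s}}{17\times 10^9\ \mathrm{Hz}\cdot\sqrt{300}} \approx 1\ \mathrm{mm}$$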

[…]

Source: Engineers make critical advance in quantum computer design

The End Of Ownership: How Big Companies Are Trying To Turn Everyone Into Renters

We’ve talked a lot on Techdirt about the end of ownership, and how companies have increasingly been reaching deep into products that you thought you bought to modify them… or even destroy them. Much of this originated in the copyright space, in which modern copyright law (somewhat ridiculously) gave the power to copyright holders to break products that people had “bought.” Of course, the legacy copyright players like to conveniently change their language on whether or not you’re buying something or simply “licensing” it temporarily based on what’s most convenient (i.e., what makes them the most money) at the time.

Over at the Nation, Maria Bustillos recently wrote about how legacy companies — especially in the publishing world — are trying to take away the concept of book ownership and only let people rent books. A little over a year ago, picking up an idea first highlighted by law professor Brian Frye, we highlighted how much copyright holders want to be landlords. They don’t want to sell products to you. They want to retain an excessive level of control and power over it — and to make you keep paying for stuff you thought you bought. They want those monopoly rents.

As Bustillos points out, the copyright holders are making things disappear, including “ownership.”

Maybe you’ve noticed how things keep disappearing—or stop working—when you “buy” them online from big platforms like Netflix and Amazon, Microsoft and Apple. You can watch their movies and use their software and read their books—but only until they decide to pull the plug. You don’t actually own these things—you can only rent them. But the titanic amount of cultural information available at any given moment makes it very easy to let that detail slide. We just move on to the next thing, and the next, without realizing that we don’t—and, increasingly, can’t—own our media for keeps.

And while most of the focus on this space has been around music and movies, it’s happening to books as well:

Unfortunately, today’s mega-publishers and book distributors have glommed on to the notion of “expiring” media, and they would like to normalize that temporary, YouTube-style notion of a “library.” That’s why, last summer, four of the world’s largest publishers sued the Internet Archive over its National Emergency Library, a temporary program of the Internet Archive’s Open Library intended to make books available to the millions of students in quarantine during the pandemic. Even though the Internet Archive closed the National Emergency Library in response to the lawsuit, the publishers refused to stand down; what their lawsuit really seeks is the closing of the whole Open Library, and the destruction of its contents. (The suit is ongoing and is expected to resume later this year.) A close reading of the lawsuit indicates that what these publishers are looking to achieve is an end to the private ownership of books—not only for the Internet Archive but for everyone.

[…]

The big publishers and other large copyright holders always insist that they’re “protecting artists.” That’s almost never the case. They regularly destroy and suppress creativity and art with their abuse of copyright law. Culture shouldn’t have to be rented, especially when the landlords don’t care one bit about the underlying art or cultural impact.

Source: The End Of Ownership: How Big Companies Are Trying To Turn Everyone Into Renters | Techdirt

Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos

[…] In “Pretty Good Phone Privacy,” [PDF] a paper scheduled to be presented on Thursday at the Usenix Security Symposium, Schmitt and Barath Raghavan, assistant professor of computer science at the University of Southern California, describe a way to re-engineer the mobile network software stack so that it doesn’t betray the location of mobile network customers.

“It’s always been thought that since cell towers need to talk to phones then all users have to accept the status quo in which mobile operators track our every movement and sell the data to data brokers (as has been extensively reported),” said Schmitt. “We show how it’s possible to protect users’ mobile privacy while at the same time providing normal connectivity, and to do so without changing any of the hardware in mobile networks.”

In recent years, mobile carriers have been routinely selling and leaking location data, to the detriment of customer privacy. Efforts to alter the status quo have been hampered by an uneven regulatory landscape, the resistance of data brokers that profit from the status quo, and the assumption that cellular network architecture requires knowing where customers are located.

[…]

The purpose of Pretty Good Phone Privacy (PGPP) is to avoid using a unique identifier for authenticating customers and granting access to the network. It’s a technology that allows a Mobile Virtual Network Operator (MVNO) to issue SIM cards with identical SUPIs for every subscriber because the SUPI is only used to assess the validity of the SIM card. The PGPP network can then assign an IP address and a GUTI (Globally Unique Temporary Identifier) that can change in subsequent sessions, without telling the MVNO where the customer is located.

“We decouple network connectivity from authentication and billing, which allows the carrier to run Next Generation Core (NGC) services that are unaware of the identity or location of their users but while still authenticating them for network use,” the paper explains. “Our architectural change allows us to nullify the value of the user’s SUPI, an often targeted identifier in the cellular ecosystem, as a unique identifier.”
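A toy sketch of that decoupling, as I read it (my own simplification; the identifiers, token scheme, and function names are all invented, not the paper's API):

```python
# Toy PGPP-style attach: one SUPI shared by every SIM proves only
# "valid subscriber"; a throwaway GUTI handles routing per session;
# billing rides on anonymous prepaid tokens, not identity.
import secrets

SHARED_SUPI = "999-99-PGPP-0000"   # identical on every SIM card
valid_tokens = set()               # sold by the MVNO, tied to no one

def buy_token():
    """Billing step, done out of band and identity-free."""
    t = secrets.token_hex(16)
    valid_tokens.add(t)
    return t

def attach(supi, token):
    """Network attach: verify SIM validity and payment, never identity."""
    if supi != SHARED_SUPI or token not in valid_tokens:
        return None
    valid_tokens.discard(token)            # tokens are single-use
    return "GUTI-" + secrets.token_hex(4)  # fresh throwaway identifier

session = attach(SHARED_SUPI, buy_token())
# The next attach yields a different GUTI, so sessions cannot be
# linked to each other, let alone to a subscriber's location history.
```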

[…]

Its primary focus is defending against the surreptitious sale of location data by network providers.

[…]

Schmitt argues PGPP will help mobile operators comply with current and emerging data privacy regulations in US states like California, Colorado, and Virginia, and post-GDPR rules in Europe.

Source: Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos • The Register

Hackers return around half of stolen $600 million in Poly Network hack

Hackers have returned nearly half of the $600 million they stole in what’s likely to be one of the biggest cryptocurrency thefts ever.

The cybercriminals exploited a vulnerability in Poly Network, a platform that looks to connect different blockchains so that they can work together.

Poly Network disclosed the attack Tuesday and asked to establish communication with the hackers, urging them to “return the hacked assets.”

[…]

In a strange turn of events Wednesday, the hackers began returning some of the funds they stole.

They sent a message to Poly Network embedded in a cryptocurrency transaction saying they were “ready to return” the funds. The DeFi platform responded requesting the money be sent to three crypto addresses.

As of 7 a.m. London time, more than $4.8 million had been returned to the Poly Network addresses. By 11 a.m. ET, about $258 million had been sent back.

[…]

Source: Cryptocurrency theft: Hackers steal $600 million in Poly Network hack

Apple App Store, Google Play Store Targeted by Open App Markets Act

The Open App Markets Act, which is being spearheaded by Sens. Richard Blumenthal and Marsha Blackburn, is designed to crack down on some of the scummiest tactics tech players use to rule their respective app ecosystems, while giving users the power to download the apps they want, from the app stores they want, without retaliation.

“For years, Apple and Google have squashed competitors and kept consumers in the dark—pocketing hefty windfalls while acting as supposedly benevolent gatekeepers of this multibillion-dollar market,” Blumenthal told the Wall Street Journal. As he put it, this bill is tailor-made to “break these tech giants’ ironclad grip, open the app economy to new competitors, and give mobile users more control over their own devices.”

The antitrust issues facing both of these companies—along with fellow tech giants like Facebook and Amazon—have come to a boiling point on Capitol Hill over the past year. We’ve seen lawmakers roll out bill after bill meant to target some of the most lucrative monopolies these companies hold: Amazon’s marketplace, Facebook’s collection of platforms, and, of course, Apple and Google’s respective app stores. Last month, three dozen state attorneys general levied a fresh antitrust suit against Google for the Play Store fees forced on app developers. Meanwhile, Apple is still in a heated legal battle with Epic Games over its own mandated commissions, which can take up to 30% from every in-app purchase users make.

Blumenthal and Blackburn target these fees specifically. The bill would prohibit app stores from requiring that developers use their payment systems, for example. It would also prevent app stores from retaliating against developers who try to implement payment systems of their own, which is the exact scenario that got Epic booted from the App Store last summer.

On top of this, the bill would require that devices allow app sideloading by default. Google’s allowed this practice for a while, but this month started taking steps to narrow the publishing formats developers could use. Apple hardware, meanwhile, has never been sideload-friendly—a choice that’s meant to uphold the “privacy initiatives” baked into the App Store, according to Apple CEO Tim Cook.

Here are some other practices outlawed by the Open App Markets Act: Apple, Google, or any other app store owner would be barred from using a developer’s proprietary app intel to develop their own competing product. They’d also be barred from applying ranking algorithms that rank their own apps over those of their competitors. Users, meanwhile, would (finally) need to be given choices of the app store they can use on their device, instead of being pigeonholed into Apple’s App Store or Google’s Play Store.

Like all bills, this new legislation still needs to go through the regulatory churn before it has any hope of passing, and it might look like a very different set of rules by the time it finally does. But at this point, antitrust action is going to come for these companies whether they like it or not.

Source: Apple App Store, Google Play Store Targeted by Open App Markets Act

I have been talking about this since early in 2019 and it’s great to see all the action around this.

Amazon Drops Policy claiming ownership of Games made by employees After Work Hours

Amazon.com Inc. withdrew a set of staff guidelines that claimed ownership rights to video games made by employees after work hours and dictated how they could distribute them, according to a company email reviewed by Bloomberg.

[…]

The old policies mandated that employees of the games division who were moonlighting on projects would need to use Amazon products, such as Amazon Web Services, and sell their games on Amazon digital stores. It also gave the company “a royalty free, worldwide, fully paid-up, perpetual, transferable license” to intellectual property rights of any games developed by its employees.

[…]

The games division has struggled practically since its inception in 2012 and can hardly afford another reputational hit. It has never released a successful game, and some current and former employees have placed the blame with Frazzini. Bloomberg reported in January that Frazzini had hired veteran game developers and executives but largely dismissed or ignored their advice.

Source: Amazon Drops ‘Draconian’ Policy on Making Games After Work Hours – Bloomberg

So tbh, if they can’t make games during work hours, what difference does it make that their incompetence after work hours can’t be sold outside of Amazon? Or are the employees ripping the Amazon Games division off?

China stops networked vehicle data going offshore under new infosec rules

China has drafted new rules required of its autonomous and networked vehicle builders.

Data security is front and centre in the rules, with manufacturers required to store data generated by cars – and describing their drivers – within China. Data is allowed to go offshore, but only after government scrutiny.

Manufacturers are also required to name a chief of network security, who gets the job of ensuring autonomous vehicles can’t fall victim to cyber attacks. Made-in-China auto-autos are also required to be monitored to detect security issues.

Over-the-air upgrades are another requirement, with vehicle owners to be offered verbose information about the purpose of software updates, the time required to install them, and the status of upgrades.

Behind the wheel, drivers must be informed about the vehicle’s capabilities and the responsibilities that rest on their human shoulders. All autonomous vehicles will be required to detect when a driver’s hands leave the wheel, and to detect when it’s best to cede control to a human.

If an autonomous vehicle’s guidance systems fail, it must be able to hand back control.

[…]

Source: China stops networked vehicle data going offshore under new infosec rules • The Register

And again China is doing what the EU and US should be doing to a certain extent.

Have you made sure you have changed these Google Pay privacy settings?

Google Pay is an online payment system and digital wallet that makes it easy to buy anything on, or with, your mobile device. But if you’re concerned about what Google is doing with all your data (which you probably should be), Google doesn’t make it easy to manage: Google Pay keeps some of its privacy settings on a hidden page.


A report from Bleeping Computer shows that privacy settings aren’t available through the main Google Pay setting page that is accessible through the navigation sidebar.

The URL for that settings page is:

https://pay.google.com/payments/u/0/home#settings


On that page, users can change general settings like address and payment information.

But if users want to change privacy settings, they have to go to a separate page:

https://pay.google.com/payments/u/0/home?page=privacySettings#privacySettings


On that screen, users can adjust all the same settings available on the other settings page, but they can also address three additional privacy settings—controlling whether Google Pay is allowed to share account information, personal information, and creditworthiness.

Here’s the full language of those three options:

-Allow Google Payment Corporation to share third party creditworthiness information about you with other companies owned and controlled by Google LLC for their everyday business purposes.

-Allow your personal information to be used by other companies owned and controlled by Google LLC to market to you. Opting out here does not impact whether other companies owned and controlled by Google LLC can market to you based on information you provide to them outside of Google Payment Corporation.

-Allow Google LLC or its affiliates to inform a third party merchant, whose site or app you visit, whether you have a Google Payments account that can be used for payment to that merchant. Opting out may impact your ability to use Google Payments to transact with certain third party merchants.


According to Bleeping Computer, the default of Google Pay is to enable all the above settings. In order to opt out, users have to go to the special URL that is not accessible through the navigation bar.

As the Reddit post that inspired the Bleeping Computer report claims, this discrepancy makes it appear that Google Pay is hiding its privacy options. “Google is not walking the talk when it claims to make it easy for their users to control the privacy and use of their own data,” the Redditor surmised.

A Google spokesperson told Gizmodo they’re working to make the privacy settings more accessible. “The different settings views described here are an issue resulting from a previous software update and we are working to fix this right away so that these privacy settings are always visible on pay.google.com,” the spokesperson told Gizmodo.

“All users are currently able to access these privacy settings via the ‘Google Payments privacy settings page’ link in the Google Pay privacy notice.”

In the meantime, here’s that link again for the privacy settings. Go ahead and uncheck those three boxes, if you feel so inclined.

Source: How To Find Google Pay’s Hidden Privacy Settings

Here’s hoping that my bank can set up its own version of Google Pay instead of integrating with it. I definitely don’t want Google or Apple getting their grubby little paws on my financial data.

Create virtual cards to pay with online with Privacy

* Protect your card details and your money by creating virtual cards at each place you spend online, or for each purchase

* Create single-use cards that close themselves automatically

* Browser extension to create and auto-fill card numbers at checkout

Privacy Cards put the control in your hands when you make a purchase online. Business or personal, one-time or subscription, now you decide who can charge your card, how much, how often, and you can close a card any time

Source: Privacy – Smarter Payments

Post-implementation review of the repeal of section 52 of the CDPA 1988 and associated amendments – Call for views – GOV.UK

The Copyright, Designs and Patents Act 1988 (CDPA) sets the term of protection for works protected by copyright. For artistic works, the term of protection is the life of the author plus 70 years. For more information on the term of copyright, see our Copyright Notice: Duration of copyright (term) on this subject. Section 52 CDPA previously reduced the term of copyright for industrially manufactured artistic works to 25 years.

In 2011, a judgment was made by the Court of Justice of the European Union (CJEU) in relation to copyright for design works. The government concluded that section 52 CDPA should be repealed to provide equal protection for all types of artistic work. This repeal was included in the Enterprise and Regulatory Reform Act 2013. The main copyright works affected were works of artistic craftsmanship. The primary types of work believed to be in scope were furniture, jewellery, ceramics, lighting and other homewares. This would be both the 3D manufacture and retail and the 2D representation in publishing.

[…]

The Copyright (Amendment) Regulations 2016 came into force on 6 April 2017. They amended Schedule 1 CDPA to allow works made before 1957 to attract copyright protection, whatever their separate design status. They also removed a compulsory licensing provision for works with revived copyright from the Duration of Copyright and Rights in Performances Regulations 1995 (1995 Regulations). Existing compulsory licences which had agreed a royalty or remuneration with the rights holder could continue. The relevant documents can be found in the Changes to Schedule 1 CDPA and duration of Copyright Regulations consultation.

[…]

Source: Post-implementation review of the repeal of section 52 of the CDPA 1988 and associated amendments – Call for views – GOV.UK

So if you are interested in copyright in the UK, make sure you fill in the questions at the bottom of the link and email them!

AI algorithms uncannily good at spotting your race from medical scans

Neural networks can correctly guess a person’s race just by looking at x-rays of their bodies, and researchers have no idea how they can tell.

There are biological features that can give clues to a person’s ethnicity, like the colour of their eyes or skin. But beneath all that, it’s difficult for humans to tell. That’s not the case for AI algorithms, according to a study that’s not yet been peer reviewed.

A team of researchers trained five different models on x-rays of different parts of the body, including the chest and hands, and then labelled each image according to the patient’s race. The machine learning systems were then tested on how well they could predict someone’s race given just their medical scans.

They were surprisingly accurate. The worst-performing model was able to predict the right answer 80 per cent of the time, and the best was able to do so 99 per cent of the time, according to the paper.
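The protocol is simple to state: fit a classifier on scans labelled with self-reported race, then score it on held-out scans. A bare-bones sketch of that evaluation loop (synthetic arrays stand in for real x-rays, and logistic regression stands in for the study's deep networks):

```python
# Bare-bones version of the study's protocol: train on labelled
# scans, measure accuracy on a held-out split. Synthetic noise
# stands in for x-ray pixels, so expect chance-level accuracy here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_scans, n_pixels = 1000, 32 * 32
scans = rng.normal(size=(n_scans, n_pixels))   # stand-in images
race = rng.integers(0, 4, size=n_scans)        # stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(
    scans, race, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.1%}")
# Random noise hovers near 25% (chance, four classes); the alarming
# finding is that real scans reached 80-99%.
```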

“We demonstrate that medical AI systems can easily learn to recognise racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate,” the team warns [PDF].

“We strongly recommend that all developers, regulators, and users who are involved with medical image analysis consider the use of deep learning models with extreme caution. In the setting of x-ray and CT imaging data, patient racial identity is readily learnable from the image data alone, generalises to new settings, and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice.”

Source: AI algorithms uncannily good at spotting your race from medical scans, boffins warn • The Register

Chinese scientists develop world’s strongest glass that’s harder than diamond

Scientists in China have developed the hardest and strongest glassy material known so far that can scratch diamond crystals with ease.

The researchers, including those from Yanshan University in China, noted that the new material – tentatively named AM-III – has “outstanding” mechanical and electronic properties, and could find applications in solar cells due to its “ultra-high” strength and wear resistance.

Analysis of the material, published in the journal National Science Review, revealed that its hardness reached 113 gigapascals (GPa) while natural diamond stone usually scores 50 to 70 on the same test.

[…]

Using fullerenes, which are materials made of hollow football-like arrangements of carbon atoms, the researchers produced different types of glassy materials with varying molecular organisation among which AM-III had the highest order of atoms and molecules.

To achieve this order of molecules, the scientists crushed and blended the fullerenes together, gradually applying intense heat and pressure of about 25 GPa and 1,200 degrees Celsius in an experimental chamber for about 12 hours, spending an equal amount of time cooling the material.

[…]


Source: Chinese scientists develop world’s strongest glass that’s as hard as diamond | The Independent

Ancestry.com Gave Itself the Rights to Your Family Photos

The Blackstone-owned genealogy giant Ancestry.com raised a ton of red flags earlier this month with an update to its terms and conditions that give the company a bit more power over your family photos. From here on out, the August 3 update reads, Ancestry can use these pics for any reason, at any time, forever.

[…]

By submitting User Provided Content through any of the Services, you grant Ancestry a perpetual, sublicensable, worldwide, non-revocable, royalty-free license to host, store, copy, publish, distribute, provide access to, create derivative works of, and otherwise use such User Provided Content to the extent and in the form or context we deem appropriate on or through any media or medium and with any technology or devices now known or hereafter developed or discovered. This includes the right for Ancestry to copy, display, and index your User Provided Content. Ancestry will own the indexes it creates.

[…]

The company also noted that it added a helpful clause to clarify that, yes, deleting your documents from Ancestry’s site would also remove any rights Ancestry holds over them. But there’s a catch: if any other Ancestry users copied or saved your content, then Ancestry still holds those rights until these other users delete your documents, too.

[…]

Source: Ancestry.com Gave Itself the Rights to Your Family Photos

Cross-Chain DeFi Site Poly Network Hacked; Hundreds of Millions Potentially Lost

Cross-chain decentralized finance (DeFi) platform Poly Network was attacked on Tuesday, with the alleged hacker draining roughly $600 million in crypto.

Poly Network, a protocol launched by the founder of Chinese blockchain project Neo, operates on the Binance Smart Chain, Ethereum and Polygon blockchains. Tuesday’s attack struck each chain consecutively, with the Poly team identifying three addresses where stolen assets were transferred.

At the time that Poly tweeted news of the attack, the three addresses collectively held more than $600 million in different cryptocurrencies, including USDC, wrapped bitcoin (WBTC), wrapped ether (ETH) and shiba inu (SHIB), blockchain scanning platforms show.

[…]

About one hour after Poly announced the hack on Twitter, the hacker tried to move assets including USDT through the Ethereum address into liquidity pool Curve.fi, records show. The transaction was rejected.

Meanwhile, close to $100 million has been moved out of the Binance Smart Chain address in the past 30 minutes and deposited into liquidity pool Ellipsis Finance.

[…]

BlockSec, a China-based blockchain security firm, said in an initial attack analysis report that the hack may have been triggered by the leak of a private key that was used to sign the cross-chain message.

But it also added that another possible reason is a potential bug in Poly’s signing process that may have been “abused” to sign the message.

According to another China-based blockchain security firm, Slowmist, the attackers’ original funds were in monero (XMR), a privacy-centric cryptocurrency, and were then exchanged for BNB, ETH, MATIC and a few other tokens.

The attackers then initiated the attacks on Ethereum, BSC and Polygon blockchains. The finding was supported by Slowmist’s partners, including China-based exchange Hoo.

“Based on the flows of the funds and multiple fingerprint information, it is likely a long-planned, organized, and well-prepared attack,” Slowmist wrote.

[…]

The Poly Network incident shows how nascent cross-chain protocols are particularly vulnerable to attacks. In July, cross-chain liquidity protocol Thorchain suffered two exploits in two weeks. Rari Capital, another cross-chain DeFi protocol, was hit by an attack in May, losing funds worth nearly $11 million in ETH.

[…]

Source: Cross-Chain DeFi Site Poly Network Hacked; Hundreds of Millions Potentially Lost – CoinDesk

Oppo’s latest under-screen camera may finally be capable of good photos – I hate the notch!

Until recently, there was only one smartphone on the market equipped with an under-screen camera: last year’s ZTE Axon 20 5G. Other players such as Vivo, Oppo and Xiaomi had also been testing this futuristic tech, but given the subpar image quality back then, it’s no wonder that phone makers largely stuck with punch-hole cameras for selfies.

Despite much criticism of its first under-screen camera, ZTE worked what it claims to be an improved version into its new Axon 30 5G, which launched in China last week. Coincidentally, today Oppo unveiled its third-gen under-screen camera which, based on a sample shot it provided, appears to be surprisingly promising — no noticeable haziness nor glare. But that was just one photo, of course, so I’ll obviously reserve my final judgement until I get to play with one. Even so, the AI tricks and display circuitry that made this possible are intriguing.

[Image: Oppo's next-gen under-screen camera. Credit: Oppo]

In a nutshell, nothing has changed in terms of how the under-screen camera sees through the screen. Its performance is limited by how much light can travel through the gaps between each OLED pixel. Therefore, AI compensation is still a must. For its latest under-screen camera, Oppo says it trained its own AI engine “using tens of thousands of photos” in order to achieve more accurate corrections on diffraction, white balance and HDR. Hence the surprisingly natural-looking sample shot.


Another noteworthy improvement here lies within the display panel’s consistency. The earlier designs chose to lower the pixel density in the area above the camera, in order to let sufficient light into the sensor. This resulted in a noticeable patch above the camera, which would have been a major turn-off when you watched videos or read fine text on that screen.

But now, Oppo — or the display panel maker, which could be Samsung — figured out a way to boost light transmittance by slightly shrinking each pixel’s geometry above the camera. In other words, we get to keep the same 400-ppi pixel density as the rest of the screen, thus creating a more consistent look.
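For scale: at 400 ppi the pixel pitch is 25.4 mm / 400 ≈ 63.5 µm, roughly the width of a human hair, and that grid of pixels and gaps is what the camera has to see through.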

Oppo added that this is further enhanced by a transparent wiring material, as well as a one-to-one pixel-circuit-to-pixel architecture (instead of two-to-one like before) in the screen area above the camera. The latter promises more precise image control and greater sharpness, with the bonus being a 50-percent longer panel lifespan due to better burn-in prevention.

Oppo didn’t say when or if consumers will get to use its next-gen under-screen camera, but given the timing, I wouldn’t be surprised if this turns out to be the same solution on the ZTE Axon 30 5G. In any case, it would be nice if the industry eventually agreed to dump punch-hole cameras in favor of invisible ones.

Source: Oppo’s latest under-screen camera may finally be capable of good photos | Engadget

WhatsApp head says Apple’s child safety update is a ‘surveillance system’

One day after Apple confirmed plans for new software that will allow it to detect images of child abuse on users’ iCloud photos, Facebook’s head of WhatsApp says he is “concerned” by the plans.

In a thread on Twitter, Will Cathcart called it an “Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control.” He also raised questions about how such a system may be exploited in China or other countries, or abused by spyware companies.

[…]

Source: WhatsApp head says Apple’s child safety update is a ‘surveillance system’ | Engadget

Pots and kettles – but he’s right though. This is a very serious lapse of privacy for Apple

Hundreds of AI tools have been built to catch covid. None of them helped.

[…]

The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.

That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

Not fit for clinical use

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.

[…]

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computer tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use.

[…]

Both teams found that researchers repeated the same basic errors in the way they trained or tested their tools. Incorrect assumptions about the data often meant that the trained models did not work as claimed.

[…]

What went wrong

Many of the problems that were uncovered are linked to the poor quality of the data that researchers used to develop their tools. Information about covid patients, including medical scans, was collected and shared in the middle of a global pandemic, often by the doctors struggling to treat those patients. Researchers wanted to help quickly, and these were the only public data sets available. But this meant that many tools were built using mislabeled data or data from unknown sources.

Driggs highlights the problem of what he calls Frankenstein data sets, which are spliced together from multiple sources and can contain duplicates. This means that some tools end up being tested on the same data they were trained on, making them appear more accurate than they are.
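Catching the most blatant form of this, byte-identical files shared between the training and test splits, takes only a few lines. A minimal sketch (placeholder paths, and exact hashing only, so near-duplicate images would still slip through):

```python
# Minimal train/test leakage check: flag files whose bytes are
# identical in both splits. Near-duplicates need perceptual
# hashing; exact copies are this cheap to catch.
import hashlib
from pathlib import Path

def file_hashes(folder):
    """Map sha256 digest -> file name for every file in a folder."""
    return {
        hashlib.sha256(p.read_bytes()).hexdigest(): p.name
        for p in Path(folder).iterdir() if p.is_file()
    }

train = file_hashes("data/train")   # placeholder paths
test = file_hashes("data/test")
leaked = set(train) & set(test)
print(f"{len(leaked)} test files also appear in the training data")
for digest in leaked:
    print(f"  train:{train[digest]}  ==  test:{test[digest]}")
```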

It also muddies the origin of certain data sets. This can mean that researchers miss important features that skew the training of their models. Many unwittingly used a data set that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. But as a result, the AIs learned to identify kids, not covid.

Driggs’s group trained its own model using a data set that contained a mix of scans taken when patients were lying down and standing up. Because patients scanned while lying down were more likely to be seriously ill, the AI learned wrongly to predict serious covid risk from a person’s position.

In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans. As a result, fonts from hospitals with more serious caseloads became predictors of covid risk.

Errors like these seem obvious in hindsight. They can also be fixed by adjusting the models, if researchers are aware of them. It is possible to acknowledge the shortcomings and release a less accurate, but less misleading model. But many tools were developed either by AI researchers who lacked the medical expertise to spot flaws in the data or by medical researchers who lacked the mathematical skills to compensate for those flaws.

A more subtle problem Driggs highlights is incorporation bias, or bias introduced at the point a data set is labeled. For example, many medical scans were labeled according to whether the radiologists who created them said they showed covid. But that embeds, or incorporates, any biases of that particular doctor into the ground truth of a data set. It would be much better to label a medical scan with the result of a PCR test rather than one doctor’s opinion, says Driggs. But there isn’t always time for statistical niceties in busy hospitals.

[…]

Hospitals will sometimes say that they are using a tool only for research purposes, which makes it hard to assess how much doctors are relying on them. “There’s a lot of secrecy,” she says.

[…]

some hospitals are even signing nondisclosure agreements with medical AI vendors. When she asked doctors what algorithms or software they were using, they sometimes told her they weren’t allowed to say.

How to fix it

What’s the fix? Better data would help, but in times of crisis that’s a big ask. It’s more important to make the most of the data sets we have. The simplest move would be for AI teams to collaborate more with clinicians, says Driggs. Researchers also need to share their models and disclose how they were trained so that others can test them and build on them. “Those are two things we could do today,” he says. “And they would solve maybe 50% of the issues that we identified.”

Getting hold of data would also be easier if formats were standardized, says Bilal Mateen, a doctor who leads the clinical technology team at the Wellcome Trust, a global health research charity based in London.

Another problem Wynants, Driggs, and Mateen all identify is that most researchers rushed to develop their own models, rather than working together or improving existing ones. The result was that the collective effort of researchers around the world produced hundreds of mediocre tools, rather than a handful of properly trained and tested ones.

“The models are so similar—they almost all use the same techniques with minor tweaks, the same inputs—and they all make the same mistakes,” says Wynants. “If all these people making new models instead tested models that were already available, maybe we’d have something that could really help in the clinic by now.”

In a sense, this is an old problem with research. Academic researchers have few career incentives to share work or validate existing results. There’s no reward for pushing through the last mile that takes tech from “lab bench to bedside,” says Mateen.

To address this issue, the World Health Organization is considering an emergency data-sharing contract that would kick in during international health crises.

[…]

Source: Hundreds of AI tools have been built to catch covid. None of them helped. | MIT Technology Review

Pfizer Hikes Price of Covid-19 Vaccine by 25% in Europe

Pfizer is raising the price of its covid-19 vaccine in Europe by over 25% under a newly negotiated contract with the European Union, according to a report from the Financial Times. Competitor Moderna is also hiking the price of its vaccine in Europe by roughly 10%.

Pfizer’s covid-19 vaccine is already expected to generate the most revenue of any drug in a single year—about $33.5 billion for 2021 alone, according to the pharmaceutical company’s own estimates. But the company says it’s providing poorer countries the vaccine at a highly discounted price.

Pfizer previously charged the European Union €15.50 per dose for its vaccine ($18.40), which is based on new mRNA technology. The company will now charge €19.50 ($23.15) for 2.1 billion doses that will be delivered through the year 2023, according to the Financial Times.
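Those figures square with the headline number: €19.50 / €15.50 ≈ 1.26, a rise of just under 26%.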

Moderna previously charged the EU $22.60 per dose but will now get $25.50 per dose. That new price is actually lower than first anticipated, according to the Financial Times, because the EU adjusted its initial order to get more doses.

[…]

While most drug companies like Pfizer and Moderna are selling their covid-19 vaccines at a profit—even China’s Sinovac vaccine is being sold to make money—the UK’s AstraZeneca vaccine is being sold at cost. But AstraZeneca has suffered from poor press after a few dozen people around the world died from blood clots believed to be related to the British vaccine. As it turns out, Pfizer’s blood clot risk is “similar” to AstraZeneca’s according to a new study, and your risk of dying from covid-19 is much higher than your risk of dying from any vaccine.

[…]

“The Pfizer-BioNTech covid-19 vaccine contributed $7.8 billion in global revenues during the second quarter, and we continue to sign agreements with governments around the world,” Pfizer CEO Albert Bourla said last week.

But Bourla was careful to note that Pfizer is providing the vaccine at discounted rates for poorer countries.

“We anticipate that a significant amount of our remaining 2021 vaccine manufacturing capacity will be delivered to middle- and low-income countries where we price in line with income levels or at a not-for-profit price,” Bourla said.

“In fact, we are on track to deliver on our commitment to provide this year more than one billion doses, or approximately 40% of our total production, to middle- and low-income countries, and another one billion in 2022,” Bourla continued.

Source: Pfizer Hikes Price of Covid-19 Vaccine by 25% in Europe

Incredible that this amount of profit can be generated through need. These vaccines should have been taken up and mass produced in India or wherever and thrown around the entire world for the safety of all the people living in it.

Hackers leak full EA data after failed extortion attempt

The hackers who breached Electronic Arts last month have released the entire cache of stolen data after failing to extort the company and later sell the stolen files to a third-party buyer.

The data, dumped on an underground cybercrime forum on Monday, July 26, is now being widely distributed on torrent sites.

According to a copy of the dump obtained by The Record, the leaked files contain the source code of the FIFA 21 soccer game, including tools to support the company’s server-side services.

[…]


Source: Hackers leak full EA data after failed extortion attempt – The Record by Recorded Future