Oppo’s latest under-screen camera may finally be capable of good photos – I hate the notch!

Until recently, there was only one smartphone on the market equipped with an under-screen camera: last year’s ZTE Axon 20 5G. Other players such as Vivo, Oppo and Xiaomi had also been testing this futuristic tech, but given the subpar image quality back then, it’s no wonder that phone makers largely stuck with punch-hole cameras for selfies.

Despite much criticism of its first under-screen camera, ZTE worked what it claims to be an improved version into its new Axon 30 5G, which launched in China last week. Coincidentally, today Oppo unveiled its third-gen under-screen camera which, based on a sample shot it provided, appears to be surprisingly promising — no noticeable haziness or glare. But that was just one photo, of course, so I’ll obviously reserve my final judgement until I get to play with one. Even so, the AI tricks and display circuitry that made this possible are intriguing.

Image: Oppo’s next-gen under-screen camera (Oppo)

In a nutshell, nothing has changed in terms of how the under-screen camera sees through the screen. Its performance is limited by how much light can travel through the gaps between each OLED pixel. Therefore, AI compensation is still a must. For its latest under-screen camera, Oppo says it trained its own AI engine “using tens of thousands of photos” in order to achieve more accurate corrections on diffraction, white balance and HDR. Hence the surprisingly natural-looking sample shot.
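
To make the diffraction point concrete, here is a minimal, purely illustrative sketch of the kind of correction involved. It is not Oppo’s pipeline: the Gaussian point-spread function (PSF) is a made-up stand-in for the real diffraction pattern of an OLED pixel grid, and Oppo says it learns its corrections from training photos rather than applying a fixed filter like this.

```python
# Illustrative only -- not Oppo's actual method. Light reaching the sensor
# through the gaps between OLED pixels is effectively blurred by diffraction;
# part of that blur can be undone by deconvolution. The PSF below is invented.
import numpy as np
from scipy.signal import fftconvolve

def wiener_deconvolve(image: np.ndarray, psf: np.ndarray, nsr: float = 1e-2) -> np.ndarray:
    """Frequency-domain Wiener deconvolution of a single-channel image."""
    # Embed the PSF in an image-sized array with its centre at the origin so
    # the FFT treats it as a circular convolution kernel.
    kernel = np.zeros_like(image, dtype=float)
    ph, pw = psf.shape
    kernel[:ph, :pw] = psf
    kernel = np.roll(kernel, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)                     # blur transfer function
    G = np.fft.fft2(image)                      # observed (blurred) image
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Toy usage: blur a synthetic scene with the invented PSF, then partially restore it.
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
psf /= psf.sum()

scene = np.zeros((64, 64))
scene[20:40, 20:40] = 1.0                       # a bright square "subject"
blurred = fftconvolve(scene, psf, mode="same")
restored = wiener_deconvolve(blurred, psf)
```

A learned model can additionally handle the white balance and HDR corrections mentioned above, which a fixed filter like this cannot.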

Another noteworthy improvement here lies within the display panel’s consistency. The earlier designs chose to lower the pixel density in the area above the camera, in order to let sufficient light into the sensor. This resulted in a noticeable patch above the camera, which would have been a major turn-off when you watched videos or read fine text on that screen.

But now, Oppo — or the display panel maker, which could be Samsung — figured out a way to boost light transmittance by slightly shrinking each pixel’s geometry above the camera. In other words, we get to keep the same 400-ppi pixel density as the rest of the screen, thus creating a more consistent look.

Oppo added that this is further enhanced by a transparent wiring material, as well as a one-to-one pixel-circuit-to-pixel architecture (instead of two-to-one like before) in the screen area above the camera. The latter promises more precise image control and greater sharpness, with the bonus being a 50-percent longer panel lifespan due to better burn-in prevention.

Oppo didn’t say when or if consumers will get to use its next-gen under-screen camera, but given the timing, I wouldn’t be surprised if this turns out to be the same solution on the ZTE Axon 30 5G. In any case, it would be nice if the industry eventually agreed to dump punch-hole cameras in favor of invisible ones.

Source: Oppo’s latest under-screen camera may finally be capable of good photos | Engadget

WhatsApp head says Apple’s child safety update is a ‘surveillance system’

One day after Apple confirmed plans for new software that will allow it to detect images of child abuse on users’ iCloud photos, Facebook’s head of WhatsApp says he is “concerned” by the plans.

In a thread on Twitter, Will Cathcart called it an “Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control.” He also raised questions about how such a system may be exploited in China or other countries, or abused by spyware companies.

[…]

Source: WhatsApp head says Apple’s child safety update is a ‘surveillance system’ | Engadget

Pots and kettles – but he’s right, though. This is a very serious privacy lapse on Apple’s part.

Hundreds of AI tools have been built to catch covid. None of them helped.

[…]

The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.

That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

Not fit for clinical use

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.

[…]

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use.

[…]

Both teams found that researchers repeated the same basic errors in the way they trained or tested their tools. Incorrect assumptions about the data often meant that the trained models did not work as claimed.

[…]

What went wrong

Many of the problems that were uncovered are linked to the poor quality of the data that researchers used to develop their tools. Information about covid patients, including medical scans, was collected and shared in the middle of a global pandemic, often by the doctors struggling to treat those patients. Researchers wanted to help quickly, and these were the only public data sets available. But this meant that many tools were built using mislabeled data or data from unknown sources.

Driggs highlights the problem of what he calls Frankenstein data sets, which are spliced together from multiple sources and can contain duplicates. This means that some tools end up being tested on the same data they were trained on, making them appear more accurate than they are.
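
As a concrete illustration of that failure mode, here is a hedged sketch (not code from either study): byte-identical duplicates that slip through a merged data set can land on both sides of a train/test split, so deduplicating by content hash before splitting is one simple guard. The file paths and split ratio are hypothetical.

```python
# Hedged sketch of the "Frankenstein data set" problem described above: if the
# same scan appears in both the training and the test split, test accuracy is
# inflated. Deduplicate by content hash *before* splitting.
import hashlib
import random
from pathlib import Path

def content_hash(path: Path) -> str:
    """Hash raw file bytes so byte-identical duplicates collapse to one key."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def dedup_and_split(image_paths: list[Path], test_fraction: float = 0.2, seed: int = 0):
    """Keep one file per unique content hash, then split into train and test sets."""
    unique: dict[str, Path] = {}
    for p in image_paths:
        unique.setdefault(content_hash(p), p)   # first occurrence wins

    files = list(unique.values())
    random.Random(seed).shuffle(files)
    n_test = int(len(files) * test_fraction)
    return files[n_test:], files[:n_test]       # train, test

# Hypothetical usage: scans pooled from several public sources.
# train_files, test_files = dedup_and_split(list(Path("pooled_scans").glob("*.png")))
```

Byte-identical copies are the easy case; re-encoded or cropped versions of the same scan need perceptual hashing or metadata matching to catch.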

It also muddies the origin of certain data sets. This can mean that researchers miss important features that skew the training of their models. Many unwittingly used a data set that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. But as a result, the AIs learned to identify kids, not covid.

Driggs’s group trained its own model using a data set that contained a mix of scans taken when patients were lying down and standing up. Because patients scanned while lying down were more likely to be seriously ill, the AI learned wrongly to predict serious covid risk from a person’s position.
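
One way to catch that kind of shortcut before training, sketched below with hypothetical column names (this is not the Cambridge group’s code), is to check how strongly a nuisance attribute such as patient position predicts the label on its own.

```python
# Hedged sketch: if a metadata attribute (e.g. scanning position) predicts the
# label far better than chance, the images carry an easy proxy signal the model
# may learn instead of actual pathology. DataFrame and columns are hypothetical.
import pandas as pd

def label_rate_by_attribute(df: pd.DataFrame, attribute: str, label: str = "severe_covid") -> pd.Series:
    """Fraction of positive labels within each value of a metadata attribute."""
    return df.groupby(attribute)[label].mean()

# Hypothetical usage:
# meta = pd.read_csv("scan_metadata.csv")           # columns: scan_id, position, severe_covid
# print(label_rate_by_attribute(meta, "position"))  # e.g. lying: 0.71, standing: 0.18
```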

In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans. As a result, fonts from hospitals with more serious caseloads became predictors of covid risk.

Errors like these seem obvious in hindsight. They can also be fixed by adjusting the models, if researchers are aware of them. It is possible to acknowledge the shortcomings and release a less accurate, but less misleading model. But many tools were developed either by AI researchers who lacked the medical expertise to spot flaws in the data or by medical researchers who lacked the mathematical skills to compensate for those flaws.

A more subtle problem Driggs highlights is incorporation bias, or bias introduced at the point a data set is labeled. For example, many medical scans were labeled according to whether the radiologists who created them said they showed covid. But that embeds, or incorporates, any biases of that particular doctor into the ground truth of a data set. It would be much better to label a medical scan with the result of a PCR test rather than one doctor’s opinion, says Driggs. But there isn’t always time for statistical niceties in busy hospitals.

[…]

Hospitals will sometimes say that they are using a tool only for research purposes, which makes it hard to assess how much doctors are relying on them. “There’s a lot of secrecy,” she says.

[…]

some hospitals are even signing nondisclosure agreements with medical AI vendors. When she asked doctors what algorithms or software they were using, they sometimes told her they weren’t allowed to say.

How to fix it

What’s the fix? Better data would help, but in times of crisis that’s a big ask. It’s more important to make the most of the data sets we have. The simplest move would be for AI teams to collaborate more with clinicians, says Driggs. Researchers also need to share their models and disclose how they were trained so that others can test them and build on them. “Those are two things we could do today,” he says. “And they would solve maybe 50% of the issues that we identified.”

Getting hold of data would also be easier if formats were standardized, says Bilal Mateen, a doctor who leads the clinical technology team at the Wellcome Trust, a global health research charity based in London.

Another problem Wynants, Driggs, and Mateen all identify is that most researchers rushed to develop their own models, rather than working together or improving existing ones. The result was that the collective effort of researchers around the world produced hundreds of mediocre tools, rather than a handful of properly trained and tested ones.

“The models are so similar—they almost all use the same techniques with minor tweaks, the same inputs—and they all make the same mistakes,” says Wynants. “If all these people making new models instead tested models that were already available, maybe we’d have something that could really help in the clinic by now.”

In a sense, this is an old problem with research. Academic researchers have few career incentives to share work or validate existing results. There’s no reward for pushing through the last mile that takes tech from “lab bench to bedside,” says Mateen.

To address this issue, the World Health Organization is considering an emergency data-sharing contract that would kick in during international health crises.

[…]

Source: Hundreds of AI tools have been built to catch covid. None of them helped. | MIT Technology Review

Pfizer Hikes Price of Covid-19 Vaccine by 25% in Europe

Pfizer is raising the price of its covid-19 vaccine in Europe by over 25% under a newly negotiated contract with the European Union, according to a report from the Financial Times. Competitor Moderna is also hiking the price of its vaccine in Europe by roughly 10%.

Pfizer’s covid-19 vaccine is already expected to generate the most revenue of any drug in a single year—about $33.5 billion for 2021 alone, according to the pharmaceutical company’s own estimates. But the company says it’s providing poorer countries the vaccine at a highly discounted price.

Pfizer previously charged the European Union €15.50 per dose for its vaccine ($18.40), which is based on new mRNA technology. The company will now charge €19.50 ($23.15) for 2.1 billion doses that will be delivered through the year 2023, according to the Financial Times.
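
The jump from €15.50 to €19.50 works out to (19.50 − 15.50) / 15.50 ≈ 25.8 percent, consistent with the headline figure of over 25%.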

Moderna previously charged the EU $22.60 per dose but will now get $25.50 per dose. That new price is actually lower than first anticipated, according to the Financial Times, because the EU adjusted its initial order to get more doses.

[…]

While most drug companies like Pfizer and Moderna are selling their covid-19 vaccines at a profit—even China’s Sinovac vaccine is being sold to make money—the UK’s AstraZeneca vaccine is being sold at cost. But AstraZeneca has suffered from poor press after a few dozen people around the world died from blood clots believed to be related to the British vaccine. As it turns out, Pfizer’s blood clot risk is “similar” to AstraZeneca’s, according to a new study, and the risk of dying from covid-19 is much higher than the risk of dying from any vaccine.

[…]

“The Pfizer-BioNTech covid-19 vaccine contributed $7.8 billion in global revenues during the second quarter, and we continue to sign agreements with governments around the world,” Pfizer CEO Albert Bourla said last week.

But Bourla was careful to note that Pfizer is providing the vaccine at discounted rates for poorer countries.

“We anticipate that a significant amount of our remaining 2021 vaccine manufacturing capacity will be delivered to middle- and low-income countries where we price in line with income levels or at a not-for-profit price,” Bourla said.

“In fact, we are on track to deliver on our commitment to provide this year more than one billion doses, or approximately 40% of our total production, to middle- and low-income countries, and another one billion in 2022,” Bourla continued.

Source: Pfizer Hikes Price of Covid-19 Vaccine by 25% in Europe

Incredible that this amount of profit can be generated through need. These vaccines should have been taken up and mass produced in India or wherever and thrown around the entire world for the safety of all the people living in it.

Hackers leak full EA data after failed extortion attempt

The hackers who breached Electronic Arts last month have released the entire cache of stolen data after failing to extort the company and later sell the stolen files to a third-party buyer.

The data, dumped on an underground cybercrime forum on Monday, July 26, is now being widely distributed on torrent sites.

According to a copy of the dump obtained by The Record, the leaked files contain the source code of the FIFA 21 soccer game, including tools to support the company’s server-side services.

[…]

 

Source: Hackers leak full EA data after failed extortion attempt – The Record by Recorded Future

How Google quietly funds Europe’s leading tech policy institutes

A recent scientific paper proposed that, like Big Tobacco in the Seventies, Big Tech thrives on creating uncertainty around the impacts of its products and business model. One of the ways it does this is by cultivating pockets of friendly academics who can be relied on to echo Big Tech talking points, giving them added gravitas in the eyes of lawmakers.

Google highlighted working with favourable academics as a key aim in its strategy, leaked in October 2020, for lobbying the EU’s Digital Markets Act – sweeping legislation that could seriously undermine tech giants’ market dominance if it goes through.

Now, a New Statesman investigation can reveal that over the last five years, six leading academic institutes in the EU have taken tens of millions of pounds of funding from Google, Facebook, Amazon and Microsoft to research issues linked to the tech firms’ business models, from privacy and data protection to AI ethics and competition in digital markets. While this funding tends to come with guarantees of academic independence, it creates an ethical quandary: the subject of the research is often also its primary funder.

 

The New Statesman has also found evidence of an inconsistent approach to transparency, with some senior academics failing to disclose their industry funding. Other academics have warned that the growing dependence on funding from the industry raises questions about how tech firms influence the debate around the ethics of the markets they have created.

The Institute for Ethics in Artificial Intelligence at the Technical University of Munich (TUM), for example, received a $7.5m grant from Facebook in 2019 to fund five years of research, while the Humboldt Institute for Internet and Society in Berlin has accepted almost €14m from Google since it was founded in 2012; the tech giant accounts for a third of the institute’s third-party funding.

Chart: annual funding to the Humboldt Institute by Google and other third-party institutions. The institute is seeking to diversify its funding sources, but still receives millions from Google.

Researchers at Big Tech-funded institutions told the New Statesman they did not feel any outward pressure to be less critical of their university’s benefactors in their research.

But one, who wished to remain anonymous, said Big Tech wielded a subtle influence through such institutions. They said that the companies typically appeared to identify uncritical academics – preferably those with political connections – who perhaps already espoused beliefs aligned with Big Tech. Companies then cultivate relationships with them, sometimes incentivising academics by granting access to sought-after data.

[…]

Luciano Floridi, professor of philosophy and ethics of information at Oxford University’s Internet Institute, is one of the most high-profile and influential European tech policy experts. He has advised the European Commission, the Information Commissioner’s Office, the UK government’s Centre for Data Ethics and Innovation, the Foreign Office, the Financial Conduct Authority and the Vatican.

Floridi is one of the best-connected tech policy experts in Europe, and he is also one of the most highly funded. The ethicist has received funding from Google, DeepMind, Facebook, the Chinese tech giant Tencent and the Japanese IT firm Fujitsu, which developed the infrastructure involved in the Post Office’s Horizon IT scandal.

Chart: funding sources and advisory positions declared by Luciano Floridi, the OII’s digital ethics director and one of Europe’s most influential tech policy experts, in public integrity statements.

Although Floridi is connected to several of the world’s most valuable tech companies, he is especially close to Google. In the mid-2010s the academic was described as the company’s “in-house philosopher”, with his role on the company’s “right to be forgotten” committee. When the Silicon Valley giant launched a short-lived ethics committee to oversee its technology development in 2019, Floridi was among those enlisted.

Last year, Floridi oversaw and co-authored a study that found some alternative and commercial search engines returned more misinformation about healthcare to users than Google. The authors of the pro-Google study didn’t disclose any financial interests, despite Floridi’s long-running relationship with the company.

[…]

Michael Veale, a lecturer in law at University College London, said that beyond influencing independent academics, there are other motives for firms such as Google to fund policy research. “By funding very pedantic academics in an area to investigate the nuances of economics online, you can heighten the amount of perceived uncertainty in things that are currently taken for granted in regulatory spheres,” he told the New Statesman.

[…]

This appears to be the case within competition law as well. “I have noticed several common techniques used by academics who have been funded by Big Tech companies,” said Oles Andriychuk, a senior lecturer in law at Strathclyde University. “They discuss technicalities – very technical arguments which are not wrong, but they either slow down the process, or redirect the focus to issues which are less important, or which blur clarity.”

It is difficult to measure the impact of Big Tech on European academia, but Valletti adds that a possible outcome is to make research less about the details, and more about framing. “Influence is not just distorting the result in favour of [Big Tech],” he said, “but the kind of questions you ask yourself.”

Source: How Google quietly funds Europe’s leading tech policy institutes

Major U.K. science funder to require grantees to make papers immediately free to all

[…]

UK Research and Innovation (UKRI) will expand on existing rules covering all research papers produced from its £8 billion in annual funding. About three-quarters of papers recently published from U.K. universities are open access, and UKRI’s current policy gives scholars two routes to comply: Pay journals for “gold” open access, which makes a paper free to read on the publisher’s website, or choose the “green” route, which allows them to deposit a near-final version of the paper in a public repository after a waiting period of up to one year. Publishers have insisted that an embargo period is necessary to prevent the free papers from peeling away their subscribers.

But starting in April 2022, that yearlong delay will no longer be permitted: Researchers choosing green open access must deposit the paper immediately when it is published. And publishers won’t be able to hang on to the copyright for UKRI-funded papers: The agency will require that the research it funds—with some minor exceptions—be published with a Creative Commons Attribution license (known as CC-BY) that allows for free and liberal distribution of the work.

UKRI developed the new policy because “publicly funded research should be available for public use by the taxpayer,” says Duncan Wingham, the funder’s executive champion for open research. The policy falls closely in line with those issued by other major research funders, including the nonprofit Wellcome Trust—one of the world’s largest nongovernmental funding bodies—and the European Research Council.

The move also brings UKRI’s policy into alignment with Plan S, an effort led by European research funders—including UKRI—to make academic literature freely available to read.

[…]

It clears up some confusion about when UKRI will pay the fees that journals charge for gold open access, he says: never for journals that offer a mix of paywalled and open-access content, unless the journal is part of an agreement to transition to exclusively open access for all research papers. (More than half of U.K. papers are covered by transitional agreements, according to UKRI.)

[…]

Publishers have resisted the new requirements. The Publishers Association, a member organization for the U.K. publishing industry, circulated a document saying the policy would introduce confusion for researchers, threaten their academic freedom, undermine open access, and leave many researchers on the hook for fees for gold open access—which it calls the only viable route for researchers. The publishing giant Elsevier, in a letter sent to its editorial board members in the United Kingdom, said it had been working to shape the policy by lobbying UKRI and the U.K. government, and encouraged members to write in themselves.

[…]

It would not be in the interest of publishers to refuse to publish these green open-access papers, Rooryck says, because the public repository version ultimately drives publicity for publishers. And even with a paper immediately deposited in a public repository, the final “version of record” published behind a paywall will still carry considerable value, Prosser says. Publishers who threaten to reject such papers, Rooryck believes, are simply “saber rattling and posturing.”

Source: Major U.K. science funder to require grantees to make papers immediately free to all | Science | AAAS

It’s pretty bizarre that publicly funded research is hidden behind paywalls – the public that paid for it can’t get to it, and innovation is stifled because the people who need the research can’t get at it either.

Chinese regulators go after price gouging in car chip industry

China’s antitrust watchdog, the State Administration for Market Regulation (SAMR), announced Tuesday that it has started investigating price gouging in the automotive chip market.

The regulatory body promised to strengthen supervision and punish illegal acts such as hoarding, price hikes and collusive price increases. SAMR singled out distributors as the object of its ire.

In the early stages of the COVID-19 pandemic, prices for items such as hand sanitizer, face masks, toilet paper and other health-related items saw startling inflation that required legal intervention.

As the pandemic wore on and work-from-home kit became a necessity, the world saw a new kind of shortage: semiconductors.

The automotive industry was hit particularly hard by the shortage, largely because its procurement practices sent it to the back of the queue. The industry has since endured factory shutdowns and reduced levels of vehicle production – which, given cars have long supply chains, is not the sort of thing anyone needs during difficult economic times.

Chinese entrepreneurs are clearly alive to the opportunities the silicon shortage presents. Last month several Chinese would-be bootleggers were caught smuggling the critical tech with tactics like taping US$123,000 worth of product to their calves and torso or hiding them in their vehicle as they attempted to cross borders.

Analyst firm Gartner has predicted semiconductor shortages will remain moderate to severe for the rest of 2021 and continue until the second quarter of 2022. Taiwanese chipmaker TSMC has said shortages will continue until 2023.

The Register imagines that those that can influence chip prices in China, and elsewhere, will continue to try their luck until demand deflates. Or until SAMR gets a grip on regulation, whichever comes first.

Source: China tightens distributor cap after local outfits hoard automotive silicon then charge silly prices • The Register

The Chinese regulators are doing a way better job than the EU and US at tackling price gouging and monopolies. Maybe the EU and US shouldn’t let big companies’ lobbying determine their courses of action.

Hey, AI software developers, you are taking Unicode into account, right … right?

[…]

The issue is that ambiguity or discrepancies can be introduced if the machine-learning software ignores certain invisible Unicode characters. What’s seen on screen or printed out, for instance, won’t match up with what the neural network saw and made a decision on. It may be possible to abuse this lack of Unicode awareness for nefarious purposes.

As an example, you can get Google Translate’s web interface to turn what looks like the English sentence “Send money to account 4321” into the French “Envoyer de l’argent sur le compte 1234.”

Screenshot: fooling Google Translate with Unicode

This is done by entering on the English side “Send money to account” and then inserting the invisible Unicode glyph 0x202E, which changes the direction of the next text we type in – “1234” – to “4321.” The translation engine ignores the special Unicode character, so on the French side we see “1234,” while the browser obeys the character, so it displays “4321” on the English side.
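
The effect is easy to reproduce locally. The sketch below is illustrative only (it is not Google Translate’s internals); it shows how the stored, logical order of the digits differs from what a bidi-aware renderer displays, along with one simple defence: stripping Unicode bidirectional control characters before the text reaches a model.

```python
# Illustrative sketch of the right-to-left override trick described above.
RLO = "\u202e"   # U+202E RIGHT-TO-LEFT OVERRIDE

text = f"Send money to account {RLO}1234"

print(repr(text))   # logical order, as a naive model sees it: ...account \u202e1234
print(text)         # a bidi-aware terminal or browser displays: ...account 4321

# Simple defence: strip Unicode bidirectional control characters before the
# text reaches a translation or moderation model.
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",   # embeddings / overrides
    "\u2066", "\u2067", "\u2068", "\u2069",             # isolates
}

def strip_bidi(s: str) -> str:
    return "".join(ch for ch in s if ch not in BIDI_CONTROLS)

print(strip_bidi(text))   # "Send money to account 1234" in both logical and displayed form
```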

It may be possible to exploit an AI assistant or a web app using this method to commit fraud, though we present it here in Google Translate to merely illustrate the effect of hidden Unicode characters. A more practical example would be feeding the sentence…

You akU+8re aqU+8 AU+8coward and a fovU+8JU+8ol.

…into a comment moderation system, where U+8 is the invisible Unicode backspace character, which deletes the previous character. The moderation system ignores the backspace characters, sees instead a string of misspelled words, and can’t detect any toxicity – whereas browsers correctly rendering the comment show, “You are a coward and a fool.”
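
Here is a hedged sketch of that moderation example (not any vendor’s actual system) using the exact string above: normalising the backspaces the way a renderer would ensures the model judges the same sentence the reader sees.

```python
# The attacker's string contains junk letters each followed by U+0008
# (backspace). A renderer that honours backspaces shows the toxic sentence,
# while a model that ignores them sees only misspelled words.
BS = "\u0008"

attack = f"You ak{BS}re aq{BS} A{BS}coward and a fov{BS}J{BS}ol."

def apply_backspaces(s: str) -> str:
    """Normalise text the way a backspace-honouring renderer would display it."""
    out: list[str] = []
    for ch in s:
        if ch == BS:
            if out:
                out.pop()      # delete the previous visible character
        else:
            out.append(ch)
    return "".join(out)

print(apply_backspaces(attack))   # "You are a coward and a fool." -- what readers see

# A moderation pipeline should run its checks on the normalised form, or simply
# reject inputs that contain control characters at all.
```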

[…]

It was academics at the University of Cambridge in England and the University of Toronto in Canada who highlighted these issues, laying out their findings in a paper released on arXiv in June this year.

“We find that with a single imperceptible encoding injection – representing one invisible character, homoglyph, reordering, or deletion – an attacker can significantly reduce the performance of vulnerable models, and with three injections most models can be functionally broken,” the paper’s abstract reads.

“Our attacks work against currently deployed commercial systems, including those produced by Microsoft and Google, in addition to open source models published by Facebook and IBM.”

[…]

Source: Hey, AI software developers, you are taking Unicode into account, right … right?