“Your phone’s front camera is always securely looking for your face, even if you don’t touch it or raise to wake it.” That’s how Qualcomm Technologies vice president of product management Judd Heape introduced the company’s new always-on camera capabilities in the Snapdragon 8 Gen 1 processor set to arrive in top-shelf Android phones early next year.
[…]
But for those of us with any sense of how modern technology is used to violate our privacy, a camera on our phone that’s always capturing images even when we’re not using it sounds like the stuff of nightmares and has a cost to our privacy that far outweighs any potential convenience benefits.
Qualcomm’s main pitch for this feature is for unlocking your phone any time you glance at it, even if it’s just sitting on a table or propped up on a stand. You don’t need to pick it up or tap the screen or say a voice command — it just unlocks when it sees your face. I can see this being useful if your hands are messy or otherwise occupied (in its presentation, Qualcomm used the example of using it while cooking a recipe to check the next steps). Maybe you’ve got your phone mounted in your car, and you can just glance over at it to see driving directions without having to take your hands off the steering wheel or leave the screen on the entire time.
[…]
Qualcomm is framing the always-on camera as similar to the always-on microphones that have been in our phones for years. Those are used to listen for voice commands like “Hey Siri” or “Hey Google” (or lol, “Hi Bixby”) and then wake up the phone and provide a response, all without you having to touch or pick up the phone. But the difference is that they are listening for specific wake words and are often limited in what they can do until you actually pick up your phone and unlock it.
It feels a bit different when it’s a camera that’s always scanning for a likeness.
It’s true that smart home products already have features like this. Google’s Nest Hub Max uses its camera to recognize your face when you walk up to it and greet you with personal information like your calendar. Home security cameras and video doorbells are constantly on, looking for activity or even specific faces. But those devices are in your home, not always carried with you everywhere you go, and generally don’t have your most private information stored on them, like your phone does. They also frequently have features like physical shutters to block the camera or intelligent modes to disable recording when you’re home and only resume it when you aren’t. It’s hard to imagine any phone manufacturer putting a physical shutter on the front of their slim and sleek flagship smartphone.
Lastly, there have been many reports of security breaches and social engineering hacks to enable smart home cameras when they aren’t supposed to be on and then send that feed to remote servers, all without the knowledge of the homeowner. Modern smartphone operating systems now do a good job of telling you when an app is accessing your camera or microphone while you’re using the device, but it’s not clear how they’d be able to inform you of a rogue app tapping into the always-on camera.
To be honest, these things are also pretty damn scary! I understand that Americans have been habituated to ubiquitous surveillance, but here in the EU we still value our privacy and don’t like it much at all.
Ultimately, it comes down to a level of trust — do you trust that Qualcomm has set up the system in a way that prevents the always-on camera from being used for other purposes than intended? Do you trust that the OEM using Qualcomm’s chips won’t do things to interfere with the system, either for their own profit or to satisfy the demands of a government entity?
Even if you do have that trust, there’s a certain level of comfort with an always-on camera on your most personal device that goes beyond where we are currently.
Maybe we’ll just start having to put tape on our smartphone cameras like we already do with laptop webcams.
One of the first, and reportedly most widely used, is PredPol, its name an amalgamation of the words “predictive policing.” The software was derived from an algorithm used to predict earthquake aftershocks that was developed by professors at UCLA and released in 2011. By sending officers to patrol the hot spots the algorithm predicts, these programs promise to deter illegal behavior.
But law enforcement critics had their own prediction: that the algorithms would send cops to patrol the same neighborhoods they say police always have, those populated by people of color. Because the software relies on past crime data, they said, it would reproduce police departments’ ingrained patterns and perpetuate racial injustice, covering it with a veneer of objective, data-driven science.
PredPol has repeatedly said those criticisms are off-base. The algorithm doesn’t incorporate race data, which, the company says, “eliminates the possibility for privacy or civil rights violations seen with other intelligence-led or predictive policing models.”
There have been few independent, empirical reviews of predictive policing software because the companies that make these programs have not publicly released their raw data.
A seminal, data-driven study about PredPol published in 2016 did not involve actual predictions. Rather the researchers, Kristian Lum and William Isaac, fed drug crime data from Oakland, California, into PredPol’s open-source algorithm to see what it would predict. They found that it would have disproportionately targeted Black and Latino neighborhoods, despite survey data that shows people of all races use drugs at similar rates.
PredPol’s founders conducted their own research two years later using Los Angeles data and said they found the overall rate of arrests for people of color was about the same whether PredPol software or human police analysts made the crime hot spot predictions. Their point was that their software was not worse in terms of arrests for people of color than nonalgorithmic policing.
However, a study published in 2018 by a team of researchers led by one of PredPol’s founders showed that Indianapolis’s Latino population would have endured “from 200% to 400% the amount of patrol as white populations” had it been deployed there, and its Black population would have been subjected to “150% to 250% the amount of patrol compared to white populations.” The researchers said they found a way to tweak the algorithm to reduce that disproportion but that it would result in less accurate predictions—though they said it would still be “potentially more accurate” than human predictions.
[…]
Other predictive police programs have also come under scrutiny. In 2017, the Chicago Sun-Times obtained a database of the city’s Strategic Subject List, which used an algorithm to identify people at risk of becoming victims or perpetrators of violent, gun-related crime. The newspaper reported that 85% of people that the algorithm saddled with the highest risk scores were Black men—some with no violent criminal record whatsoever.
Last year, the Tampa Bay Times published an investigation analyzing the list of people that were forecast to commit future crimes by the Pasco Sheriff’s Office’s predictive tools. Deputies were dispatched to check on people on the list more than 12,500 times. The newspaper reported that at least one in 10 of the people on the list were minors, and many of those young people had only one or two prior arrests yet were subjected to thousands of checks.
For our analysis, we obtained a trove of PredPol crime prediction data that has never before been released by PredPol for unaffiliated academic or journalistic analysis. Gizmodo found it exposed on the open web (the portal is now secured) and downloaded more than 7 million PredPol crime predictions for dozens of American cities and some overseas locations between 2018 and 2021.
[…]
from Fresno, California, to Niles, Illinois, to Orange County, Florida, to Piscataway, New Jersey. We supplemented our inquiry with Census data, including racial and ethnic identities and household incomes of people living in each jurisdiction—both in areas that the algorithm targeted for enforcement and those it did not target.
Overall, we found that PredPol’s algorithm relentlessly targeted the Census block groups in each jurisdiction that were the most heavily populated by people of color and the poor, particularly those containing public and subsidized housing. The algorithm generated far fewer predictions for block groups with more White residents.
Analyzing entire jurisdictions, we observed that the proportion of Black and Latino residents was higher in the most-targeted block groups and lower in the least-targeted block groups (about 10% of which had zero predictions) compared to the overall jurisdiction. We also observed the opposite trend for the White population: The least-targeted block groups contained a higher proportion of White residents than the jurisdiction overall, and the most-targeted block groups contained a lower proportion.
[…]
We also found that PredPol’s predictions often fell disproportionately in places where the poorest residents live.
[…]
To try to determine the effects of PredPol predictions on crime and policing, we filed more than 100 public records requests and compiled a database of more than 600,000 arrests, police stops, and use-of-force incidents. But most agencies refused to give us any data. Only 11 provided at least some of the necessary data.
For the 11 departments that provided arrest data, we found that rates of arrest in predicted areas remained the same whether PredPol predicted a crime that day or not. In other words, we did not find a strong correlation between arrests and predictions. (See the Limitations section for more information about this analysis.)
We do not definitively know how police acted on any individual crime prediction because we were refused that data by nearly every police department.
[…]
Overall, our analysis suggests that the algorithm, at best, reproduced how officers have been policing, and at worst, would reinforce those patterns if its policing recommendations were followed.
If you’re a fan of aerosol spray antiperspirants and deodorants, you’re going to want to check to see whether the one you use is part of a voluntary recall issued by Procter & Gamble (P&G).
The recall comes after a citizen’s petition filed with the U.S. Food and Drug Administration (FDA) last month that claims more than half of the batches of antiperspirant and deodorant sprays they tested contained benzene—a chemical that, when found at high levels, can cause cancer. Here’s what you need to know.
[…]
They found that out of the 108 batches of products tested, 59 (or 54%) of them had levels of benzene exceeding the 2 parts per million permitted by the FDA.
[…]
Valisure’s tests included 30 different brands, but according to CNN, P&G is the only company to issue a recall for its products containing benzene; specifically, the recall covers 17 types of Old Spice and Secret antiperspirant.
The full list of products Valisure tested and found to contain more than 2 parts per million of benzene can be found on the company’s petition to the FDA. Examples include products from other familiar brands like Tag, Sure, Equate, Suave, Right Guard, Brut, Summer’s Eve, Power Stick, Soft & Dri, and Victoria’s Secret.
If you have purchased any of the Old Spice or Secret products included in P&G’s recall, the company instructs consumers to stop using them, throw them out, and contact its customer care team (at 888-339-7689, Monday through Friday, 9 a.m. – 6 p.m. EST) to learn how to be reimbursed for eligible products.
Blockchain startup MonoX Finance said on Wednesday that a hacker stole $31 million by exploiting a bug in software the service uses to draft smart contracts.
The company uses a decentralized finance protocol known as MonoX that lets users trade digital currency tokens without some of the requirements of traditional exchanges. “Project owners can list their tokens without the burden of capital requirements and focus on using funds for building the project instead of providing liquidity,” MonoX company representatives say here. “It works by grouping deposited tokens into a virtual pair with vCASH, to offer a single token pool design.”
An accounting error built into the company’s software let an attacker inflate the price of the MONO token and then use it to cash out all the other deposited tokens, MonoX Finance revealed in a post. The haul amounted to $31 million worth of tokens on the Ethereum or Polygon blockchains, both of which are supported by the MonoX protocol.
Specifically, the hack used the same token as both the tokenIn and tokenOut, the parameters that specify which token is exchanged for which in a swap. MonoX updates prices after each swap by calculating new prices for both tokens. When the swap is completed, the price of tokenIn—that is, the token sent by the user—decreases and the price of tokenOut—or the token received by the user—increases.
By using the same token for both tokenIn and tokenOut, the hacker greatly inflated the price of the MONO token because the updating of the tokenOut overwrote the price update of the tokenIn. The hacker then exchanged the token for $31 million worth of tokens on the Ethereum and Polygon blockchains.
There’s no practical reason for exchanging a token for the same token, and therefore the software that conducts trades should never have allowed such transactions. Alas, it did, despite MonoX receiving three security audits this year.
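To make the overwrite concrete, here is a minimal Python sketch of the flawed bookkeeping described above. It is an illustration only, not MonoX’s actual Solidity contract; the pool class, price-adjustment factors, and numbers are invented for the example.

```python
# Illustrative sketch of the price-update flaw described above.
# This is NOT MonoX's contract code; names and numbers are invented.

class ToyPool:
    def __init__(self):
        # token -> price in vCASH (toy starting values)
        self.price = {"MONO": 1.0, "WETH": 4000.0}

    def swap(self, token_in, amount_in, token_out):
        """Swap amount_in of token_in for token_out at current prices."""
        amount_out = amount_in * self.price[token_in] / self.price[token_out]

        # Vulnerable bookkeeping: both new prices are computed up front...
        new_in_price = self.price[token_in] * 0.95    # toy markdown for the sold token
        new_out_price = self.price[token_out] * 1.05  # toy markup for the bought token
        self.price[token_in] = new_in_price
        # ...and the token_out write runs last. If token_in == token_out,
        # this second write OVERWRITES the markdown, so the price only goes up.
        self.price[token_out] = new_out_price
        return amount_out


pool = ToyPool()
for _ in range(50):               # repeated self-swaps of MONO for MONO
    pool.swap("MONO", 1000, "MONO")
print(pool.price["MONO"])         # inflated by ~1.05**50, roughly 11.5x

# The missing guard: a swap function should simply reject token_in == token_out.
```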
[…]
Blockchain researcher Igor Igamberdiev took to Twitter to break down the makeup of the drained tokens. Tokens included $18.2 million in Wrapped Ethereum, $10.5 million in MATIC tokens, and $2 million worth of WBTC. The haul also included smaller amounts of tokens for Wrapped Bitcoin, Chainlink, Unit Protocol, Aavegotchi, and Immutable X.
Only the latest DeFi hack
MonoX isn’t the only decentralized finance protocol to fall victim to a multimillion-dollar hack. In October, Indexed Finance said it lost about $16 million in a hack that exploited the way it rebalances index pools. Earlier this month, blockchain-analysis company Elliptic said so-called DeFi protocols have lost $12 billion to date due to theft and fraud. Losses in the first roughly 10 months of this year reached $10.5 billion, up from $1.5 billion in 2020.
SpaceX employees received a nightmare email over the holiday weekend from CEO Elon Musk, warning them of a brewing crisis with its Raptor engine production that, if unsolved, could result in the company’s bankruptcy. The email, obtained by Space Explored, CNBC, and The Verge, urged employees to work over the weekend in a desperate attempt to increase production of the engine meant to power its next-generation Starship launch vehicle.
“Unfortunately, the Raptor production crisis is much worse than it seemed a few weeks ago,” Musk reportedly wrote. “As we have dug into the issues following exiting prior senior management, they have unfortunately turned out to be far more severe than was reported. There is no way to sugarcoat this.”
[…]
In his email, Musk advised workers to cut their holiday weekend short and called for an “all hands on deck to recover from what is, quite frankly, a disaster.” Summing up the problem, Musk warned the company could face bankruptcy if it could not get Starship flights running once every two weeks in 2022. If all of this sounds familiar, that’s because Musk has previously spoken publicly about times when both SpaceX and Tesla were on the verge of bankruptcy in their early years. More recently, Musk claimed Tesla came within “single digits” of bankruptcy as recently as 2018.
[…]
The alarming news comes near the close of what’s been an otherwise stellar year for SpaceX. In 11 months SpaceX managed to launch 25 successful Falcon 9 missions, sent a dozen astronauts to space and drew a roadmap to mass commercialization with its Starlink satellite internet service.
Finland is working to stop a flood of text messages of an unknown origin that are spreading malware.
The messages with malicious links to malware called FluBot number in the millions, according to Aino-Maria Vayrynen, information security specialist at the National Cyber Security Centre. Telia Co AB, the country’s second-biggest telecommunications operator, has intercepted some hundreds of thousands of messages.
“The malware attack is extremely exceptional and very worrying,” Teemu Makela, chief information security officer at Elisa Oyj, the largest telecoms operator, said by phone. “Considerable numbers of text messages are flying around.”
The messages started beeping on Finns’ mobiles late last week, prompting the National Cyber Security Centre to issue a “severe alert.” The campaign is worse than a previous bout of activity in the summer, Antti Turunen, fraud manager at Telia, said.
Many of the messages claim that the recipient has received a voice mail, asking them to open a link. On Android devices, that brings up a prompt requesting that the user allow installation of an application that contains the malware; on Apple Inc.’s iPhones, users are taken to other fraudulent material on the website, authorities said.
Tricking users into visiting a malicious webpage could allow malicious people to compromise 150 models of HP multi-function printers, according to F-Secure researchers.
The Finland-headquartered infosec firm said it had found “exploitable” flaws in the HP printers that allowed attackers to “seize control of vulnerable devices, steal information, and further infiltrate networks in pursuit of other objectives such as stealing or changing other data” – and, inevitably, “spreading ransomware.”
“In all likelihood, a lot of companies are using these vulnerable devices,” said F-Secure researchers Alexander Bolshev and Timo Hirvonen.
“To make matters worse, many organizations don’t treat printers like other types of endpoints. That means IT and security teams forget about these devices’ basic security hygiene, such as installing updates.”
Tricking a user into visiting a malicious website could, so F-Secure said, result in what the infosec biz described as a “cross-site printing attack.”
The heart of the attack is in the document printed from the malicious site: it contained a “maliciously crafted font” that gave the attacker code execution privileges on the multi-function printer.
[…]
The vulns were publicly disclosed a month ago. The font vulnerability is tracked as CVE-2021-39238 and is listed as affecting HP Enterprise LaserJet, LaserJet Managed, Enterprise PageWide, and PageWide Managed product lines. It is rated as 9.3 out of 10 on the CVSS 3.0 severity scale.
[…]
F-Secure advised putting MFPs inside a separate, firewalled VLAN as well as adding physical security controls including anti-tamper stickers and CCTV.
Updated firmware is available for download from HP, the company said in a statement.
A repository that shares tuning results for trained models generated with TensorFlow: post-training quantization (weight quantization, integer quantization, full integer quantization, float16 quantization) and quantization-aware training. Where possible, I also convert the models to OpenVINO’s IR format.
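As a concrete illustration of one of those techniques, the snippet below sketches float16 post-training quantization with the TensorFlow Lite converter; the SavedModel path is a placeholder, and this is a generic example of the workflow rather than one of the repository’s own conversion scripts.

```python
import tensorflow as tf

# Float16 post-training quantization of a SavedModel (path is a placeholder).
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as float16

tflite_model = converter.convert()
with open("model_float16.tflite", "wb") as f:
    f.write(tflite_model)

# Full integer quantization would additionally require a representative dataset
# so the converter can calibrate activation ranges.
```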
To understand how the pandemic is evolving, it’s crucial to know how death rates from COVID-19 are affected by vaccination status. The death rate is a key metric that can accurately show us how effective vaccines are against severe forms of the disease. This may change over time when there are changes in the prevalence of COVID-19, and because of factors such as waning immunity, new strains of the virus, and the use of boosters.
On this page, we explain why it is essential to look at death rates by vaccination status rather than the absolute number of deaths among vaccinated and unvaccinated people.
We also visualize this mortality data for the United States, England, and Chile.
Ideally we would produce a global dataset that compiles this data for countries around the world, but we do not have the capacity to do this in our team. As a minimum, we list country-specific sources where you can find similar data for other countries, and we describe how an ideal dataset would be formatted.
Why we need to compare the rates of death between vaccinated and unvaccinated
During a pandemic, you might see headlines like “Half of those who died from the virus were vaccinated”.
It would be wrong to draw any conclusions about whether the vaccines are protecting people from the virus based on this headline. The headline is not providing enough information to draw any conclusions.
Let’s think through an example to see this.
Imagine we live in a place with a population of 60 people.
Then we learn that of the 10 who died from the virus, 50% were vaccinated.
The newspaper may run the headline “Half of those who died from the virus were vaccinated”. But this headline does not tell us anything about whether the vaccine is protecting people or not.
To be able to say anything, we also need to know about those who did not die: how many people in this population were vaccinated? And how many were not vaccinated?
Now we have all the information we need and can calculate the death rates:
of 10 unvaccinated people, 5 died → the death rate among the unvaccinated is 50%
of 50 vaccinated people, 5 died → the death rate among the vaccinated is 10%
We therefore see that the death rate among the vaccinated is five times lower than among the unvaccinated.
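The arithmetic is simple enough to check in a few lines of Python, using the made-up numbers from the example above:

```python
# Hypothetical numbers from the example above: 60 people, 10 deaths in total.
unvaccinated_total, unvaccinated_deaths = 10, 5
vaccinated_total,   vaccinated_deaths   = 50, 5

death_rate_unvaccinated = unvaccinated_deaths / unvaccinated_total  # 0.50
death_rate_vaccinated   = vaccinated_deaths / vaccinated_total      # 0.10

print(f"unvaccinated: {death_rate_unvaccinated:.0%}")   # 50%
print(f"vaccinated:   {death_rate_vaccinated:.0%}")     # 10%
print(f"ratio: {death_rate_unvaccinated / death_rate_vaccinated:.0f}x")  # 5x
```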
In the example, we invented numbers to make the death rates simple to calculate. But the same logic applies in the current COVID-19 pandemic. Comparing the absolute numbers, as some headlines do, is a mistake known in statistics as the ‘base rate fallacy’: it ignores the fact that one group is much larger than the other. It is important to avoid this mistake, especially now, as in more and more countries the number of people who are vaccinated against COVID-19 is much larger than the number of people who are unvaccinated (see our vaccination data).
This example illustrates how to think about these statistics in a hypothetical case. Below, you can find the real data for the current state of the COVID-19 pandemic.
[…]
Even the best hardware eventually becomes obsolete when it can no longer run modern software: with a 2.0 GHz Core Duo and 3 GB of RAM you can still browse the web and do word processing today, but you can forget about 4K video or a 64-bit OS. Luckily, there’s hope for those who are just not ready to part with their trusty Thinkpads: [Xue Yao] has designed a replacement motherboard that fits the T60/T61 range, bringing them firmly into the present day. The T700 motherboard is currently in its prototype phase, with series production expected to start in early 2022, funded through a crowdfunding campaign.
Designing a motherboard for a modern CPU is no mean feat, and making it fit an existing laptop, with all the odd shapes and less-than-standard connections, is even more impressive. The T700 has an Intel Core i7 CPU with four cores running at 2.8 GHz, while two RAM slots allow for up to 64 GB of DDR4-3200 memory. There are modern USB-A and USB-C ports, as well as a 6 Gbps SATA interface and two M.2 slots for your SSDs.
As for the display, the T700 motherboard will happily connect to the original screens built into the T60/T61, or to any of a range of aftermarket LED based replacements. A Thunderbolt connector is available, but only operates in USB-C mode due to firmware issues; according to the project page, full support for Thunderbolt 4 is expected once the open-source coreboot firmware has been ported to the T700 platform.
We love projects like this that extend the useful life of classic computers to keep them running way past their expected service life. But impressive though this is, it’s not the first time someone has made a replacement motherboard for the Thinkpad line; we covered a project from the nb51 forum back in 2018, which formed the basis for today’s project. We’ve seen lots of other useful Thinkpad hacks over the years, from replacing the display to revitalizing the batteries. Thanks to [René] for the tip.
We’ve heard the fable of “the self-made billionaire” a thousand times: some unrecognized genius toiling away in a suburban garage stumbles upon The Next Big Thing, thereby single-handedly revolutionizing their industry and becoming insanely rich in the process — all while comfortably ignoring the fact that they’d received $300,000 in seed funding from their already rich, politically-connected parents to do so.
In The Warehouse: Workers and Robots at Amazon, Alessandro Delfanti, associate professor at the University of Toronto and author of Biohackers: The Politics of Open Science, deftly examines the dichotomy between Amazon’s public personas and its union-busting, worker-surveilling behavior in fulfillment centers around the world — and how it leverages cutting edge technologies to keep its employees’ collective noses to the grindstone, pissing in water bottles. In the excerpt below, Delfanti examines the way in which our current batch of digital robber barons lean on the classic redemption myth to launder their images into that of wunderkinds deserving of unabashed praise.
Phys.org reports scientists have developed a “living ink” you could use to print equally alive materials usable for creating 3D structures. The team genetically engineered cells of E. coli and other microbes to create living nanofibers, bundled those fibers and added other materials to produce an ink you could use in a standard 3D printer.
Researchers have tried producing living material before, but it has been difficult to get those substances to fit intended 3D structures. That wasn’t an issue here. The scientists created one material that released an anti-cancer drug when induced with chemicals, while another removed the toxin BPA from the environment. The designs can be tailored to other tasks, too.
Any practical uses could still be some ways off. It’s not yet clear how you’d mass-produce the ink, for example. However, there’s potential beyond the immediate medical and anti-pollution efforts. The creators envisioned buildings that repair themselves, or self-assembling materials for Moon and Mars buildings that could reduce the need for resources from Earth. The ink could even manufacture itself in the right circumstances — you might not need much more than a few basic resources to produce whatever you need.
A team of researchers from the University of Alabama, the University of Melbourne and the University of California has found that social scientists are able to change their beliefs regarding the outcome of an experiment when given the chance. In a paper published in the journal Nature Human Behavior, the group describes how they tested the ability of scientists to change their beliefs about a scientific idea when shown evidence of replicability. Michael Gordon and Thomas Pfeifer with Massey University have published a News & Views piece in the same journal issue explaining why scientists must be able to update their beliefs.
The researchers set out to study a conundrum in science. It is generally accepted that scientific progress can only be made if scientists update their beliefs when new ideas come along. The conundrum is that scientists are human beings and human beings are notoriously difficult to sway from their beliefs. To find out if this might be a problem in general science endeavors, the researchers created an environment that allowed for testing the possibility.
The work involved sending out questionnaires to 1,100 social scientists asking them how they felt about the outcomes of several recent well-known studies. They then conducted replication efforts on those same studies to determine whether they could reproduce the original researchers’ findings. Finally, they sent the results of their replication efforts to the social scientists who had been queried earlier, and once again asked them how they felt about the original findings.
In looking at their data, and factoring out related biases, they found that most of the scientists who participated lost some confidence in the results of studies when the researchers could not replicate results and gained some confidence in them when they could. The researchers suggest that this indicates that scientists, at least those in social fields, are able to rise above their beliefs when faced with scientific evidence, ensuring that science is indeed allowed to progress, despite it being conducted by fallible human beings.
This could have dramatic consequences for SiP (Silicon Photonics) — a hot topic for those working in the field of integrated optics. Integrated optics is a critical technology in advanced telecommunications networks, and it is of increasing importance in quantum research and devices, such as QKD (Quantum Key Distribution) and various entanglement-type experiments (used in quantum computing).
“This is the holy grail of photonics,” says Jonathan Bradley, an assistant professor in the Department of Engineering Physics (and the student’s co-supervisor) in an announcement from McMaster University. “Fabricating a laser on silicon has been a longstanding challenge.” Bradley notes that Miarabbas Kiani’s achievement is remarkable not only for demonstrating a working laser on a silicon chip, but also for doing so in a simple, cost-effective way that’s compatible with existing global manufacturing facilities. This compatibility is essential, as it allows for volume manufacturing at low cost. “If it costs too much, you can’t mass produce it,” says Bradley.
Suppose you are trying to transmit a message. Convert each character into bits, and each bit into a signal. Then send it, over copper or fiber or air. Try as you might to be as careful as possible, what is received on the other side will not be the same as what you began with. Noise never fails to corrupt.
In the 1940s, computer scientists first confronted the unavoidable problem of noise. Five decades later, they came up with an elegant approach to sidestepping it: What if you could encode a message so that it would be obvious if it had been garbled before your recipient even read it? A book can’t be judged by its cover, but this message could.
They called this property local testability, because such a message can be tested super-fast in just a few spots to ascertain its correctness. Over the next 30 years, researchers made substantial progress toward creating such a test, but their efforts always fell short. Many thought local testability would never be achieved in its ideal form.
Now, in a preprint released on November 8, the computer scientist Irit Dinur of the Weizmann Institute of Science and four mathematicians, Shai Evra, Ron Livne, Alex Lubotzky and Shahar Mozes, all at the Hebrew University of Jerusalem, have found it.
[…]
Their new technique transforms a message into a super-canary, an object that testifies to its health better than any other message yet known. Any corruption of significance that is buried anywhere in its superstructure becomes apparent from simple tests at a few spots.
“This is not something that seems plausible,” said Madhu Sudan of Harvard University. “This result suddenly says you can do it.”
[…]
To work well, a code must have several properties. First, the codewords in it should not be too similar: If a code contained the codewords 0000 and 0001, it would only take one bit-flip’s worth of noise to confuse the two words. Second, codewords should not be too long. Repeating bits may make a message more durable, but they also make it take longer to send.
These two properties are called distance and rate. A good code should have both a large distance (between distinct codewords) and a high rate (of transmitting real information).
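As a toy illustration of those two quantities (invented here, not taken from the research), the snippet below computes the minimum distance and the rate of a tiny repetition-style code:

```python
from itertools import combinations

# A toy code: 2 information bits, each repeated 3 times (codeword length 6).
codewords = ["000000", "000111", "111000", "111111"]

def hamming_distance(a, b):
    """Number of positions where two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

min_distance = min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
rate = 2 / 6  # information bits divided by total transmitted bits

print(min_distance)  # 3: any two distinct codewords differ in at least 3 bits
print(rate)          # ~0.33: only a third of each codeword carries real information
```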
[…]
To understand why testability is so hard to obtain, we need to think of a message not just as a string of bits, but as a mathematical graph: a collection of vertices (dots) connected by edges (lines).
[…]
Hamming’s work set the stage for the ubiquitous error-correcting codes of the 1980s. He came up with a rule that each message should be paired with a set of receipts, which keep an account of its bits. More specifically, each receipt is the sum of a carefully chosen subset of bits from the message. When this sum has an even value, the receipt is marked 0, and when it has an odd value, the receipt is marked 1. Each receipt is represented by one single bit, in other words, which researchers call a parity check or parity bit.
Hamming specified a procedure for appending the receipts to a message. A recipient could then detect errors by attempting to reproduce the receipts, calculating the sums for themselves. These Hamming codes work remarkably well, and they are the starting point for seeing codes as graphs and graphs as codes.
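Here is a small Python sketch of the receipt idea: each receipt is the mod-2 sum of a chosen subset of message bits, and the recipient recomputes the receipts to detect corruption. The subsets are chosen arbitrarily for the illustration and are not the parity checks of any particular Hamming code.

```python
# Each "receipt" (parity bit) is the mod-2 sum of a chosen subset of message bits.
# These subsets are arbitrary; a real Hamming code chooses them more carefully.
RECEIPT_SUBSETS = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]

def make_receipts(bits):
    return [sum(bits[i] for i in subset) % 2 for subset in RECEIPT_SUBSETS]

def detect_error(bits, receipts):
    """Recompute the receipts; any mismatch means the message was corrupted."""
    return make_receipts(bits) != receipts

message = [1, 0, 1, 1]
receipts = make_receipts(message)       # appended to the message when it is sent

corrupted = message.copy()
corrupted[2] ^= 1                       # noise flips one bit in transit

print(detect_error(message, receipts))    # False: receipts check out
print(detect_error(corrupted, receipts))  # True: at least one receipt disagrees
```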
[…]
Expander graphs are distinguished by two properties that can seem contradictory. First, they are sparse: Each node is connected to relatively few other nodes. Second, they have a property called expandedness — the reason for their name — which means that no set of nodes can be bottlenecks that few edges pass through. Each node is well connected to other nodes, in other words — despite the scarcity of the connections it has.
[…]
However, choosing codewords completely at random would make for an unpredictable dictionary that was excessively hard to sort through. In other words, Shannon showed that good codes exist, but his method for making them didn’t work well.
[…]
However, local testability was not possible. Suppose that you had a valid codeword from an expander code, and you removed one receipt, or parity bit, from one single node. That would constitute a new code, which would have many more valid codewords than the first code, since there would be one less receipt they needed to satisfy. For someone working off the original code, those new codewords would satisfy the receipts at most nodes — all of them, except the one where the receipt was erased. And yet, because both codes have a large distance, the new codeword that seems correct would be extremely far from the original set of codewords. Local testability was simply incompatible with expander codes.
[…]
Local testability was achieved by 2007, but only at the cost of other parameters, like rate and distance. In particular, these parameters would degrade as a codeword became large. In a world constantly seeking to send and store larger messages, these diminishing returns were a major flaw.
[…]
But in 2017, a new source of ideas emerged. Dinur and Lubotzky began working together while attending a yearlong research program at the Israel Institute for Advanced Studies. They came to believe that a 1973 result by the mathematician Howard Garland might hold just what computer scientists sought. Whereas ordinary expander graphs are essentially one-dimensional structures, with each edge extending in only one direction, Garland had created a mathematical object that could be interpreted as an expander graph that spanned higher dimensions, with, for example, the graph’s edges redefined as squares or cubes.
Garland’s high-dimensional expander graphs had properties that seemed ideal for local testability. They must be deliberately constructed from scratch, making them a natural antithesis of randomness. And their nodes are so interconnected that their local characteristics become virtually indistinguishable from how they look globally.
[…]
In their new work, the authors figured out how to assemble expander graphs to create a new graph that leads to the optimal form of locally testable code. They call their graph a left-right Cayley complex.
As in Garland’s work, the building blocks of their graph are no longer one-dimensional edges, but two-dimensional squares. Each information bit from a codeword is assigned to a square, and parity bits (or receipts) are assigned to edges and corners (which are nodes). Each node therefore defines the values of bits (or squares) that can be connected to it.
To get a sense of what their graph looks like, imagine observing it from the inside, standing on a single edge. They construct their graph such that every edge has a fixed number of squares attached. Therefore, from your vantage point you’d feel as if you were looking out from the spine of a booklet. However, from the other three sides of the booklet’s pages, you’d see the spines of new booklets branching from them as well. Booklets would keep branching out from each edge ad infinitum.
“It’s impossible to visualize. That’s the whole point,” said Lubotzky. “That’s why it is so sophisticated.”
Crucially, the complicated graph also shares the properties of an expander graph, like sparseness and connectedness, but with a much richer local structure. For example, an observer sitting at one vertex of a high-dimensional expander could use this structure to straightforwardly infer that the entire graph is strongly connected.
“What’s the opposite of randomness? It’s structure,” said Evra. “The key to local testability is structure.”
To see how this graph leads to a locally testable code, consider that in an expander code, if a bit (which is an edge) is in error, that error can only be detected by checking the receipts at its immediately neighboring nodes. But in a left-right Cayley complex, if a bit (a square) is in error, that error is visible from multiple different nodes, including some that are not even connected to each other by an edge.
In this way, a test at one node can reveal information about errors from far away nodes. By making use of higher dimensions, the graph is ultimately connected in ways that go beyond what we typically even think of as connections.
In addition to testability, the new code maintains rate, distance and other desired properties, even as codewords scale, proving the c3 conjecture true. It establishes a new state of the art for error-correcting codes, and it also marks the first substantial payoff from bringing the mathematics of high-dimensional expanders to bear on codes.
UK lawmakers are sick and tired of shitty internet of things passwords and are whipping out legislation with steep penalties and bans to prove it. The new legislation, introduced to the UK Parliament this week, would ban universal default passwords and work to create what supporters are calling a “firewall around everyday tech.”
Specifically, the bill, called The Product Security and Telecommunications Infrastructure Bill (PSTI), would require unique passwords for internet-connected devices and would prevent those passwords from being reset to universal factory defaults. The bill would also force companies to increase transparency around when their products require security updates and patches, a practice only 20% of firms currently engage in, according to a statement accompanying the bill.
These bolstered security proposals would be overseen by a regulator with sharpened teeth: companies refusing to comply with the security standards could reportedly face fines of £10 million or four percent of their global revenues.
We believe our all-electric ‘Spirit of Innovation’ aircraft is the world’s fastest all-electric aircraft, setting three new world records. We have submitted data to the Fédération Aéronautique Internationale (FAI) – the World Air Sports Federation which controls and certifies world aeronautical and astronautical records – that at 15:45 (GMT) on 16 November 2021, the aircraft reached a top speed of 555.9 km/h (345.4 mph) over 3 kilometres, smashing the existing record by 213.04 km/h (132 mph). In further runs at the UK Ministry of Defence’s Boscombe Down experimental aircraft testing site, the aircraft achieved 532.1 km/h (330 mph) over 15 kilometres – 292.8 km/h (182 mph) faster than the previous record – and broke the fastest time to climb to 3000 metres by 60 seconds with a time of 202 seconds, according to our data. We hope that the FAI will certify and officially confirm the achievements of the team in the near future.
During its record-breaking runs, the aircraft clocked up a maximum speed of 623 km/h (387.4 mph) which we believe makes the ‘Spirit of Innovation’ the world’s fastest all-electric vehicle.
Following an investigation, the Irish data protection watchdog issued a €225m (£190m) fine – the second-largest in history over GDPR – and ordered WhatsApp to change its policies.
WhatsApp is appealing against the fine, but is amending its policy documents in Europe and the UK to comply.
However, it insists that nothing about its actual service is changing.
Instead, the tweaks are designed to “add additional detail around our existing practices”, and will only appear in the European version of the privacy policy, which is already different from the version that applies in the rest of the world.
“There are no changes to our processes or contractual agreements with users, and users will not be required to agree to anything or to take any action in order to continue using WhatsApp,” the company said, announcing the change.
The new policy takes effect immediately.
User revolt
In January, WhatsApp users complained about an update to the company’s terms that many believed would result in data being shared with parent company Facebook, which is now called Meta.
Many thought refusing to agree to the new terms and conditions would result in their accounts being blocked.
In reality, very little had changed. However, WhatsApp was forced to delay its changes and spend months fighting the public perception to the contrary.
The new privacy policy contains substantially more information about what exactly is done with users’ information, and how WhatsApp works with Meta, the parent company for WhatsApp, Facebook and Instagram.
Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.
[…]
we test whether warning users of their potential suspension if they continue using hateful language might be able to reduce online hate speech. To do so, we implemented a pre-registered experiment on Twitter in order to test the ability of “warning messages” about the possibility of future suspensions to reduce hateful language online. More specifically, we identify users who are candidates for suspension in the future based on their prior tweets and download their follower lists before the suspension takes place. After a user gets suspended, we randomly assign some of their followers who have also used hateful language to receive a warning that they, too, may be suspended for the same reason.
Since our tweets aim to deter users from using hateful language, we design them relying on the three mechanisms that the literature on deterrence deems most effective in reducing deviant behavior: costliness, legitimacy, and credibility. In other words, our experiment allows us to manipulate the degree to which users perceive their suspension as costly, legitimate, and credible.
[…]
Our study provides causal evidence that the act of sending a warning message to a user can significantly decrease their use of hateful language as measured by their ratio of hateful tweets over their total number of tweets. Although we do not find strong evidence that distinguishes between warnings that are high versus low in legitimacy, credibility, or costliness, the high legitimacy messages seem to be the most effective of all the messages tested.
[…]
The coefficient plot in figure 4 shows the effect of sending any type of warning tweet on the ratio of tweets with hateful language over the total number of tweets a user posts. The outcome variable is the ratio of hateful tweets over the total number of tweets that a user posted over the week and month following the treatment. The effects thus show the change in this ratio as a result of the treatment.
Figure 4 The effect of sending a warning tweet on reducing hateful language
Note: See table G1 in online appendix G for more details on sample size and control coefficients.
We find support for our first hypothesis: a tweet that warns a user of a potential suspension will lead that user to decrease their ratio of hateful tweets by 0.007 for a week after the treatment. Considering the fact that the average pre-treatment hateful tweet ratio is 0.07 in our sample, this means that a single warning tweet from a user with 100 followers reduced the use of hateful language by 10%.
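That 10% figure is simply the estimated effect expressed relative to the baseline ratio, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the 10% figure reported above.
baseline_hateful_ratio = 0.07   # average pre-treatment hateful tweet ratio in the sample
treatment_effect       = 0.007  # estimated drop in the ratio after a warning tweet

relative_reduction = treatment_effect / baseline_hateful_ratio
print(f"{relative_reduction:.0%}")  # 10% relative reduction in hateful language
```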
[…]
The coefficient plot in figure 5 shows the effect of each treatment on the ratio of tweets with hateful language over the total number of tweets a user posts. Although the differences across types are minor and thus caveats are warranted, the most effective treatment seems to be the high legitimacy tweet; the legitimacy category also has by far the largest difference between the high- and low-level versions of the three categories of treatment we assessed. Interestingly, the tweets emphasizing the cost of being suspended appear to be the least effective of the three categories; although the effects are in the correctly predicted direction, neither of the cost treatments alone is statistically distinguishable from null effects.
Figure 5 Reduction in hate speech by treatment type
Note: See table G2 in online appendix G for more details on sample size and control coefficients.
An alternative mechanism that could explain the similarity of effects across treatments—as well as the costliness channel apparently being the least effective—is that perhaps instead of deterring people, the warnings might have made them more reflective and attentive about their language use.
[…]
Our results show that a single warning tweet sent by an account with no more than 100 followers can decrease the ratio of tweets with hateful language by up to 10%, with some types of tweets (high legitimacy, emphasizing the legitimacy of the account sending the tweet) suggesting decreases of perhaps as high as 15%–20% in the week following treatment. Considering that we sent our tweets from accounts that have no more than 100 followers, the effects that we report here are conservative estimates, and could be larger when the warnings are sent from more popular accounts (Munger 2017).
[…]
A recently burgeoning literature shows that online interventions can also decrease behaviors that harm other groups, as measured by tracking subjects’ behavior over social media. These works rely on online messages on Twitter that sanction the harmful behavior and succeed in reducing hateful language (Munger 2017; Siegel and Badaan 2020), and they mostly draw on identity politics when designing their sanctioning messages (Charnysh et al. 2015). We contribute to this recent line of research by showing that warning messages designed on the basis of the deterrence literature can lead to a meaningful decrease in the use of hateful language without leveraging identity dynamics.
[…]
Two options are worthy of discussion: relying on civil society or relying on Twitter. Our experiment was designed to mimic the former option, with our warnings mimicking non-Twitter employees acting on their own with the goal of reducing hate speech or protecting users from being suspended.
[…]
While it is certainly possible that an NGO or a similar entity could try to implement such a program, the more obvious solution would be to have Twitter itself implement the warnings.
[…]
the company reported “testing prompts in 2020 that encouraged people to pause and reconsider a potentially harmful or offensive reply—such as insults, strong language, or hateful remarks—before Tweeting it. Once prompted, people had an opportunity to take a moment and make edits, delete, or send the reply as is.” This appears to result in 34% of those prompted electing either to review the Tweet before sending, or not to send the Tweet at all.
We note three differences from this endeavor. First, in our warnings, we try to reduce people’s hateful language after they employ hateful language, which is not the same thing as warning people before they employ hateful language. This is a noteworthy difference, which can be a topic for future research in terms of whether the dynamics of retrospective versus prospective warnings significantly differ from each other. Second, Twitter does not inform their users of the examples of suspensions that took place among the people that these users used to follow. Finally, we are making our data publicly available for re-analysis.
We stop short, however, of unambiguously recommending that Twitter simply implement the system we tested without further study because of two important caveats. First, one interesting feature of our findings is that across all of our tests (one week versus four weeks, different versions of the warning—figures 2 (in text) and A1 (in the online appendix)) we never once get a positive effect for hate speech usage in the treatment group, let alone a statistically significant positive coefficient, which would have suggested a potential backlash effect whereby the warnings led people to become more hateful. We are reassured by this finding but do think it is an open question whether a warning from Twitter—a large powerful corporation and the owner of the platform—might provoke a different reaction. We obviously could not test for this possibility on our own, and thus we would urge Twitter to conduct its own testing to confirm that our finding about the lack of a backlash continues to hold when the message comes from the platform itself.
The second caveat concerns the possibility of Twitter making mistakes when implementing its suspension policies.
[…]
Despite these caveats, our findings suggest that hate-speech moderation can be effective without priming the salience of the target users’ identity. Explicitly testing the effectiveness of identity versus non-identity motivated interventions will be an important subject for future research.
GoDaddy has admitted to America’s financial watchdog that one or more miscreants broke into its systems and potentially accessed a huge amount of customer data, from email addresses to SSL private keys.
In a filing on Monday to the SEC, the internet giant said that on November 17 it discovered an “unauthorized third-party” had been roaming around part of its Managed WordPress service, which essentially stores and hosts people’s websites.
[…]
Those infosec sleuths, we’re told, found evidence that an intruder had been inside part of GoDaddy’s website provisioning system, described by Comes as a “legacy code base,” since September 6, gaining access using a “compromised password.”
The miscreant was able to view up to 1.2 million customer email addresses and customer ID numbers, and the administrative passwords generated for WordPress instances when they were provisioned. Any such passwords unchanged since the break-in have been reset.
According to GoDaddy, the sFTP and database usernames and passwords of active user accounts were accessible, too, and these have been reset as well.
“For a subset of active customers, the SSL private key was exposed,” Comes added. “We are in the process of issuing and installing new certificates for those customers.” GoDaddy has not responded to a request for further details and exact numbers of users affected.
To grow and spread, cancer cells must evade the immune system. Investigators from Brigham and Women’s Hospital and MIT used the power of nanotechnology to discover a new way that cancer can disarm its would-be cellular attackers by extending out nanoscale tentacles that can reach into an immune cell and pull out its powerpack. Slurping out the immune cell’s mitochondria powers up the cancer cell and depletes the immune cell. The new findings, published in Nature Nanotechnology, could lead to new targets for developing the next generation of immunotherapy against cancer.
“Cancer kills when the immune system is suppressed and cancer cells are able to metastasize, and it appears that nanotubes can help them do both,” said corresponding author Shiladitya Sengupta, PhD, co-director of the Brigham’s Center for Engineered Therapeutics. “This is a completely new mechanism by which cancer cells evade the immune system and it gives us a new target to go after.”
To investigate how cancer cells and immune cells interact at the nanoscale level, Sengupta and colleagues set up experiments in which they co-cultured breast cancer cells and immune cells, such as T cells. Using field-emission scanning electron microscopy, they caught a glimpse of something unusual: Cancer cells and immune cells appeared to be physically connected by tiny tendrils, with widths mostly in the 100-1000 nanometer range. (For comparison, a human hair is approximately 80,000 to 100,000 nanometers wide.) The team then stained mitochondria — which provide energy for cells — from the T cells with a fluorescent dye and watched as bright green mitochondria were pulled out of the immune cells, through the nanotubes, and into the cancer cells.
The ranks of orbit-capable spaceflight companies just grew ever so slightly. TechCrunch reports Astra reached orbit for the first time when its Rocket 3 booster launched shortly after 1AM Eastern today (November 20th). The startup put a mass simulator into a 310-mile-high orbit as part of a demonstration for the US Air Force’s Rapid Agile Launch Initiative, which shows how private outfits could quickly and flexibly deliver Space Force payloads.
This success has been a long time in coming. Astra failed to reach orbit three times before, including a second attempt where the rocket reached space but didn’t have enough velocity for an orbital insertion.
Company chief Chris Kemp stressed on Twitter that Astra was “just getting started” despite the success. It’s a significant moment all the same. Companies and researchers wanting access to space currently don’t have many choices — they either have to hitch a ride on one of SpaceX’s not-so-common rideshare missions or turn to a handful of options like Rocket Lab. Astra hopes to produce its relatively modest rockets quickly enough that it delivers many small payloads in a timely fashion. That, in turn, might lower prices and make space more viable.
In recent years, Amazon.com Inc has killed or undermined privacy protections in more than three dozen bills across 25 states, as the e-commerce giant amassed a lucrative trove of personal data on millions of American consumers.
Amazon executives and staffers detail these lobbying victories in confidential documents reviewed by Reuters.
In Virginia, the company boosted political donations tenfold over four years before persuading lawmakers this year to pass an industry-friendly privacy bill that Amazon itself drafted. In California, the company stifled proposed restrictions on the industry’s collection and sharing of consumer voice recordings gathered by tech devices. And in its home state of Washington, Amazon won so many exemptions and amendments to a bill regulating biometric data, such as voice recordings or facial scans, that the resulting 2017 law had “little, if any” impact on its practices, according to an internal Amazon document.
As much as 38 percent of the Internet’s domain name lookup servers are vulnerable to a new attack that allows hackers to send victims to maliciously spoofed addresses masquerading as legitimate domains, like bankofamerica.com or gmail.com.
The exploit, unveiled in research presented today, revives the DNS cache-poisoning attack that researcher Dan Kaminsky disclosed in 2008. He showed that, by masquerading as an authoritative DNS server and using it to flood a DNS resolver with fake lookup results for a trusted domain, an attacker could poison the resolver cache with the spoofed IP address. From then on, anyone relying on the same resolver would be diverted to the same imposter site.
A lack of entropy
The sleight of hand worked because DNS at the time relied on a transaction ID to prove the IP number returned came from an authoritative server rather than an imposter server attempting to send people to a malicious site. The transaction number had only 16 bits, which meant that there were only 65,536 possible transaction IDs.
Kaminsky realized that hackers could exploit the lack of entropy by bombarding a DNS resolver with off-path responses that included each possible ID. Once the resolver received a response with the correct ID, the server would accept the malicious IP and store the result in cache so that everyone else using the same resolver—which typically belongs to a corporation, organization, or ISP—would also be sent to the same malicious server.
The threat raised the specter of hackers being able to redirect thousands or millions of people to phishing or malware sites posing as perfect replicas of the trusted domain they were trying to visit. The threat resulted in industry-wide changes to the domain name system, which acts as a phone book that maps IP addresses to domain names.
Under the new DNS spec, lookup queries were no longer sent from a single default source port. Instead, those requests were sent from a port randomly chosen from the entire range of available UDP ports. By combining the 16 bits of randomness from the transaction ID with the additional entropy from the source port randomization, there were now roughly 134 million possible combinations, making the attack mathematically infeasible.
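To put rough numbers on that combined entropy, the sketch below multiplies the transaction-ID space by the number of usable source ports; the exact total depends on how many ports the resolver actually draws from, so the port counts here are only illustrative.

```python
# Rough size of the search space an off-path attacker must cover.
TXID_SPACE = 2 ** 16          # 65,536 possible transaction IDs

def guess_space(usable_ports):
    """Transaction IDs multiplied by the number of source ports in play."""
    return TXID_SPACE * usable_ports

# The total depends on how many source ports the resolver actually uses:
print(f"{guess_space(2_048):,}")    # 134,217,728 (~134 million, a restricted port range)
print(f"{guess_space(65_536):,}")   # 4,294,967,296 (~4.3 billion, the full 16-bit range)
```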
Unexpected Linux behavior
Now, a research team at the University of California at Riverside has revived the threat. Last year, members of the same team found a side channel in the newer DNS that allowed them to once again infer the transaction ID and randomized port number and send resolvers spoofed IPs.
The research and the SADDNS exploit it demonstrated resulted in industry-wide updates that effectively closed the side channel. Now comes the discovery of new side channels that once again make cache poisoning viable.
“In this paper, we conduct an analysis of the previously overlooked attack surface, and are able to uncover even stronger side channels that have existed for over a decade in Linux kernels,” researchers Keyu Man, Xin’an Zhou, and Zhiyun Qian wrote in a research paper being presented at the ACM CCS 2021 conference. “The side channels affect not only Linux but also a wide range of DNS software running on top of it, including BIND, Unbound and dnsmasq. We also find about 38% of open resolvers (by frontend IPs) and 14% (by backend IPs) are vulnerable including the popular DNS services such as OpenDNS and Quad9.”
OpenDNS owner Cisco said: “Cisco Umbrella/Open DNS is not vulnerable to the DNS Cache Poisoning Attack described in CVE-2021-20322, and no Cisco customer action is required. We remediated this issue, tracked via Cisco Bug ID CSCvz51632, as soon as possible after receiving the security researcher’s report.” Quad9 representatives weren’t immediately available for comment.
The side channel for the attacks from both last year and this year involves the Internet Control Message Protocol, or ICMP, which is used to send error and status messages between two servers.
“We find that the handling of ICMP messages (a network diagnostic protocol) in Linux uses shared resources in a predictable manner such that it can be leveraged as a side channel,” researcher Qian wrote in an email. “This allows the attacker to infer the ephemeral port number of a DNS query, and ultimately lead to DNS cache poisoning attacks. It is a serious flaw as Linux is most widely used to host DNS resolvers.” He continued:
The ephemeral port is supposed to be randomly generated for every DNS query and unknown to an off-path attacker. However, once the port number is leaked through a side channel, an attacker can then spoof legitimate-looking DNS responses with the correct port number that contain malicious records and have them accepted (e.g., the malicious record can say chase.com maps to an IP address owned by an attacker).
The reason that the port number can be leaked is that the off-path attacker can actively probe different ports to see which one is the correct one, i.e., through ICMP messages that are essentially network diagnostic messages which have unexpected effects in Linux (which is the key discovery of our work this year). Our observation is that ICMP messages can embed UDP packets, indicating a prior UDP packet had an error (e.g., destination unreachable).
We can actually guess the ephemeral port in the embedded UDP packet and package it in an ICMP probe to a DNS resolver. If the guessed port is correct, it causes some global resource in the Linux kernel to change, which can be indirectly observed. This is how the attacker can infer which ephemeral port is used.
Changing internal state with ICMP probes
The side channel last time around was the rate limit for ICMP. To conserve bandwidth and computing resources, servers will respond to only a set number of requests and then fall silent. The SADDNS exploit used the rate limit as a side channel. But whereas last year’s port inference method sent UDP packets designed to solicit ICMP responses, the attack this time uses ICMP probes directly.
“According to the RFC (standards), ICMP packets are only supposed to be generated *in response* to something,” Qian added. “They themselves should never *solicit* any responses, which means they are ill-suited for port scans (because you don’t get any feedback). However, we find that ICMP probes can actually change some internal state that can actually be observed through a side channel, which is why the whole attack is novel.”
The researchers have proposed several defenses to prevent their attack. One is setting proper socket options such as IP_PMTUDISC_OMIT, which instructs the operating system to ignore the relevant ICMP messages, effectively closing the side channel. A downside, then, is that those messages will be ignored even when they are legitimate.
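For what that first defense looks like in practice, here is a minimal Python sketch of setting the option on a UDP socket under Linux; the numeric constants are the values from Linux’s <linux/in.h> (Python’s socket module does not export them by name), so treat them as assumptions worth verifying against your own headers.

```python
import socket

# Linux constants from <linux/in.h>; Python's socket module doesn't export these by name.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_OMIT = 5   # ignore incoming path-MTU (ICMP-derived) hints on this socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_OMIT)

# The socket now disregards ICMP-derived path-MTU updates, the behavior the
# researchers describe as closing the side channel, at the cost of also
# ignoring legitimate fragmentation hints.
```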
Another proposed defense is randomizing the caching structure to make the side channel unusable. A third is to reject ICMP redirects.
The vulnerability affects DNS software, including BIND, Unbound, and dnsmasq, when they run on Linux. The researchers tested to see if DNS software was vulnerable when running on either Windows or FreeBSD and found no evidence that it was. Since macOS uses the FreeBSD network stack, they assume it isn’t vulnerable either.