
Someone is hacking receipt printers with ‘antiwork’ messages

Hackers are attacking business receipt printers to insert pro-labor messages, according to a report from Vice and posts on Reddit. “Are you being underpaid?” reads one message, while another asks, “How can the McDonald’s in Denmark pay their staff $22 an hour and still manage to sell a Big Mac for less than in America?”

Numerous similar images have been posted on Reddit, Twitter and elsewhere. The messages vary, but most point readers toward the r/antiwork subreddit, which became popular during the COVID-19 pandemic as workers started demanding more rights.

Some users suggested that the messages were fake, but a cybersecurity firm that monitors the internet told Vice that they’re legit. “Someone is… blast[ing] raw TCP data directly to printer services across the internet,” GreyNoise founder Andrew Morris told Vice. “Basically to every single device that has port TCP 9100 open, and print[ing] a pre-written document that references /r/antiwork with some workers rights/counter capitalist messaging.”

The individual[s] behind the attack are using 25 separate servers, according to Morris, so blocking one IP won’t necessarily stop the attacks. “A technical person is broadcasting print requests for a document containing workers rights messaging to all printers that are misconfigured to be exposed to the internet, and we’ve confirmed that it is printing successfully in some number of places,” he said.
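
For context, TCP port 9100 is the de facto raw-printing (JetDirect) port: most networked printers will print whatever bytes arrive on it. A minimal sketch of what such a blast looks like (the address is a placeholder, and sending jobs to printers you don’t control is illegal):

```python
# Minimal sketch of raw printing over TCP port 9100, as described above.
# PRINTER_HOST is a placeholder (TEST-NET address), not a real target.
import socket

PRINTER_HOST = "192.0.2.10"
PRINTER_PORT = 9100

message = b"Are you being underpaid?\n\n\n"

with socket.create_connection((PRINTER_HOST, PRINTER_PORT), timeout=5) as sock:
    sock.sendall(message)  # most receipt printers render raw bytes as plain text
```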

[…]

Source: Someone is hacking receipt printers with ‘antiwork’ messages | Engadget

Studying our solar system’s protective bubble

Astrophysicists believe the heliosphere protects the planets within our solar system from powerful radiation emanating from supernovas, the final explosions of dying stars throughout the universe. They believe the heliosphere extends far beyond our solar system, but despite the massive buffer against cosmic radiation that the heliosphere provides Earth’s life-forms, no one really knows the shape of the heliosphere—or, for that matter, the size of it.

[…]

Opher’s team has constructed some of the most compelling computer simulations of the heliosphere, based on models built on observable data and theoretical astrophysics.

[…]

a paper published by Opher and collaborators in Astrophysical Journal reveals that neutral hydrogen particles streaming from outside our solar system most likely play a crucial role in the way our heliosphere takes shape.

[…]

models predict that the heliosphere, traveling in tandem with our sun and encompassing our solar system, doesn’t appear to be stable. Other models of the heliosphere developed by other astrophysicists tend to depict the heliosphere as having a comet-like shape, with a jet—or a “tail”—streaming behind in its wake. In contrast, Opher’s model suggests the heliosphere is shaped more like a croissant or even a donut.

The reason for that? Neutral hydrogen particles, so called because their equal amounts of positive and negative charge cancel out, leaving no net charge at all.

“They come streaming through the solar system,” Opher says. Using a computational model like a recipe to test the effect of ‘neutrals’ on the shape of the heliosphere, she “took one ingredient out of the cake—the neutrals—and noticed that the jets coming from the sun, shaping the heliosphere, become super stable. When I put them back in, things start bending, the center axis starts wiggling, and that means that something inside the heliospheric jets is becoming very unstable.”

Instability like that would theoretically cause disturbance in the solar winds and jets emanating from our sun, causing the heliosphere to split its shape—into a croissant-like form. Although astrophysicists haven’t yet developed ways to observe the actual shape of the heliosphere, Opher’s model suggests the presence of neutrals slamming into our system would make it impossible for the heliosphere to flow uniformly like a shooting comet. And one thing is for sure—neutrals are definitely pelting their way through space.

[…]

Source: Studying our solar system’s protective bubble

U.S. Indicts Two Men for Running a $20 Million YouTube Content ID Scam – after 4 years of warnings

Two men have been indicted by a grand jury for running a massive YouTube Content ID scam that netted the pair more than $20m. Webster Batista Fernandez and Jose Teran managed to convince a YouTube partner that the pair owned the rights to 50,000+ tracks and then illegally monetized user uploads over a period of four years.

[…]

YouTube previously said that it paid $5.5 billion in ad revenue to rightsholders from content claimed and monetized through Content ID but the system doesn’t always work exactly as planned.

Over the years, countless YouTube users have complained that their videos have been claimed and monetized by entities that apparently have no right to do so but, fearful of what a complaint might do to the status of their accounts, many opted to withdraw from battles they feared they might lose.

[…]

Complaints are not hard to find. Large numbers of YouTube videos uploaded by victims of the scam dating back years litter the platform, while a dedicated Twitter account and a popular hashtag have been complaining about MediaMuv since 2018.

 

 

As early as 2017, complaints were being made on YouTube/Google’s support forums, with just one receiving more than 150 replies.

“I want to make a claim through this place, since a few days ago a said company called MEDIAMUV IS STEALING CONTENT FROM MY CHANNEL AND FROM OTHER USERS, does anyone know something about said company?” one reads.

“[I] investigated and there is nothing in this respect. I only found a channel saying that several users are being robbed and that when they come to upload their own songs, MEDIAMUV detects the videos as theirs.”

[…]

Source: U.S. Indicts Two Men for Running a $20 Million YouTube Content ID Scam * TorrentFreak

Someone Is Running Hundreds of Malicious Servers on Tor Network

New research shows that someone has been running hundreds of malicious servers on the Tor network, potentially in an attempt to de-anonymize users and unmask their web activity. As first reported by The Record, the activity would appear to be emanating from one particular user who is persistent, sophisticated, and somehow has the resources to run droves of high-bandwidth servers for years on end.

[…]

The malicious servers were initially spotted by a security researcher who goes by the pseudonym “nusenu” and who operates their own node on the Tor network. On their Medium, nusenu writes that they first uncovered evidence of the threat actor—which they have dubbed “KAX17”—back in 2019. After doing further research into KAX17, they discovered that they had been active on the network as far back as 2017.

In essence, KAX appears to be running large segments of Tor’s network—potentially in the hopes of being able to track the path of specific web users and unmask them.

[…]

in the case of KAX17, the threat actor appears to be substantially better resourced than your average dark web malcontent: they have been running literally hundreds of malicious servers all over the world—activity that amounts to “running large fractions of the tor network,” nusenu writes. With that amount of activity, the chances that a Tor user’s circuit could be traced by KAX are relatively high, the researcher shows.

Indeed, according to nusenu’s research, KAX at one point had so many servers—some 900—that you had a 16 percent likelihood of using their relay as a first “hop” (i.e., node in your circuit) when you logged onto Tor. You had a 35 percent chance of using one of their relays during your 2nd “hop,” and a 5 percent chance of using them as an exit relay, nusenu writes.
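
As a rough, back-of-the-envelope illustration of why those percentages matter (my own simplification, assuming relay choices are independent, which Tor’s real path-selection rules are not):

```python
# Naive sketch using the percentages quoted above. Guard/middle/exit selection is
# treated as independent here, which ignores Tor's actual path-selection constraints.
p_guard = 0.16   # chance of a KAX17 relay as the first hop
p_middle = 0.35  # chance of a KAX17 relay as the second hop
p_exit = 0.05    # chance of a KAX17 exit relay

# Traffic correlation needs both ends of the circuit: the guard and the exit.
p_both_ends = p_guard * p_exit
print(f"{p_both_ends:.1%}")  # roughly 0.8% of circuits under these assumptions
```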

There’s also evidence that the threat actor engaged in Tor forum discussions, during which they seem to have lobbied against administrative actions that would have removed their servers from the network.

[…]

Many of the threat actor’s servers were removed by the Tor directory authorities in October 2019. Then, just last month, authorities again removed a large number of relays that seemed suspicious and were tied to the threat actor. However, in both cases, the actor seems to have immediately bounced back and begun reconstituting, nusenu writes.

It’s unclear who might be behind all this, but it seems that, whoever they are, they have a lot of resources. “We have no evidence, that they are actually performing de-anonymization attacks, but they are in a position to do so,” nusenu writes. “The fact that someone runs such a large network fraction of relays…is enough to ring all kinds of alarm bells.”

“Their actions and motives are not well understood,” nusenu added.

Source: Someone Is Running Hundreds of Malicious Servers on Tor Network

U.S. State Department phones hacked with Israeli company NSO spyware

Apple Inc iPhones of at least nine U.S. State Department employees were hacked by an unknown assailant using sophisticated spyware developed by the Israel-based NSO Group, according to four people familiar with the matter.

The hacks, which took place in the last several months, hit U.S. officials either based in Uganda or focused on matters concerning the East African country, two of the sources said.

The intrusions, first reported here, represent the widest known hacks of U.S. officials through NSO technology. Previously, a list of numbers with potential targets including some American officials surfaced in reporting on NSO, but it was not clear whether intrusions were always tried or succeeded.

Reuters could not determine who launched the latest cyberattacks.

NSO Group said in a statement on Thursday that it did not have any indication their tools were used but canceled access for the relevant customers and would investigate based on the Reuters inquiry.

[…]

Source: U.S. State Department phones hacked with Israeli company spyware – sources | Reuters

Qualcomm’s new always-on smartphone camera is always looking out for you

“Your phone’s front camera is always securely looking for your face, even if you don’t touch it or raise to wake it.” That’s how Qualcomm Technologies vice president of product management Judd Heape introduced the company’s new always-on camera capabilities in the Snapdragon 8 Gen 1 processor set to arrive in top-shelf Android phones early next year.

[…]

But for those of us with any sense of how modern technology is used to violate our privacy, a camera on our phone that’s always capturing images even when we’re not using it sounds like the stuff of nightmares and has a cost to our privacy that far outweighs any potential convenience benefits.

Qualcomm’s main pitch for this feature is for unlocking your phone any time you glance at it, even if it’s just sitting on a table or propped up on a stand. You don’t need to pick it up or tap the screen or say a voice command — it just unlocks when it sees your face. I can see this being useful if your hands are messy or otherwise occupied (in its presentation, Qualcomm used the example of using it while cooking a recipe to check the next steps). Maybe you’ve got your phone mounted in your car, and you can just glance over at it to see driving directions without having to take your hands off the steering wheel or leave the screen on the entire time.

[…]

Qualcomm is framing the always-on camera as similar to the always-on microphones that have been in our phones for years. Those are used to listen for voice commands like “Hey Siri” or “Hey Google” (or lol, “Hi Bixby”) and then wake up the phone and provide a response, all without you having to touch or pick up the phone. But the difference is that they are listening for specific wake words and are often limited in what they can do until you actually pick up your phone and unlock it.

It feels a bit different when it’s a camera that’s always scanning for a likeness.

It’s true that smart home products already have features like this. Google’s Nest Hub Max uses its camera to recognize your face when you walk up to it and greet you with personal information like your calendar. Home security cameras and video doorbells are constantly on, looking for activity or even specific faces. But those devices are in your home, not always carried with you everywhere you go, and generally don’t have your most private information stored on them, like your phone does. They also frequently have features like physical shutters to block the camera or intelligent modes to disable recording when you’re home and only resume it when you aren’t. It’s hard to imagine any phone manufacturer putting a physical shutter on the front of their slim and sleek flagship smartphone.

Lastly, there have been many reports of security breaches and social engineering hacks to enable smart home cameras when they aren’t supposed to be on and then send that feed to remote servers, all without the knowledge of the homeowner. Modern smartphone operating systems now do a good job of telling you when an app is accessing your camera or microphone while you’re using the device, but it’s not clear how they’d be able to inform you of a rogue app tapping into the always-on camera.

To be honest, these things are also pretty damn scary! I understand that Americans have been habituated to ubiquitous surveillance, but here in the EU we still value our privacy and don’t like it much at all.

Ultimately, it comes down to a level of trust — do you trust that Qualcomm has set up the system in a way that prevents the always-on camera from being used for other purposes than intended? Do you trust that the OEM using Qualcomm’s chips won’t do things to interfere with the system, either for their own profit or to satisfy the demands of a government entity?

Even if you do have that trust, there’s a certain level of comfort with an always-on camera on your most personal device that goes beyond where we are currently.

Maybe we’ll just start having to put tape on our smartphone cameras like we already do with laptop webcams.

Source: Qualcomm’s new always-on smartphone camera is a potential privacy nightmare – The Verge

How We Determined Predictive Policing Software Disproportionately Targeted Low-Income, Black, and Latino Neighborhoods

[…]

One of the first, and reportedly most widely used, is PredPol, its name an amalgamation of the words “predictive policing.” The software was derived from an algorithm used to predict earthquake aftershocks that was developed by professors at UCLA and released in 2011. By sending officers to patrol these algorithmically predicted hot spots, these programs promise they will deter illegal behavior.

But law enforcement critics had their own prediction: that the algorithms would send cops to patrol the same neighborhoods they say police always have, those populated by people of color. Because the software relies on past crime data, they said, it would reproduce police departments’ ingrained patterns and perpetuate racial injustice, covering it with a veneer of objective, data-driven science.

PredPol has repeatedly said those criticisms are off-base. The algorithm doesn’t incorporate race data, which, the company says, “eliminates the possibility for privacy or civil rights violations seen with other intelligence-led or predictive policing models.”

There have been few independent, empirical reviews of predictive policing software because the companies that make these programs have not publicly released their raw data.

A seminal, data-driven study about PredPol published in 2016 did not involve actual predictions. Rather the researchers, Kristian Lum and William Isaac, fed drug crime data from Oakland, California, into PredPol’s open-source algorithm to see what it would predict. They found that it would have disproportionately targeted Black and Latino neighborhoods, despite survey data that shows people of all races use drugs at similar rates.

PredPol’s founders conducted their own research two years later using Los Angeles data and said they found the overall rate of arrests for people of color was about the same whether PredPol software or human police analysts made the crime hot spot predictions. Their point was that their software was not worse in terms of arrests for people of color than nonalgorithmic policing.

However, a study published in 2018 by a team of researchers led by one of PredPol’s founders showed that Indianapolis’s Latino population would have endured “from 200% to 400% the amount of patrol as white populations” had it been deployed there, and its Black population would have been subjected to “150% to 250% the amount of patrol compared to white populations.” The researchers said they found a way to tweak the algorithm to reduce that disproportion but that it would result in less accurate predictions—though they said it would still be “potentially more accurate” than human predictions.

[…]

Other predictive police programs have also come under scrutiny. In 2017, the Chicago Sun-Times obtained a database of the city’s Strategic Subject List, which used an algorithm to identify people at risk of becoming victims or perpetrators of violent, gun-related crime. The newspaper reported that 85% of people that the algorithm saddled with the highest risk scores were Black men—some with no violent criminal record whatsoever.

Last year, the Tampa Bay Times published an investigation analyzing the list of people that were forecast to commit future crimes by the Pasco Sheriff’s Office’s predictive tools. Deputies were dispatched to check on people on the list more than 12,500 times. The newspaper reported that at least one in 10 of the people on the list were minors, and many of those young people had only one or two prior arrests yet were subjected to thousands of checks.

For our analysis, we obtained a trove of PredPol crime prediction data that has never before been released by PredPol for unaffiliated academic or journalistic analysis. Gizmodo found it exposed on the open web (the portal is now secured) and downloaded more than 7 million PredPol crime predictions for dozens of American cities and some overseas locations between 2018 and 2021.

[…]

from Fresno, California, to Niles, Illinois, to Orange County, Florida, to Piscataway, New Jersey. We supplemented our inquiry with Census data, including racial and ethnic identities and household incomes of people living in each jurisdiction—both in areas that the algorithm targeted for enforcement and those it did not target.

Overall, we found that PredPol’s algorithm relentlessly targeted the Census block groups in each jurisdiction that were the most heavily populated by people of color and the poor, particularly those containing public and subsidized housing. The algorithm generated far fewer predictions for block groups with more White residents.

Analyzing entire jurisdictions, we observed that the proportion of Black and Latino residents was higher in the most-targeted block groups and lower in the least-targeted block groups (about 10% of which had zero predictions) compared to the overall jurisdiction. We also observed the opposite trend for the White population: The least-targeted block groups contained a higher proportion of White residents than the jurisdiction overall, and the most-targeted block groups contained a lower proportion.

[…]

We also found that PredPol’s predictions often fell disproportionately in places where the poorest residents live.

[…]

To try to determine the effects of PredPol predictions on crime and policing, we filed more than 100 public records requests and compiled a database of more than 600,000 arrests, police stops, and use-of-force incidents. But most agencies refused to give us any data. Only 11 provided at least some of the necessary data.

For the 11 departments that provided arrest data, we found that rates of arrest in predicted areas remained the same whether PredPol predicted a crime that day or not. In other words, we did not find a strong correlation between arrests and predictions. (See the Limitations section for more information about this analysis.)
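
A minimal sketch of that kind of comparison (toy data and hypothetical column names, not Gizmodo’s actual pipeline): group per-day arrest counts for targeted areas by whether a prediction was issued that day and compare the means.

```python
# Toy illustration of comparing arrest rates on predicted vs. non-predicted days.
# Column names and values are hypothetical.
import pandas as pd

# One row per (block group, day): "predicted" marks days with a PredPol prediction
# for that area, "arrests" counts arrests recorded there that day.
df = pd.DataFrame({
    "predicted": [True, True, False, False, True, False],
    "arrests":   [1, 0, 0, 1, 0, 0],
})

rates = df.groupby("predicted")["arrests"].mean()
print(rates)  # similar means in both groups would indicate no strong correlation
```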

We do not definitively know how police acted on any individual crime prediction because we were refused that data by nearly every police department.

[…]

Overall, our analysis suggests that the algorithm, at best, reproduced how officers have been policing, and at worst, would reinforce those patterns if its policing recommendations were followed.

[…]

 

Source: How We Determined Predictive Policing Software Disproportionately Targeted Low-Income, Black, and Latino Neighborhoods

Clear These Recalled Cancer Causing Antiperspirants From Your Home

If you’re a fan of aerosol spray antiperspirants and deodorants, you’re going to want to check to see whether the one you use is part of a voluntary recall issued by Procter & Gamble (P&G).

The recall follows a citizen’s petition filed with the U.S. Food and Drug Administration (FDA) last month, which claims that more than half of the batches of antiperspirant and deodorant sprays they tested contained benzene—a chemical that, when found at high levels, can cause cancer. Here’s what you need to know.

[…]

They found that out of the 108 batches of products tested, 59 (or 54%) of them had levels of benzene exceeding the 2 parts per million permitted by the FDA.

[…]

Valisure’s tests included 30 different brands, but according to CNN, P&G is the only company to issue a recall for its products containing benzene; specifically, the recall covers 17 types of Old Spice and Secret antiperspirant.

The full list of products Valisure tested and found to contain more than 2 parts per million of benzene can be found in the company’s petition to the FDA. Examples include products from other familiar brands like Tag, Sure, Equate, Suave, Right Guard, Brut, Summer’s Eve, Power Stick, Soft & Dri, and Victoria’s Secret.

If you have purchased any of the Old Spice or Secret products included in P&G’s recall, the company instructs consumers to stop using them, throw them out, and contact its customer care team (at 888-339-7689, Monday through Friday, 9 a.m. – 6 p.m. EST) to learn how to be reimbursed for eligible products.

Source: Clear These Recalled Antiperspirants From Your Home

Really stupid “smart contract” bug let hackers steal $31 million in digital coin

Blockchain startup MonoX Finance said on Wednesday that a hacker stole $31 million by exploiting a bug in software the service uses to draft smart contracts.

The company uses a decentralized finance protocol known as MonoX that lets users trade digital currency tokens without some of the requirements of traditional exchanges. “Project owners can list their tokens without the burden of capital requirements and focus on using funds for building the project instead of providing liquidity,” MonoX company representatives say here. “It works by grouping deposited tokens into a virtual pair with vCASH, to offer a single token pool design.”

An accounting error built into the company’s software let an attacker inflate the price of the MONO token and then use it to cash out all the other deposited tokens, MonoX Finance revealed in a post. The haul amounted to $31 million worth of tokens on the Ethereum or Polygon blockchains, both of which are supported by the MonoX protocol.

Specifically, the hack used the same token as both the tokenIn and tokenOut, which are methods for exchanging the value of one token for another. MonoX updates prices after each swap by calculating new prices for both tokens. When the swap is completed, the price of tokenIn—that is, the token sent by the user—decreases and the price of tokenOut—or the token received by the user—increases.

By using the same token for both tokenIn and tokenOut, the hacker greatly inflated the price of the MONO token because the updating of the tokenOut overwrote the price update of the tokenIn. The hacker then exchanged the token for $31 million worth of tokens on the Ethereum and Polygon blockchains.
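
A toy sketch of that accounting flaw (illustrative Python, not MonoX’s actual smart-contract code): because the tokenOut price update is written after the tokenIn update, swapping a token for itself only ever ratchets its price upward.

```python
# Toy model of the flawed per-swap price update described above.
prices = {"MONO": 1.0, "OTHER": 1.0}

def swap(token_in, token_out):
    new_in = prices[token_in] * 0.95    # price of the token sent in should fall
    new_out = prices[token_out] * 1.05  # price of the token received should rise
    prices[token_in] = new_in
    prices[token_out] = new_out  # if token_in == token_out, this overwrites the decrease

for _ in range(100):
    swap("MONO", "MONO")  # self-swap: MONO's price only ever goes up

print(round(prices["MONO"], 2))  # grossly inflated after repeated self-swaps
```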

There’s no practical reason for exchanging a token for the same token, and therefore the software that conducts trades should never have allowed such transactions. Alas, it did, despite MonoX receiving three security audits this year.

[…]

Blockchain researcher Igor Igamberdiev took to Twitter to break down the makeup of the drained tokens. Tokens included $18.2 million in Wrapped Ethereum, $10.5 million in MATIC tokens, and $2 million worth of WBTC. The haul also included smaller amounts of tokens for Wrapped Bitcoin, Chainlink, Unit Protocol, Aavegotchi, and Immutable X.

Only the latest DeFi hack

MonoX isn’t the only decentralized finance protocol to fall victim to a multimillion-dollar hack. In October, Indexed Finance said it lost about $16 million in a hack that exploited the way it rebalances index pools. Earlier this month, blockchain-analysis company Elliptic said so-called DeFi protocols have lost $12 billion to date due to theft and fraud. Losses in the first roughly 10 months of this year reached $10.5 billion, up from $1.5 billion in 2020.

[…]

Source: Really stupid “smart contract” bug let hackers steal $31 million in digital coin | Ars Technica

Elon Musk Email Warns of Potential SpaceX Bankruptcy

SpaceX employees received a nightmare email over the holiday weekend from CEO Elon Musk, warning them of a brewing crisis with its Raptor engine production that, if unsolved, could result in the company’s bankruptcy. The email, obtained by SpaceExplored, CNBC, and The Verge, urged employees to work over the weekend in a desperate attempt to increase production of the engine meant to power its next-generation Starship launch vehicle.

“Unfortunately, the Raptor production crisis is much worse than it seemed a few weeks ago,” Musk reportedly wrote. “As we have dug into the issues following exiting prior senior management, they have unfortunately turned out to be far more severe than was reported. There is no way to sugarcoat this.”

[…]

In his email, Musk advised workers to cut their holiday weekend short and called for an “all hands on deck to recover from what is, quite frankly, a disaster.” Summing up the problem, Musk warned the company could face bankruptcy if it could not get Starship flights running once every two weeks in 2022. If all of this sounds familiar, that’s because Musk has previously spoken publicly about times when both SpaceX and Tesla were on the verge of bankruptcy in their early years. More recently, Musk claimed Tesla came within “single digits” of bankruptcy as recently as 2018.

[…]

The alarming news comes near the close of what’s been an otherwise stellar year for SpaceX. In 11 months, SpaceX launched 25 successful Falcon 9 missions, sent a dozen astronauts to space, and drew a roadmap to mass commercialization with its Starlink satellite internet service.

You can read the full email over at The Verge.

Source: Elon Musk Email Warns of Potential SpaceX Bankruptcy

So the peons are taking the brunt and having to fix the failures of upper management – for free, probably.

Malware Attack Via Millions of Phishing Text Messages Spreads in Finland

Finland is working to stop a flood of text messages of an unknown origin that are spreading malware.

The messages with malicious links to malware called FluBot number in the millions, according to Aino-Maria Vayrynen, information security specialist at the National Cyber Security Centre. Telia Co AB, the country’s second-biggest telecommunications operator, has intercepted some hundreds of thousands of messages.

“The malware attack is extremely exceptional and very worrying,” Teemu Makela, chief information security officer at Elisa Oyj, the largest telecoms operator, said by phone. “Considerable numbers of text messages are flying around.”

The messages started beeping on Finns’ mobiles late last week, prompting the National Cyber Security Centre to issue a “severe alert.” The campaign is worse than a previous bout of activity in the summer, Antti Turunen, fraud manager at Telia, said.

Many of the messages claim that the recipient has received a voice mail, asking them to open a link. On Android devices, that brings up a prompt requesting the user to allow installation of an application that contains the malware; on Apple Inc.’s iPhones, users are taken to other fraudulent material on the website, authorities said.

[…]

Source: Malware Attack Via Millions of Text Messages Spreads in Finland – Bloomberg

Don’t click on linkbait!

150 HP multi-function printer types vulnerable to exploit

Tricking users into visiting a malicious webpage could allow attackers to compromise 150 models of HP multi-function printers, according to F-Secure researchers.

The Finland-headquartered infosec firm said it had found “exploitable” flaws in the HP printers that allowed attackers to “seize control of vulnerable devices, steal information, and further infiltrate networks in pursuit of other objectives such as stealing or changing other data” – and, inevitably, “spreading ransomware.”

“In all likelihood, a lot of companies are using these vulnerable devices,” said F-Secure researchers Alexander Bolshev and Timo Hirvonen.

“To make matters worse, many organizations don’t treat printers like other types of endpoints. That means IT and security teams forget about these devices’ basic security hygiene, such as installing updates.”

Tricking a user into visiting a malicious website could, so F-Secure said, result in what the infosec biz described as a “cross-site printing attack.”

The heart of the attack is the document printed from the malicious site: it contains a “maliciously crafted font” that gives the attacker code execution privileges on the multi-function printer.

[…]

The vulns were publicly disclosed a month ago. The font vulnerability is tracked as CVE-2021-39238 and is listed as affecting HP Enterprise LaserJet, LaserJet Managed, Enterprise PageWide, and PageWide Managed product lines. It is rated as 9.3 out of 10 on the CVSS 3.0 severity scale.

[…]

F-Secure advised putting MFPs inside a separate, firewalled VLAN as well as adding physical security controls including anti-tamper stickers and CCTV.

Updated firmware is available for download from HP, the company said in a statement.

[…]

Source: 150 HP multi-function printer types vulnerable to exploit • The Register

Tensorflow model zoo

A repository that shares tuning results of trained models generated with TensorFlow: post-training quantization (weight quantization, integer quantization, full integer quantization, float16 quantization) and quantization-aware training. The author also tries to convert the models to OpenVINO’s IR format where possible.

TensorFlow Lite, OpenVINO, CoreML, TensorFlow.js, TF-TRT, MediaPipe, ONNX [.tflite, .h5, .pb, saved_model, tfjs, tftrt, mlmodel, .xml/.bin, .onnx]

https://github.com/PINTO0309/PINTO_model_zoo
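
As an example of the kind of post-training quantization the repo catalogs, here is a minimal TensorFlow Lite float16 conversion sketch (the SavedModel path is a placeholder; this snippet is not taken from the repo itself):

```python
# Minimal post-training float16 quantization with TensorFlow Lite.
# "saved_model_dir" is a placeholder path to an exported SavedModel.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # float16 weight quantization

tflite_model = converter.convert()
with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```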

How do death rates from COVID-19 differ between people who are vaccinated and those who are not?

To understand how the pandemic is evolving, it’s crucial to know how death rates from COVID-19 are affected by vaccination status. The death rate is a key metric that can accurately show us how effective vaccines are against severe forms of the disease. This may change over time when there are changes in the prevalence of COVID-19, and because of factors such as waning immunity, new strains of the virus, and the use of boosters.

On this page, we explain why it is essential to look at death rates by vaccination status rather than the absolute number of deaths among vaccinated and unvaccinated people.

We also visualize this mortality data for the United States, England, and Chile.

Ideally we would produce a global dataset that compiles this data for countries around the world, but we do not have the capacity to do this in our team. As a minimum, we list country-specific sources where you can find similar data for other countries, and we describe how an ideal dataset would be formatted.

Why we need to compare the rates of death between vaccinated and unvaccinated

During a pandemic, you might see headlines like “Half of those who died from the virus were vaccinated”.

It would be wrong to draw any conclusions about whether the vaccines are protecting people from the virus based on this headline. The headline is not providing enough information to draw any conclusions.

Let’s think through an example to see this.

Imagine we live in a place with a population of 60 people.


Then we learn that of the 10 who died from the virus, 50% were vaccinated.


The newspaper may run the headline “Half of those who died from the virus were vaccinated”. But this headline does not tell us anything about whether the vaccine is protecting people or not.

To be able to say anything, we also need to know about those who did not die: how many people in this population were vaccinated? And how many were not vaccinated?


Now we have all the information we need and can calculate the death rates:

  • of 10 unvaccinated people, 5 died → the death rate among the unvaccinated is 50%
  • of 50 vaccinated people, 5 died → the death rate among the vaccinated is 10%

We therefore see that the death rate among the vaccinated is 5-times lower than among the unvaccinated.
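
The same arithmetic, spelled out in a couple of lines of Python (numbers taken straight from the example above):

```python
# Worked example from the text: equal death counts, very unequal death rates.
population = {"vaccinated": 50, "unvaccinated": 10}
deaths     = {"vaccinated": 5,  "unvaccinated": 5}

for group in population:
    rate = deaths[group] / population[group]
    print(f"{group}: {rate:.0%} death rate")  # vaccinated: 10%, unvaccinated: 50%
```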

In the example, we invented numbers to make it simple to calculate the death rates. But the same logic also applies in the current COVID-19 pandemic. Comparing absolute numbers, as some headlines do, is a mistake known in statistics as the ‘base rate fallacy’: it ignores the fact that one group is much larger than the other. It is important to avoid this mistake, especially now, as in more and more countries the number of people who are vaccinated against COVID-19 is much larger than the number of people who are unvaccinated (see our vaccination data).

This example illustrates how to think about these statistics in a hypothetical case. Below, you can find the real data for the situation in the COVID-19 pandemic now.

Data on COVID-19 mortality by vaccination status

[…]

Source: How do death rates from COVID-19 differ between people who are vaccinated and those who are not? – Our World in Data

Click on the link for some pretty clear visual graphs.

Replacement Motherboard Brings New Lease Of Life To Classic T60 / T61 Thinkpads

[…] Even the best hardware eventually becomes obsolete when it can no longer run modern software: with a 2.0 GHz Core Duo and 3 GB of RAM you can still browse the web and do word processing today, but you can forget about 4K video or a 64-bit OS. Luckily, there’s hope for those who are just not ready to part with their trusty Thinkpads: [Xue Yao] has designed a replacement motherboard that fits the T60/T61 range, bringing them firmly into the present day. The T700 motherboard is currently in its prototype phase, with series production expected to start in early 2022, funded through a crowdfunding campaign.

Designing a motherboard for a modern CPU is no mean feat, and making it fit an existing laptop, with all the odd shapes and less-than-standard connections, is even more impressive. The T700 has an Intel Core i7 CPU with four cores running at 2.8 GHz, while two RAM slots allow for up to 64 GB of DDR4-3200 memory. There are modern USB-A and USB-C ports, as well as a 6 Gbps SATA interface and two M.2 slots for your SSDs.

As for the display, the T700 motherboard will happily connect to the original screens built into the T60/T61, or to any of a range of aftermarket LED based replacements. A Thunderbolt connector is available, but only operates in USB-C mode due to firmware issues; according to the project page, full support for Thunderbolt 4 is expected once the open-source coreboot firmware has been ported to the T700 platform.

We love projects like this that extend the useful life of classic computers to keep them running way past their expected service life. But impressive though this is, it’s not the first time someone has made a replacement motherboard for the Thinkpad line; we covered a project from the nb51 forum back in 2018, which formed the basis for today’s project. We’ve seen lots of other useful Thinkpad hacks over the years, from replacing the display to revitalizing the batteries. Thanks to [René] for the tip.

Source: Replacement Motherboard Brings New Lease Of Life To Classic Thinkpads | Hackaday

Hitting the Books: How Amazon laundered the ‘myth of the founder’ into a business empire

We’ve heard the fable of “the self-made billionaire” a thousand times: some unrecognized genius toiling away in a suburban garage stumbles upon The Next Big Thing, thereby single-handedly revolutionizing their industry and becoming insanely rich in the process — all while comfortably ignoring the fact that they’d received $300,000 in seed funding from their already rich, politically-connected parents to do so.

In The Warehouse: Workers and Robots at Amazon, Alessandro Delfanti, associate professor at the University of Toronto and author of Biohackers: The Politics of Open Science, deftly examines the dichotomy between Amazon’s public personas and its union-busting, worker-surveilling behavior in fulfillment centers around the world — and how it leverages cutting edge technologies to keep its employees’ collective noses to the grindstone, pissing in water bottles. In the excerpt below, Delfanti examines the way in which our current batch of digital robber barons lean on the classic redemption myth to launder their images into that of wunderkinds deserving of unabashed praise.

[…]

Source: Hitting the Books: How Amazon laundered the ‘myth of the founder’ into a business empire | Engadget

3D-printed ‘living ink’ could lead to self-repairing buildings

Phys.org reports scientists have developed a “living ink” you could use to print equally alive materials usable for creating 3D structures. The team genetically engineered cells of E. coli and other microbes to create living nanofibers, bundled those fibers, and added other materials to produce an ink you could use in a standard 3D printer.

Researchers have tried producing living material before, but it has been difficult to get those substances to fit intended 3D structures. That wasn’t an issue here. The scientists created one material that released an anti-cancer drug when induced with chemicals, while another removed the toxin BPA from the environment. The designs can be tailored to other tasks, too.

Any practical uses could still be some ways off. It’s not yet clear how you’d mass-produce the ink, for example. However, there’s potential beyond the immediate medical and anti-pollution efforts. The creators envisioned buildings that repair themselves, or self-assembling materials for Moon and Mars buildings that could reduce the need for resources from Earth. The ink could even manufacture itself in the right circumstances — you might not need much more than a few basic resources to produce whatever you need.

Source: 3D-printed ‘living ink’ could lead to self-repairing buildings | Engadget

Testing social scientists with replication studies shows them capable of changing their beliefs

A team of researchers from the University of Alabama, the University of Melbourne and the University of California has found that social scientists are able to change their beliefs regarding the outcome of an experiment when given the chance. In a paper published in the journal Nature Human Behavior, the group describes how they tested the ability of scientists to change their beliefs about a scientific idea when shown evidence of replicability. Michael Gordon and Thomas Pfeifer with Massey University have published a News & Views piece in the same journal issue explaining why scientists must be able to update their beliefs.

The researchers set out to study a conundrum in science. It is generally accepted that scientific progress can only be made if scientists update their beliefs when new ideas come along. The conundrum is that scientists are human beings and human beings are notoriously difficult to sway from their beliefs. To find out if this might be a problem in general science endeavors, the researchers created an environment that allowed for testing the possibility.

The work involved sending out questionnaires to 1,100 social scientists, asking them how they felt about the outcome of several recent well-known studies. They then conducted replication efforts on those same studies to determine if they could reproduce the findings of the original researchers. They then sent the results of their replication efforts to the social scientists who had been queried prior to their effort, and once again asked them how they felt about the results of the original team.

In looking at their data, and factoring out related biases, they found that most of the scientists who participated lost some confidence in the results of studies when the researchers could not replicate results, and gained some confidence in them when they could. The researchers suggest that this indicates that scientists, at least those in social fields, are able to rise above their beliefs when faced with new evidence, ensuring that science is indeed allowed to progress, despite it being conducted by fallible human beings.

Source: Testing social scientists with replication studies shows them capable of changing their beliefs

Breakthrough Creates Laser In Silicon

Long sought after, and previously thought impossible — a McMaster University PhD student in Hamilton, Canada demonstrates a cost-effective and simple laser in silicon.

This could have dramatic consequences for SiP (silicon photonics) — a hot topic for those working in the field of integrated optics. Integrated optics is a critical technology in advanced telecommunications networks, and one of increasing importance in quantum research and devices, such as QKD (quantum key distribution) and various entanglement-type experiments (used in quantum computing).

“This is the holy grail of photonics,” says Jonathan Bradley, an assistant professor in the Department of Engineering Physics (and the student’s co-supervisor) in an announcement from McMaster University. “Fabricating a laser on silicon has been a longstanding challenge.” Bradley notes that Miarabbas Kiani’s achievement is remarkable not only for demonstrating a working laser on a silicon chip, but also for doing so in a simple, cost-effective way that’s compatible with existing global manufacturing facilities. This compatibility is essential, as it allows for volume manufacturing at low cost. “If it costs too much, you can’t mass produce it,” says Bradley.

Source: Breakthrough By McMaster PhD Student Creates Laser In Silicon – Slashdot

Researchers Defeat Randomness to Create Perfect Local Testability for Information

Suppose you are trying to transmit a message. Convert each character into bits, and each bit into a signal. Then send it, over copper or fiber or air. Try as you might to be as careful as possible, what is received on the other side will not be the same as what you began with. Noise never fails to corrupt.

In the 1940s, computer scientists first confronted the unavoidable problem of noise. Five decades later, they came up with an elegant approach to sidestepping it: What if you could encode a message so that it would be obvious if it had been garbled before your recipient even read it? A book can’t be judged by its cover, but this message could.

They called this property local testability, because such a message can be tested super-fast in just a few spots to ascertain its correctness. Over the next 30 years, researchers made substantial progress toward creating such a test, but their efforts always fell short. Many thought local testability would never be achieved in its ideal form.

Now, in a preprint released on November 8, the computer scientist Irit Dinur of the Weizmann Institute of Science and four mathematicians, Shai Evra, Ron Livne, Alex Lubotzky and Shahar Mozes, all at the Hebrew University of Jerusalem, have found it.

[…]

Their new technique transforms a message into a super-canary, an object that testifies to its health better than any other message yet known. Any corruption of significance that is buried anywhere in its superstructure becomes apparent from simple tests at a few spots.

“This is not something that seems plausible,” said Madhu Sudan of Harvard University. “This result suddenly says you can do it.”

[…]

To work well, a code must have several properties. First, the codewords in it should not be too similar: If a code contained the codewords 0000 and 0001, it would only take one bit-flip’s worth of noise to confuse the two words. Second, codewords should not be too long. Repeating bits may make a message more durable, but they also make it take longer to send.

These two properties are called distance and rate. A good code should have both a large distance (between distinct codewords) and a high rate (of transmitting real information).
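
As a toy illustration of those two properties (the codewords below are made up for illustration, not taken from the article): distance is the smallest number of differing bits between any two codewords, and rate is how many information bits each transmitted bit carries.

```python
# Toy code: 4 codewords of length 7, so it encodes log2(4) = 2 information bits.
import math
from itertools import combinations

codewords = ["0000000", "1110000", "0001111", "1111111"]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

distance = min(hamming(a, b) for a, b in combinations(codewords, 2))
rate = math.log2(len(codewords)) / len(codewords[0])  # information bits per sent bit

print(distance, round(rate, 2))  # distance 3, rate ~0.29
```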

[…]

To understand why testability is so hard to obtain, we need to think of a message not just as a string of bits, but as a mathematical graph: a collection of vertices (dots) connected by edges (lines).

[…]

Hamming’s work set the stage for the ubiquitous error-correcting codes of the 1980s. He came up with a rule that each message should be paired with a set of receipts, which keep an account of its bits. More specifically, each receipt is the sum of a carefully chosen subset of bits from the message. When this sum has an even value, the receipt is marked 0, and when it has an odd value, the receipt is marked 1. Each receipt is represented by one single bit, in other words, which researchers call a parity check or parity bit.

Hamming specified a procedure for appending the receipts to a message. A recipient could then detect errors by attempting to reproduce the receipts, calculating the sums for themselves. These Hamming codes work remarkably well, and they are the starting point for seeing codes as graphs and graphs as codes.
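
A small sketch of the “receipts” idea (the bit subsets below are chosen arbitrarily for illustration; this is not Hamming’s exact construction):

```python
# Each receipt is the parity (sum mod 2) of a chosen subset of message bit positions.
message = [1, 0, 1, 1, 0, 1, 0]
receipt_subsets = [(0, 1, 3), (1, 2, 4), (3, 4, 5, 6)]  # illustrative choices only

def receipts(bits, subsets):
    return [sum(bits[i] for i in subset) % 2 for subset in subsets]

sent = receipts(message, receipt_subsets)

received = list(message)
received[2] ^= 1  # simulate one bit flipped by channel noise
print(receipts(received, receipt_subsets) != sent)  # True: the mismatch reveals corruption
```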

[…]

Expander graphs are distinguished by two properties that can seem contradictory. First, they are sparse: Each node is connected to relatively few other nodes. Second, they have a property called expandedness — the reason for their name — which means that no set of nodes can be bottlenecks that few edges pass through. Each node is well connected to other nodes, in other words — despite the scarcity of the connections it has.

[…]

However, choosing codewords completely at random would make for an unpredictable dictionary that was excessively hard to sort through. In other words, Shannon showed that good codes exist, but his method for making them didn’t work well.

[…]

However, local testability was not possible. Suppose that you had a valid codeword from an expander code, and you removed one receipt, or parity bit, from one single node. That would constitute a new code, which would have many more valid codewords than the first code, since there would be one less receipt they needed to satisfy. For someone working off the original code, those new codewords would satisfy the receipts at most nodes — all of them, except the one where the receipt was erased. And yet, because both codes have a large distance, the new codeword that seems correct would be extremely far from the original set of codewords. Local testability was simply incompatible with expander codes.

[…]

Local testability was achieved by 2007, but only at the cost of other parameters, like rate and distance. In particular, these parameters would degrade as a codeword became large. In a world constantly seeking to send and store larger messages, these diminishing returns were a major flaw.

[…]

But in 2017, a new source of ideas emerged. Dinur and Lubotzky began working together while attending a yearlong research program at the Israel Institute for Advanced Studies. They came to believe that a 1973 result by the mathematician Howard Garland might hold just what computer scientists sought. Whereas ordinary expander graphs are essentially one-dimensional structures, with each edge extending in only one direction, Garland had created a mathematical object that could be interpreted as an expander graph that spanned higher dimensions, with, for example, the graph’s edges redefined as squares or cubes.

Garland’s high-dimensional expander graphs had properties that seemed ideal for local testability. They must be deliberately constructed from scratch, making them a natural antithesis of randomness. And their nodes are so interconnected that their local characteristics become virtually indistinguishable from how they look globally.

[…]

In their new work, the authors figured out how to assemble expander graphs to create a new graph that leads to the optimal form of locally testable code. They call their graph a left-right Cayley complex.

As in Garland’s work, the building blocks of their graph are no longer one-dimensional edges, but two-dimensional squares. Each information bit from a codeword is assigned to a square, and parity bits (or receipts) are assigned to edges and corners (which are nodes). Each node therefore defines the values of bits (or squares) that can be connected to it.

To get a sense of what their graph looks like, imagine observing it from the inside, standing on a single edge. They construct their graph such that every edge has a fixed number of squares attached. Therefore, from your vantage point you’d feel as if you were looking out from the spine of a booklet. However, from the other three sides of the booklet’s pages, you’d see the spines of new booklets branching from them as well. Booklets would keep branching out from each edge ad infinitum.

“It’s impossible to visualize. That’s the whole point,” said Lubotzky. “That’s why it is so sophisticated.”

Crucially, the complicated graph also shares the properties of an expander graph, like sparseness and connectedness, but with a much richer local structure. For example, an observer sitting at one vertex of a high-dimensional expander could use this structure to straightforwardly infer that the entire graph is strongly connected.

“What’s the opposite of randomness? It’s structure,” said Evra. “The key to local testability is structure.”

To see how this graph leads to a locally testable code, consider that in an expander code, if a bit (which is an edge) is in error, that error can only be detected by checking the receipts at its immediately neighboring nodes. But in a left-right Cayley complex, if a bit (a square) is in error, that error is visible from multiple different nodes, including some that are not even connected to each other by an edge.

In this way, a test at one node can reveal information about errors from far away nodes. By making use of higher dimensions, the graph is ultimately connected in ways that go beyond what we typically even think of as connections.

In addition to testability, the new code maintains rate, distance and other desired properties, even as codewords scale, proving the c³ conjecture true. It establishes a new state of the art for error-correcting codes, and it also marks the first substantial payoff from bringing the mathematics of high-dimensional expanders to bear on codes.

[…]

 

Source: Researchers Defeat Randomness to Create Ideal Code | Quanta Magazine

The UK Just Banned Default Passwords and We Should Too

UK lawmakers are sick and tired of shitty internet of things passwords and are whipping out legislation with steep penalties and bans to prove it. The new legislation, introduced to the UK Parliament this week, would ban universal default passwords and work to create what supporters are calling a “firewall around everyday tech.”

Specifically, the bill, called The Product Security and Telecommunications Infrastructure Bill (PSTI), would require unique passwords for internet-connected devices and would prevent those passwords from being reset to universal factory defaults. The bill would also force companies to increase transparency around when their products require security updates and patches, a practice only 20% of firms currently engage in, according to a statement accompanying the bill.

These bolstered security proposals would be overseen by a regulator with sharpened teeth: companies refusing to comply with the security standards could reportedly face fines of £10 million or four percent of their global revenues.

[…]

Source: The UK Just Banned Default Passwords and We Should Too

Also interesting: The Worst Passwords in the Last Decade (And New Ones You Shouldn’t Use)

Rolls-Royce flies electric aircraft, making it the fastest EV in the world

We believe our all-electric ‘Spirit of Innovation’ aircraft is the world’s fastest all-electric aircraft, setting three new world records. We have submitted data to the Fédération Aéronautique Internationale (FAI) – the World Air Sports Federation who control and certify world aeronautical and astronautical records – that at 15:45 (GMT) on 16 November 2021, the aircraft reached a top speed of 555.9 km/h (345.4 mph) over 3 kilometres, smashing the existing record by 213.04 km/h (132mph). In further runs at the UK Ministry of Defence’s Boscombe Down experimental aircraft testing site, the aircraft achieved 532.1km/h (330 mph) over 15 kilometres – 292.8km/h (182mph) faster than the previous record – and broke the fastest time to climb to 3000 metres by 60 seconds with a time of 202 seconds, according to our data. We hope that the FAI will certify and officially confirm the achievements of the team in the near future.

During its record-breaking runs, the aircraft clocked up a maximum speed of 623 km/h (387.4 mph) which we believe makes the ‘Spirit of Innovation’ the world’s fastest all-electric vehicle.

[…]

WhatsApp privacy policy tweaked in Europe after record fine

Following an investigation, the Irish data protection watchdog issued a €225m (£190m) fine – the second-largest ever issued under GDPR – and ordered WhatsApp to change its policies.

WhatsApp is appealing against the fine, but is amending its policy documents in Europe and the UK to comply.

However, it insists that nothing about its actual service is changing.

Instead, the tweaks are designed to “add additional detail around our existing practices”, and will only appear in the European version of the privacy policy, which is already different from the version that applies in the rest of the world.

“There are no changes to our processes or contractual agreements with users, and users will not be required to agree to anything or to take any action in order to continue using WhatsApp,” the company said, announcing the change.

The new policy takes effect immediately.

User revolt

In January, WhatsApp users complained about an update to the company’s terms that many believed would result in data being shared with parent company Facebook, which is now called Meta.

Many thought refusing to agree to the new terms and conditions would result in their accounts being blocked.

In reality, very little had changed. However, WhatsApp was forced to delay its changes and spend months fighting the public perception to the contrary.

During the confusion, millions of users downloaded WhatsApp competitors such as Signal.

[…]

The new privacy policy contains substantially more information about what exactly is done with users’ information, and how WhatsApp works with Meta, the parent company for WhatsApp, Facebook and Instagram.

Source: WhatsApp privacy policy tweaked in Europe after record fine – BBC News

Short of Suspension: How Suspension Warnings Can Reduce Hate Speech on Twitter

Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.

[…]

we test whether warning users that they risk suspension if they continue using hateful language can reduce online hate speech. To do so, we implemented a pre-registered experiment on Twitter to test the ability of “warning messages” about the possibility of future suspension to reduce hateful language online. More specifically, we identify users who are candidates for suspension in the future based on their prior tweets and download their follower lists before the suspension takes place. After a user gets suspended, we randomly assign some of their followers who have also used hateful language to receive a warning that they, too, may be suspended for the same reason.
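As a rough illustration of that assignment step, here is a minimal sketch of randomly splitting a suspended account’s followers into warning arms and a control group (the function names and the hate-speech predicate are hypothetical placeholders, not the authors’ actual pipeline):

```python
import random

def assign_followers(followers, used_hateful_language, arms, seed=0):
    """Randomly assign eligible followers of a suspended account to experimental arms.

    followers: user IDs collected before the suspension took place.
    used_hateful_language: hypothetical predicate standing in for a hate-speech classifier.
    arms: list of arm labels, e.g. ["control", "legitimacy", "credibility", "costliness"].
    """
    rng = random.Random(seed)
    eligible = [u for u in followers if used_hateful_language(u)]
    rng.shuffle(eligible)
    assignment = {arm: [] for arm in arms}
    for i, user in enumerate(eligible):
        assignment[arms[i % len(arms)]].append(user)  # round-robin over the shuffled list
    return assignment
```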

Since our tweets aim to deter users from using hateful language, we design them around the three mechanisms that the deterrence literature deems most effective in reducing deviant behavior: costliness, legitimacy, and credibility. In other words, our experiment allows us to manipulate the degree to which users perceive their suspension as costly, legitimate, and credible.
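One way to picture how those mechanisms translate into concrete tweets is a small lookup of templates keyed by mechanism; the wording below is purely illustrative and is not the treatment text used in the study:

```python
# Illustrative warning templates keyed by deterrence mechanism.
# The study's actual treatment wording is described in the paper and its appendix.
WARNING_TEMPLATES = {
    "costliness":  ("The user @{suspended} you follow was suspended for hateful language. "
                    "Suspension would mean losing your followers and your tweet history."),
    "legitimacy":  ("We study online speech. The user @{suspended} you follow was suspended "
                    "for hateful language; similar tweets from your account carry the same risk."),
    "credibility": ("The user @{suspended} you follow was suspended for hateful language. "
                    "Accounts that keep posting such tweets are being suspended for the same reason."),
}

def build_warning(mechanism: str, suspended_handle: str) -> str:
    return WARNING_TEMPLATES[mechanism].format(suspended=suspended_handle)

print(build_warning("legitimacy", "example_account"))
```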

[…]

Our study provides causal evidence that the act of sending a warning message to a user can significantly decrease their use of hateful language as measured by their ratio of hateful tweets over their total number of tweets. Although we do not find strong evidence that distinguishes between warnings that are high versus low in legitimacy, credibility, or costliness, the high legitimacy messages seem to be the most effective of all the messages tested.

[…]

The coefficient plot in figure 4 shows the effect of sending any type of warning tweet on the ratio of a user’s hateful tweets to their total tweets. The outcome variable is the ratio of hateful tweets to the total number of tweets that a user posted over the week and the month following the treatment. The effects thus show the change in this ratio as a result of the treatment.

Figure 4 The effect of sending a warning tweet on reducing hateful language

Note: See table G1 in online appendix G for more details on sample size and control coefficients.

We find support for our first hypothesis: a tweet that warns a user of a potential suspension will lead that user to decrease their ratio of hateful tweets by 0.007 for a week after the treatment. Considering the fact that the average pre-treatment hateful tweet ratio is 0.07 in our sample, this means that a single warning tweet from a user with 100 followers reduced the use of hateful language by 10%.
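That headline figure follows directly from the two numbers quoted above; a minimal worked check (the 0.07 baseline and the −0.007 estimate are taken from the study, the code itself is only illustrative):

```python
# Outcome variable: hateful tweets divided by total tweets in the week after treatment.
def hateful_ratio(hateful_count: int, total_count: int) -> float:
    return hateful_count / total_count if total_count else 0.0

pre_treatment_ratio = 0.07   # average pre-treatment hateful-tweet ratio in the sample
treatment_effect = -0.007    # estimated change in that ratio after a single warning tweet

relative_reduction = abs(treatment_effect) / pre_treatment_ratio
print(f"relative reduction: {relative_reduction:.0%}")  # 10%
```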

[…]

The coefficient plot in figure 5 shows the effect of each treatment on the ratio of a user’s hateful tweets to their total tweets. Although the differences across types are minor and thus caveats are warranted, the most effective treatment seems to be the high-legitimacy tweet; the legitimacy category also has by far the largest difference between the high- and low-level versions of the three categories of treatment we assessed. Interestingly, the tweets emphasizing the cost of being suspended appear to be the least effective of the three categories; although the effects are in the predicted direction, neither of the cost treatments alone is statistically distinguishable from a null effect.

Figure 5 Reduction in hate speech by treatment type

Note: See table G2 in online appendix G for more details on sample size and control coefficients.

An alternative mechanism that could explain the similarity of effects across treatments—as well as the costliness channel apparently being the least effective—is that perhaps instead of deterring people, the warnings might have made them more reflective and attentive about their language use.

[…]

Our results show that a single warning tweet sent by an account with no more than 100 followers can decrease the ratio of tweets with hateful language by up to 10%, with some types of tweets (high legitimacy, emphasizing the legitimacy of the account sending the warning) suggesting decreases of perhaps as high as 15%–20% in the week following treatment. Considering that we sent our tweets from accounts with no more than 100 followers, the effects we report here are conservative estimates, and the warnings could be more effective when sent from more popular accounts (Munger 2017).

[…]

A recently burgeoning literature shows that online interventions can also decrease behaviors that harm other groups, tracking subjects’ behavior on social media to measure the effect. These works rely on online messages on Twitter that sanction the harmful behavior and succeed in reducing hateful language (Munger 2017; Siegel and Badaan 2020), mostly drawing on identity politics when designing their sanctioning messages (Charnysh et al. 2015). We contribute to this recent line of research by showing that warning messages designed on the basis of the deterrence literature can lead to a meaningful decrease in the use of hateful language without leveraging identity dynamics.

[…]

Two options are worthy of discussion: relying on civil society or relying on Twitter. Our experiment was designed to mimic the former option, with our warnings resembling those that non-Twitter actors, acting on their own, might send with the goal of reducing hate speech and protecting users from being suspended.

[…]

While it is certainly possible that an NGO or a similar entity could try to implement such a program, the more obvious solution would be to have Twitter itself implement the warnings.

[…]

the company reported “testing prompts in 2020 that encouraged people to pause and reconsider a potentially harmful or offensive reply – such as insults, strong language, or hateful remarks – before Tweeting it. Once prompted, people had an opportunity to take a moment and make edits, delete, or send the reply as is.” This appears to have resulted in 34% of those prompted electing either to review the Tweet before sending it or not to send it at all.

We note three differences from this endeavor. First, our warnings try to reduce people’s hateful language after they have used it, which is not the same thing as warning people before they use hateful language. This is a noteworthy difference, and whether the dynamics of retrospective versus prospective warnings significantly differ is a topic for future research. Second, Twitter does not show its users examples of suspensions that have taken place among the accounts those users followed. Finally, we are making our data publicly available for re-analysis.

We stop short, however, of unambiguously recommending that Twitter simply implement the system we tested without further study, because of two important caveats. First, one interesting feature of our findings is that across all of our tests (one week versus four weeks, and the different versions of the warning – figures 2 in the text and A1 in the online appendix) we never once get a positive effect for hate-speech usage in the treatment group, let alone a statistically significant positive coefficient, which would have suggested a backlash effect whereby the warnings led people to become more hateful. We are reassured by this finding, but we think it is an open question whether a warning from Twitter – a large, powerful corporation and the owner of the platform – might provoke a different reaction. We obviously could not test for this possibility on our own, and thus we would urge Twitter to conduct its own testing to confirm that our finding about the lack of a backlash continues to hold when the message comes from the platform itself.

The second caveat concerns the possibility of Twitter making mistakes when implementing its suspension policies.

[…]

Despite these caveats, our findings suggest that hate-speech moderation can be effective without priming the salience of the target user’s identity. Explicitly testing the effectiveness of identity-based versus non-identity-based interventions will be an important subject for future research.

Source: Short of Suspension: How Suspension Warnings Can Reduce Hate Speech on Twitter | Perspectives on Politics | Cambridge Core

GoDaddy Managed WordPress compromised, 1.2m peoples data exposed – sftp, ssl keys, admin passwords, etc

GoDaddy has admitted to America’s financial watchdog that one or more miscreants broke into its systems and potentially accessed a huge amount of customer data, from email addresses to SSL private keys.

In a filing on Monday to the SEC, the internet giant said that on November 17 it discovered an “unauthorized third-party” had been roaming around part of its Managed WordPress service, which essentially stores and hosts people’s websites.

[…]

Those infosec sleuths, we’re told, found evidence that an intruder had been inside part of GoDaddy’s website provisioning system, described by Comes as a “legacy code base,” since September 6, gaining access using a “compromised password.”

The miscreant was able to view up to 1.2 million customer email addresses and customer ID numbers, and the administrative passwords generated for WordPress instances when they were provisioned. Any such passwords unchanged since the break-in have been reset.

According to GoDaddy, the sFTP and database usernames and passwords of active user accounts were accessible, too, and these have been reset as well.

“For a subset of active customers, the SSL private key was exposed,” Comes added. “We are in the process of issuing and installing new certificates for those customers.” GoDaddy has not responded to a request for further details and the exact number of users affected.
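For customers wondering whether their site’s certificate still predates the disclosed intrusion window (access from 6 September 2021, discovered on 17 November 2021), here is a minimal sketch using only the Python standard library; the hostname is a placeholder, and an issuance date alone cannot tell you whether the private key was actually among those exposed:

```python
import socket
import ssl

HOST = "example-managed-wordpress-site.com"  # placeholder: your own hostname
DISCOVERY_DATE = ssl.cert_time_to_seconds("Nov 17 00:00:00 2021 GMT")

# Fetch the certificate the site is currently serving.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

issued = ssl.cert_time_to_seconds(cert["notBefore"])
if issued < DISCOVERY_DATE:
    print("Certificate predates the breach disclosure; its key may have been exposed, "
          "so check whether GoDaddy has reissued it.")
else:
    print("Certificate was issued after the breach was discovered, "
          "so it has most likely already been rotated.")
```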

[…]

Source: GoDaddy Managed WordPress compromised, user data exposed • The Register