Inheritance, “cronyism and corruption” or monopoly power grows billionaire wealth in 2024 in second-largest annual increase since records began

The wealth of the world’s billionaires grew by $2tn (£1.64tn) last year, three times faster than in 2023, amounting to $5.7bn (£4.7bn) a day, according to a report by Oxfam.

The latest inequality report from the charity reveals that the world is now on track to have five trillionaires within a decade, a change from last year’s forecast of one trillionaire within 10 years.

[…]

At the same time, the number of people living under the World Bank poverty line of $6.85 a day has barely changed since 1990, and is close to 3.6 billion – equivalent to 44% of the world’s population today, the charity said. One in 10 women lives in extreme poverty (below $2.15 a day), which means 24.3 million more women than men endure extreme poverty.

Oxfam warned that progress on reducing poverty has ground to a halt and that extreme poverty could be ended three times faster if inequality were to be reduced.

[…]

Rising share values on global stock exchanges account for most of the increase in billionaire wealth, though higher property values also played a role. Residential property accounts for about 80% of worldwide investments.

Globally, the number of billionaires rose by 204 last year to 2,769. Their combined wealth jumped from $13tn to $15tn in just 12 months – the second-largest annual increase since records began. The wealth of the world’s 10 richest men grew on average by almost $100m a day and even if they lost 99% of their wealth overnight, they would remain billionaires.

[…]

The report argues that most of the wealth is taken, not earned, as 60% comes from either inheritance, “cronyism and corruption” or monopoly power. It calculates that 18% of the wealth arises from monopoly power.

[…]

Anna Marriott, Oxfam’s inequality policy lead, said: “Last year we predicted the first trillionaire could emerge within a decade, but this shocking acceleration of wealth means that the world is now on course for at least five. The global economic system is broken, wholly unfit for purpose as it enables and perpetuates this explosion of riches, while nearly half of humanity continues to live in poverty.”

She called on the UK government to prioritise economic policies that bring down inequality, including higher taxation of the super-rich.

[…]

Source: Wealth of world’s billionaires grew by $2tn in 2024, report finds | The super-rich | The Guardian

Bluesky 2024 Moderation Report shows 17x more user content reports with 10x user growth, fed by Brazilian serial complainers

[…] In 2024, Bluesky grew from 2.89M users to 25.94M users. In addition to users hosted on Bluesky’s infrastructure, there are over 4,000 users running their own infrastructure (Personal Data Servers), self-hosting their content, posts, and data.

To meet the demands caused by user growth, we’ve increased our moderation team to roughly 100 moderators and continue to hire more staff. Some moderators specialize in particular policy areas, such as dedicated agents for child safety.

[…]

In 2024, users submitted 6.48M reports to Bluesky’s moderation service. That’s a 17x increase from the previous year — in 2023, users submitted 358K reports total. The volume of user reports increased with user growth and was non-linear, as the graph of report volume below shows:

[Graph: Report volume in 2024]
In late August, there was a large increase in user growth for Bluesky from Brazil, and we saw spikes of up to 50k reports per day. Prior to this, our moderation team handled most reports within 40 minutes. For the first time in 2024, we had a backlog of moderation reports. To address this, we increased the size of our Portuguese-language moderation team, added constant moderation sweeps and automated tooling for high-risk areas such as child safety, and hired moderators through an external contracting vendor for the first time.

We already had automated spam detection in place, and after this wave of growth in Brazil, we began investing in automating more categories of reports so that our moderation team would be able to review suspicious or problematic content rapidly. In December, we were able to review our first wave of automated reports for content categories like impersonation. This dropped processing time for high-certainty accounts to within seconds of receiving a report, though it also caused some false positives. We’re now exploring the expansion of this tooling to other policy areas. Even while instituting automation tooling to reduce our response time, human moderators are still kept in the loop — all appeals and false positives are reviewed by human moderators.
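Bluesky does not describe the internals of this tooling, but the pattern it sketches — high-certainty automated reports actioned within seconds, everything else and all appeals routed to human moderators — is a familiar one. Below is a minimal, purely hypothetical illustration; the classifier confidence score, the threshold, and all names are assumptions, not Bluesky's actual implementation.

```python
# Hypothetical sketch of confidence-gated moderation triage with humans in the loop.
# The names, threshold, and confidence score are illustrative assumptions only.
from dataclasses import dataclass, field

AUTO_ACTION_THRESHOLD = 0.98  # assumed cut-off for "high-certainty" automated reports

@dataclass
class AutomatedReport:
    subject: str       # e.g. an account handle or post URI
    category: str      # e.g. "impersonation" or "spam"
    confidence: float  # score from an automated classifier (assumed)

@dataclass
class ModerationQueues:
    auto_actions: list = field(default_factory=list)  # actioned within seconds
    human_review: list = field(default_factory=list)  # reviewed by moderators

def triage(report: AutomatedReport, queues: ModerationQueues) -> None:
    """Auto-action high-certainty reports; everything else goes to a moderator."""
    if report.confidence >= AUTO_ACTION_THRESHOLD:
        queues.auto_actions.append(report)
    else:
        queues.human_review.append(report)

def appeal(report: AutomatedReport, queues: ModerationQueues) -> None:
    """Appeals and suspected false positives are always reviewed by a human."""
    queues.human_review.append(report)
```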

Some more statistics: The proportion of users submitting reports held fairly stable from 2023 to 2024. In 2023, 5.6% of our active users created one or more reports. In 2024, 1.19M users made one or more reports, approximately 4.57% of our user base.

In 2023, 3.4% of our active users received one or more reports. In 2024, the number of users who received a report was 770K, comprising 2.97% of our user base.

The majority of reports were of individual posts, with a total of 3.5M reports. This was followed by account profiles at 47K reports, typically for a violative profile picture or banner photo. Lists received 45K reports. DMs received 17.7K reports. Significantly lower are feeds at 5.3K reports, and starter packs with 1.9K reports.

Our users report content for a variety of reasons, and these reports help guide our focus areas. Below is a summary of the reports we received, categorized by the reasons users selected. The categories vary slightly depending on whether a report is about an account or a specific post, but here’s the full breakdown:

  • Anti-social Behavior: Reports of harassment, trolling, or intolerance – 1.75M
  • Misleading Content: Includes impersonation, misinformation, or false claims about identity or affiliations – 1.20M
  • Spam: Excessive mentions, replies, or repetitive content – 1.40M
  • Unwanted Sexual Content: Nudity or adult content not properly labeled – 630K
  • Illegal or Urgent Issues: Clear violations of the law or our terms of service – 933K
  • Other: Issues that don’t fit into the above categories – 726K

[…]

The top human-applied labels were:

  • Sexual-figurative – 55,422
  • Rude – 22,412
  • Spam – 13,201
  • Intolerant – 11,341
  • Threat – 3,046

Appeals

In 2024, 93,076 users submitted at least one appeal in the app, for a total of 205K individual appeals. In most cases, the appeal was due to disagreement with label verdicts.

[…]

Legal Requests

In 2024, we received 238 requests from law enforcement, governments, and legal firms; we responded to 182 and complied with 146. The majority of requests came from German, U.S., Brazilian, and Japanese law enforcement.

[…]

Copyright / Trademark

In 2024, we received a total of 937 copyright and trademark cases. There were four confirmed copyright cases in the entire first half of 2024, and this number increased to 160 in September. The vast majority of cases occurred between September to December.

[…]

Source: Bluesky 2024 Moderation Report – Bluesky

The following lines are especially interesting: Brazilians seem to be the type of people who really enjoy reporting on others, and not only that, they also like to assault or brigade specific users.

In late August, there was a large increase in user growth for Bluesky from Brazil, and we saw spikes of up to 50k reports per day.

In 2023, 5.6% of our active users created one or more reports. In 2024, 1.19M users made one or more reports, approximately 4.57% of our user base.

In 2023, 3.4% of our active users received one or more reports. In 2024, the number of users who received a report were 770K, comprising 2.97% of our user base.
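Putting the quoted figures side by side makes the disproportionate growth concrete. A rough back-of-the-envelope calculation, using only the numbers cited above (year-end totals as denominators, and 2023's "5.6% of active users" approximated against the 2.89M year-end count, so treat these as rough ratios, not official Bluesky statistics):

```python
# Back-of-the-envelope ratios from the figures quoted above.
# Caveat: year-end totals are used as denominators, and 2023's reporter count
# is approximated as 5.6% of the 2.89M year-end user count.
users     = {"2023": 2_890_000, "2024": 25_940_000}
reports   = {"2023": 358_000, "2024": 6_480_000}
reporters = {"2023": 0.056 * users["2023"], "2024": 1_190_000}

for year in ("2023", "2024"):
    print(f"{year}: {reports[year] / users[year]:.2f} reports per user, "
          f"{reports[year] / reporters[year]:.1f} reports per reporting user")

# 2023: ~0.12 reports per user, ~2.2 reports per reporting user
# 2024: ~0.25 reports per user, ~5.4 reports per reporting user
```

In other words, reports per user roughly doubled on top of the roughly ninefold user growth, and the average reporting user filed more than twice as many reports as the year before.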

ChatGPT crawler flaw opens door to DDoS, prompt injection

In a write-up shared this month via Microsoft’s GitHub, Benjamin Flesch, a security researcher in Germany, explains how a single HTTP request to the ChatGPT API can be used to flood a targeted website with network requests from the ChatGPT crawler, specifically ChatGPT-User.

This flood of connections may or may not be enough to knock over any given site, practically speaking, though it’s still arguably a danger and a bit of an oversight by OpenAI. It can be used to amplify a single API request into 20 to 5,000 or more requests to a chosen victim’s website, every second, over and over again.

“ChatGPT API exhibits a severe quality defect when handling HTTP POST requests to https://chatgpt.com/backend-api/attributions,” Flesch explains in his advisory, referring to an API endpoint called by OpenAI’s ChatGPT to return information about web sources cited in the chatbot’s output. When ChatGPT mentions specific websites, it will call attributions with a list of URLs to those sites for its crawler to go access and fetch information about.

If you throw a big long list of URLs at the API, each slightly different but all pointing to the same site, the crawler will go off and hit every one of them at once.

[…]

Thus, using a tool like Curl, an attacker can send an HTTP POST request – without any need for an authentication token – to that ChatGPT endpoint and OpenAI’s servers in Microsoft Azure will respond by initiating an HTTP request for each hyperlink submitted via the urls[] parameter in the request. When those requests are directed to the same website, they can potentially overwhelm the target, causing DDoS symptoms – the crawler, proxied by Cloudflare, will visit the targeted site from a different IP address each time.

[…]

“I’d say the bigger story is that this API was also vulnerable to prompt injection,” he said, in reference to a separate vulnerability disclosure. “Why would they have prompt injection for such a simple task? I think it might be because they’re dogfooding their autonomous ‘AI agent’ thing.”

That second issue can be exploited to make the crawler answer queries via the same attributions API endpoint; you can feed questions to the bot, and it can answer them, when it’s really not supposed to do that; it’s supposed to just fetch websites.

Flesch questioned why OpenAI’s bot hasn’t implemented simple, established methods to properly deduplicate URLs in a requested list or to limit the size of the list, nor managed to avoid prompt injection vulnerabilities that have been addressed in the main ChatGPT interface.
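The article does not spell out what those "simple, established methods" would look like, but a minimal sketch of the kind of server-side validation being described — collapsing near-duplicate URLs that differ only in query strings and capping the list length — might look like the following. The cap of 10 and the victim.example domain are arbitrary illustrative choices, not anything taken from OpenAI's API.

```python
# Minimal sketch of input validation for a "fetch these URLs" style endpoint:
# cap the list size and collapse near-duplicate URLs that point at the same page,
# so thousands of slightly different URLs can't fan out into thousands of fetches.
from urllib.parse import urlsplit

MAX_URLS = 10  # arbitrary illustrative cap, not a value from the article

def sanitize_urls(urls: list[str], max_urls: int = MAX_URLS) -> list[str]:
    seen = set()
    cleaned = []
    for url in urls:
        parts = urlsplit(url)
        # Ignore query string and fragment so near-duplicate URLs collapse to one key.
        key = (parts.scheme.lower(), parts.netloc.lower(), parts.path or "/")
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(url)
        if len(cleaned) >= max_urls:
            break
    return cleaned

# Example: 5,000 variants of the same page collapse to a single fetch.
flood = [f"https://victim.example/page?junk={i}" for i in range(5000)]
print(len(sanitize_urls(flood)))  # -> 1
```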

[…]

Source: ChatGPT crawler flaw opens door to DDoS, prompt injection • The Register