Boom Supersonic’s XB-1 demonstrator has broken the sound barrier, marking a major milestone in an effort the company hopes will lead to a larger 64-80 seat supersonic airliner design known as Overture. Overall, the program could have significant implications not only for commercial aviation but also for the military.
Boom Supersonic’s XB-1 demonstrator eases past the sound barrier for the first time, going supersonic just over 11 minutes into its sortie today. YouTube screencap
The aircraft was flown to a speed of Mach 1.1 by former U.S. Navy aviator and Boom test pilot Tristan “Geppetto” Brandenburg, from the Mojave Air & Space Port, California. For the majority of its flight, the XB-1 was accompanied by two other supersonic jets: an ATAC Mirage F1 flown by A.J. “Face” McFarland, serving as primary safety chase, and a T-38 Talon performing photo chase duties. During the flight, the XB-1 entered the supersonic realm three times, landing safely at Mojave after a flight lasting a little over 30 minutes.
[…]
Ultimately, XB-1 is expected to have a top speed of around Mach 2.2 (1,687.99 miles per hour).
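For reference, the mph figures quoted in this piece track a speed of sound of roughly 343 m/s, i.e. air at about 20°C; the true value varies with temperature and altitude, so treat this as a ballpark sanity check of the conversion only:

```python
# Rough Mach-to-mph conversion. Assumes the speed of sound in air at
# about 20 C (343 m/s); the exact value varies with temperature and
# altitude, so these are ballpark figures only.
SPEED_OF_SOUND_MS = 343.0   # m/s, ~20 C air (assumption)
MS_TO_MPH = 2.23694         # meters per second -> miles per hour

def mach_to_mph(mach):
    """Convert a Mach number to miles per hour under the assumption above."""
    return mach * SPEED_OF_SOUND_MS * MS_TO_MPH

print(round(mach_to_mph(2.2)))  # ~1688 mph, XB-1's expected top speed
print(round(mach_to_mph(1.7)))  # ~1304 mph, Overture's planned cruising speed
```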
The XB-1, also known as the “Baby Boom,” is a one-third-scale technology demonstrator for the Overture. It made its first flight at Mojave on March 22, 2024, as you can read about here. During that flight, the XB-1 was flown at speeds up to 238 knots (273 mph, or Mach 0.355), achieving an altitude of 7,120 feet. On that occasion, Chief Test Pilot Bill “Doc” Shoemaker was at the controls, while the flight was monitored by “Geppetto” Brandenburg, flying a T-38 Talon chase aircraft.
[…]
As we have outlined in the past, the XB-1 is 62.6 feet long, and its elongated delta-wing planform has a wingspan of 21 feet. It makes extensive use of sophisticated technologies, including carbon-fiber composites, advanced avionics, and digitally optimized aerodynamics.
The XB-1 during an earlier test flight. Boom Supersonic
It also has an unusual propulsion system to propel it into the supersonic regime. This comprises three General Electric J85-15 turbojets, which together provide more than 12,000 pounds of thrust. The widely used J85 also powers, among others, the Northrop F-5 and the T-38. Since the XB-1 was rolled out, another three-engined aircraft has broken cover, the Chinese advanced tailless combat aircraft tentatively known as the J-36.
Compared to the XB-1, the Overture will be 201 feet long and is planned to achieve a cruising speed of Mach 1.7 (1,304 miles per hour) and a maximum speed of Mach 2.2. The company anticipates it will have a maximum range of 4,500 nautical miles.
A rendering of Boom Supersonic’s Overture airliner. Boom Supersonic
Achieving the Mach 1 mark is a huge achievement for the company and an important statement of intent for the future Overture supersonic airliner.
Aimed at making supersonic travel affordable to greater numbers of travelers, a goal no other operator has achieved in the past, the Overture is planned to carry a total of 64-80 passengers. Intended to drastically shorten the duration of transoceanic routes, the aircraft is “designed … to be profitable for airlines at fares similar to first and business class,” the company’s website notes.
There’s some good news to share for Pebble fans: The no-frills smartwatch is making a comeback. The Verge spoke to Pebble founder Eric Migicovsky today, who says he was able to convince Google to open-source the smartwatch’s operating system. Migicovsky is in the early stages of prototyping a new watch and spinning up a company again under a to-be-announced new name.
Founded back in 2012, Pebble was initially funded on Kickstarter and created smartwatches with e-ink displays that nailed the basics. They could display notifications, let users control their music, and last five to seven days on a charge thanks to displays akin to what you’d find on a Kindle. The watches came in at affordable prices too, and they worked across both iOS and Android.
[…]
Fans of Pebble will be happy to know that whatever new smartwatch Migicovsky releases, it will be almost identical to what came before. “We’re building a spiritual, not successor, but clone of Pebble,” he says, “because there’s not that much I actually want to change.” Migicovsky plans to keep the software open-source and allow anyone to customize it for their watches. “There’s going to be the ability for anyone who wants to, to take Pebble source code, compile it, run it on their Pebbles, build new Pebbles, build new watches. They could even use it in random other hardware. Who knows what people can do with it now?”
And of course, this time around Migicovsky is using his own capital to grow the company in a sustainable way. After leaving Pebble, he started a messaging startup called Beeper, which was acquired by WordPress developer Automattic. Migicovsky has also served as an investor at Y Combinator.
It is unclear when Migicovsky’s first watch may be available, but updates will be shared at rePebble.com.
[…] While trying to fend off attacks on Section 215 collections (most of which are governed [in the loosest sense of the word] by the Third Party Doctrine), the NSA and its domestic-facing remora, the FBI, insisted collecting and storing massive amounts of phone metadata was no more a constitutional violation than it was a privacy violation.
FBI leaders have warned that they believe hackers who broke into AT&T Inc.’s system last year stole months of their agents’ call and text logs, setting off a race within the bureau to protect the identities of confidential informants, a document reviewed by Bloomberg News shows.
[…]
The data was believed to include agents’ mobile phone numbers and the numbers with which they called and texted, the document shows. Records for calls and texts that weren’t on the AT&T network, such as through encrypted messaging apps, weren’t part of the stolen data.
The agency (quite correctly!) believes the metadata could be used to identify agents, as well as their contacts and confidential sources. Of course it can.
[…]
The issue, of course, is that the Intelligence Community consistently downplayed this exact aspect of the bulk collection, claiming it was no more intrusive than scanning every piece of domestic mail (!) or harvesting millions of credit card records just because the Fourth Amendment (as interpreted by the Supreme Court) doesn’t say the government can’t.
There are real risks to real people who are affected by hacks like these. The same thing applies when the US government does it. It’s not just a bunch of data that’s mostly useless. Harvesting metadata in bulk allows the US government to do the same thing Chinese hackers are doing with it: identifying individuals, sussing out their personal networks, and building from that to turn numbers into adversarial actions — whether it’s the arrest of suspected terrorists or the further compromising of US government agents by hostile foreign forces.
The takeaway isn’t the inherent irony. It’s that the FBI and NSA spent years pretending the fears expressed by activists and legislators were overblown. Officials repeatedly claimed the information was of almost zero utility, despite mounting several efforts to protect this collection from being shut down by the federal government. In the end, the phone metadata program (at least as it applies to landlines) was terminated. But there’s more than a hint of egregious hypocrisy in the FBI’s sudden concern about how much can be revealed by “just” metadata.
[…] We’re still nowhere near understanding just how bad the Chinese hack of our phone system was. The incident, discovered only last fall, involved the Chinese hacking group Salt Typhoon, which used the US’s CALEA phone wiretapping system as a backdoor to gain incredible, unprecedented access to much of the US’s phone system “for months or longer.”
As details come out, the extent of the hackers’ access has become increasingly alarming. It is reasonable to call it the worst hack in US history.
Soon after it was discovered, Homeland Security tasked the Cyber Safety Review Board (CSRB) to lead an investigation into the hack to uncover what allowed it to happen and assess how bad it really was. The CSRB was established by Joe Biden to improve the government’s cybersecurity in the face of global cybersecurity attacks on our infrastructure and was made up of a mix of government and private sector cybersecurity experts.
And one of the first things Donald Trump did upon retaking the presidency was to dismantle the board, along with all other DHS Advisory Committees.
It’s one thing to say the new president should get to pick new members for these advisory boards, but it’s another thing altogether to just summarily dismiss the very board that is in the middle of investigating this hugely impactful hack of our telephone systems in a way that isn’t yet fully understood.
Just before the presidential switch, the Biden administration had announced sanctions against a Chinese front corporation that was connected to the hack. And while the details are still sparse, all indications are that this was a massive and damaging attack on critical US infrastructure.
And one of Trump’s moves is to disband the group of experts that was trying to get to the bottom of what happened.
Cybersecurity researcher Kevin Beaumont said on the social media platform Bluesky that the move would give Microsoft a “free pass,” referring to the CSRB’s critical report of the tech giant — and Beaumont’s former employer — over its handling of a prior Chinese hacker breach.
Jake Williams, faculty at IANS Research, went even further on the same website: “We should have been putting more resources into the CSRB, not dismantling it,” he wrote. “There’s zero doubt that killing the CSRB [would] hurt national security.”
While some have speculated that this move is an attempt to cover up the extent of the breach or even deliberately assist the Chinese, a more likely explanation is simple incompetence[…]
[…] As a reminder, Circle to Search is an AI-powered feature Google released at the start of last year. You can access it by long-pressing your phone’s home button and then circling something with your finger. At its most basic, the feature is a way to use Google Search from anywhere on your phone, with no need to switch between apps. It’s particularly useful if you want to conduct an image search since you don’t need to take a screenshot or describe what you’re looking at to Google.
As for those enhancements I mentioned, Google is adding one-tap actions for phone numbers, email addresses and URLs, meaning if Circle to Search detects those, it will allow you to call, email or visit a website with a single tap. Again, there’s no need to switch between apps to interact with those elements.[…]
About a year ago, security researcher Sam Curry bought his mother a Subaru, on the condition that, at some point in the near future, she let him hack it.
It took Curry until last November, when he was home for Thanksgiving, to begin examining the 2023 Impreza’s internet-connected features and start looking for ways to exploit them. Sure enough, he and a researcher working with him online, Shubham Shah, soon discovered vulnerabilities in a Subaru web portal that let them hijack the ability to unlock the car, honk its horn, and start its ignition, reassigning control of those features to any phone or computer they chose.
Most disturbing for Curry, though, was that they found they could also track the Subaru’s location—not merely where it was at the moment but also where it had been for the entire year that his mother had owned it. The map of the car’s whereabouts was so accurate and detailed, Curry says, that he was able to see her doctor visits, the homes of the friends she visited, even which exact parking space his mother parked in every time she went to church.
A year of location data for Sam Curry’s mother’s 2023 Subaru Impreza that Curry and Shah were able to access in Subaru’s employee admin portal thanks to its security vulnerabilities.
Screenshot Courtesy of Sam Curry
“You can retrieve at least a year’s worth of location history for the car, where it’s pinged precisely, sometimes multiple times a day,” Curry says. “Whether somebody’s cheating on their wife or getting an abortion or part of some political group, there are a million scenarios where you could weaponize this against someone.”
Curry and Shah today revealed in a blog post their method for hacking and tracking millions of Subarus, which they believe would have allowed hackers to target any of the company’s vehicles equipped with its digital features known as Starlink in the US, Canada, or Japan. Vulnerabilities they found in a Subaru website intended for the company’s staff allowed them to hijack an employee’s account to both reassign control of cars’ Starlink features and also access all the vehicle location data available to employees, including the car’s location every time its engine started, as shown in their video below.
Curry and Shah reported their findings to Subaru in late November, and Subaru quickly patched its Starlink security flaws. But the researchers warn that the Subaru web vulnerabilities are just the latest in a long series of similar web-based flaws they and other security researchers working with them have found that have affected well over a dozen carmakers, including Acura, Genesis, Honda, Hyundai, Infiniti, Kia, Toyota, and many others. There’s little doubt, they say, that similarly serious hackable bugs exist in other auto companies’ web tools that have yet to be discovered.
[…]
Last summer, Curry and another researcher, Neiko Rivera, demonstrated to WIRED that they could pull off a similar trick with any of millions of vehicles sold by Kia. Over the prior two years, a larger group of researchers, of which Curry and Shah are a part, discovered web-based security vulnerabilities that affected cars sold by Acura, BMW, Ferrari, Genesis, Honda, Hyundai, Infiniti, Mercedes-Benz, Nissan, Rolls-Royce, and Toyota.
[…]
In December, information a whistleblower provided to the German hacker collective the Chaos Computer Club and Der Spiegel revealed that Cariad, a software company that partners with Volkswagen, had left detailed location data for 800,000 electric vehicles publicly exposed online. Privacy researchers at the Mozilla Foundation warned in a September report that “modern cars are a privacy nightmare,” noting that 92 percent give car owners little to no control over the data they collect, and 84 percent reserve the right to sell or share your information. (Subaru tells WIRED that it “does not sell location data.”)
“While we worried that our doorbells and watches that connect to the internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines,” Mozilla’s report reads.
Someone has been quietly backdooring selected Juniper routers around the world, in key sectors including semiconductors, energy, and manufacturing, since at least mid-2023.
The devices were infected with what appears to be a variant of cd00r, a publicly available “invisible backdoor” designed to operate stealthily on a victim’s machine by monitoring network traffic for specific conditions before activating.
It’s not yet publicly known how the snoops gained sufficient access to certain organizations’ Junos OS equipment to plant the backdoor, which gives them remote control over the networking gear. What we do know is that about half of the devices have been configured as VPN gateways.
Once injected, the backdoor, dubbed J-magic by Black Lotus Labs this week, resides in memory only and passively waits for one of five possible network packets to arrive. When one of those magic packet sequences is received by the machine, a connection is established with the sender, and a follow-up challenge is initiated by the backdoor. If the sender passes the test, they get command-line access to the box to commandeer it.
As Black Lotus Labs explained in this research note on Thursday: “Once that challenge is complete, J-Magic establishes a reverse shell on the local file system, allowing the operators to control the device, steal data, or deploy malicious software.”
While it’s not the first-ever discovered magic packet [PDF] malware, the team wrote, “the combination of targeting Junos OS routers that serve as a VPN gateway and deploying a passive listening in-memory-only agent, makes this an interesting confluence of tradecraft worthy of further observation.”
[…]
The malware creates an eBPF filter to monitor traffic to a specified network interface and port, and waits until it receives any of five specifically crafted packets from the outside world. If one of these magic packets – described in the lab’s report – shows up, the backdoor connects back to the sender over SSL and transmits a random, five-character alphanumeric string encrypted with a hardcoded public RSA key. If the sender can decrypt the string using the private half of the key pair and return it to the backdoor for verification, the malware starts accepting commands over that connection to run on the box.
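That verification step is, at its core, a standard public-key challenge-response. A conceptual sketch of the logic (toy textbook RSA with small, well-known primes; all names are illustrative, and nothing here models the implant's eBPF, networking, or command machinery):

```python
import secrets
import string

# Conceptual sketch of a magic-packet-style challenge-response like the
# one Black Lotus Labs describes: the implant encrypts a random
# five-character string with a hardcoded RSA public key, and only a
# sender holding the matching private key can decrypt and echo it back.
# Textbook RSA with tiny primes -- illustration only, not real crypto.

p, q = 1_299_709, 15_485_863        # toy primes (far too small for real use)
n, e = p * q, 65_537                # "hardcoded" public key baked into the implant
d = pow(e, -1, (p - 1) * (q - 1))   # private key, held by the operators

def make_challenge():
    """Implant side: generate and encrypt the five-character challenge."""
    alphabet = string.ascii_letters + string.digits
    s = "".join(secrets.choice(alphabet) for _ in range(5))
    m = int.from_bytes(s.encode(), "big")
    return s, pow(m, e, n)          # (expected answer, ciphertext sent to peer)

def answer_challenge(ciphertext):
    """Operator side: decrypt with the private key and echo the string back."""
    m = pow(ciphertext, d, n)
    return m.to_bytes(5, "big").decode()

expected, ct = make_challenge()
assert answer_challenge(ct) == expected  # only the private-key holder passes
```

A peer without the private key cannot recover the plaintext, so the handshake fails and, in the real backdoor, no command access is granted.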
[…]
These victims span the globe, with the researchers documenting companies in the US, UK, Norway, the Netherlands, Russia, Armenia, Brazil, and Colombia. They included a fiber optics firm, a solar panel maker, manufacturing companies including two that build or lease heavy machinery, and one that makes boats and ferries, plus energy, technology, and semiconductor firms.
While most of the targeted devices were Juniper routers acting as VPN gateways, a more limited set of targeted IP addresses had an exposed NETCONF port, which is commonly used to help automate router configuration information and management.
This suggests the routers are part of a larger, managed fleet such as those in a network service provider, the researchers note.
Lockheed Martin says the stealthy F-35 Joint Strike Fighter now has a firmly demonstrated ability to act as an in-flight ‘quarterback’ for advanced drones like the U.S. Air Force’s future Collaborative Combat Aircraft (CCA) with the help of artificial intelligence-enabled systems. The company states that its testing has also shown a touchscreen tablet-like device is a workable interface for controlling multiple uncrewed aircraft simultaneously from the cockpit of the F-35, as well as the F-22 Raptor. For the U.S. Air Force, how pilots in crewed aircraft will actually manage CCAs during operations has emerged as an increasingly important question.
Details about F-35 and F-22 related crewed-uncrewed teaming developments were included in a press release that Lockheed Martin put out late yesterday, wrapping up various achievements for the company in 2024.
Lockheed Martin
The F-35 “has the capability to control drones, including the U.S. Air Force’s future fleet of Collaborative Combat Aircraft. Recently, Lockheed Martin and industry partners demonstrated end-to-end connectivity including the seamless integration of AI technologies to control a drone in flight utilizing the same hardware and software architectures built for future F-35 flight testing,” the press release states. “These AI-enabled architectures allow Lockheed Martin to not only prove out piloted-drone teaming capabilities, but also incrementally improve them, bringing the U.S. Air Force’s family of systems vision to life.”
“Lockheed Martin has demonstrated its piloted-drone teaming interface, which can control multiple drones from the cockpit of an F-35 or F-22,” the release adds. “This technology allows a pilot to direct multiple drones to engage enemies using a touchscreen tablet in the cockpit of their 5th Gen aircraft.”
A US Air Force image depicting an F-22 Raptor stealth fighter flying together with a Boeing MQ-28 Ghost Bat drone. USAF A US Air Force image depicting an MQ-28 Ghost Bat flying together with an F-22 Raptor stealth fighter. USAF
The press release also highlights prior crewed-uncrewed teaming work that Lockheed Martin’s famed Skunk Works advanced projects division has done with the University of Iowa’s Operator Performance Laboratory (OPL) using surrogate platforms. OPL has also been working with other companies, including Shield AI, as well as the U.S. military, to support advanced autonomy and drone development efforts in recent years.
In November 2024, Lockheed Martin notably announced it had conducted tests with OPL that saw a human controller in an L-39 Albatros jet use a touchscreen interface to order two L-29 Delfin jets, equipped with AI-enabled flight technology acting as surrogate drones, to engage simulated enemy fighters. This sounds very similar to the kind of control architecture the company says it has now demonstrated on the F-35.
A view of the “battle manager” at work in the back seat of the L-39 jet while issuing commands to the L-29s acting as surrogate drones. Lockheed Martin
[…]
The Air Force is also still very much in the process of developing new concepts of operations and tactics, techniques, and procedures for employing CCA drones operationally. How the drones will fit into the service’s force structure and be utilized in routine training and other day-to-day peacetime activities, along with what the maintenance and logistical demands will be, also remains to be seen. Questions about in-flight command and control have emerged as particularly important ones to answer in the near term.
[…]
As Lockheed Martin’s new touting of its work on tablet-based control interfaces highlights, there is now significant debate about just how pilots will physically issue orders and otherwise manage drones from their cockpits.
A picture of a drone control system using a tablet-like device that General Atomics has previously released. GA-ASI
“There’s a lot of opinions amongst the Air Force about the right way to go [about controlling drones from other aircraft],” John Clark, then head of Skunk Works, also told The War Zone and others at the AFA gathering in September 2024. “The universal thought, though, is that this [a tablet or other touch-based interface] may be the fastest way to begin experimentation. It may not be the end state.”
“We’re working through a spectrum of options that are the minimum invasive opportunities, as well as something that’s more organically equipped, where there’s not even a tablet,” Clark added.
[…]
In addition, there are still many questions about the secure communications architectures that will be needed to support operations involving CCAs and similar drones, as well as for F-35s and F-22s to operate effectively in the airborne controller role. The F-35 could use the popular omnidirectional Link 16 network for this purpose, but doing so would make it easier for opponents to detect the fighter jet and the drone. The F-22, which has long been able only to receive and not transmit data via Link 16, faces similar issues.
It’s also worth noting here that the U.S. military has been publicly demonstrating the ability of tactical jets to actively control drones in mid-air for nearly a decade now, at least. In 2015, a U.S. Marine Corps AV-8B Harrier jump jet notably flew together with a Kratos Unmanned Tactical Aerial Platform-22 (UTAP-22) drone in testing that included “command and control through the tactical data link.” Other experimentation is known to have occurred across the U.S. military since then, and this doesn’t account for additional work in the classified domain.
Have you ever been in a group project where one person decided to take a shortcut, and suddenly, everyone ended up under stricter rules? That’s essentially what the EU is saying to tech companies with the AI Act: “Because some of you couldn’t resist being creepy, we now have to regulate everything.” This legislation isn’t just a slap on the wrist—it’s a line in the sand for the future of ethical AI.
Here’s what went wrong, what the EU is doing about it, and how businesses can adapt without losing their edge.
When AI Went Too Far: The Stories We’d Like to Forget
Target and the Teen Pregnancy Reveal
One of the most infamous examples of AI gone wrong happened back in 2012, when Target used predictive analytics to market to pregnant customers. By analyzing shopping habits—think unscented lotion and prenatal vitamins—they managed to identify a teenage girl as pregnant before she told her family. Imagine her father’s reaction when baby coupons started arriving in the mail. It wasn’t just invasive; it was a wake-up call about how much data we hand over without realizing it.
Clearview AI and the Privacy Problem
On the law enforcement front, tools like Clearview AI created a massive facial recognition database by scraping billions of images from the internet. Police departments used it to identify suspects, but it didn’t take long for privacy advocates to cry foul. People discovered their faces were part of this database without consent, and lawsuits followed. This wasn’t just a misstep—it was a full-blown controversy about surveillance overreach.
The EU’s AI Act: Laying Down the Law
The EU has had enough of these oversteps. Enter the AI Act: the first major legislation of its kind, categorizing AI systems into four risk levels:
Minimal Risk: Chatbots that recommend books—low stakes, little oversight.
Limited Risk: Systems like AI-powered spam filters, requiring transparency but little more.
High Risk: This is where things get serious—AI used in hiring, law enforcement, or medical devices. These systems must meet stringent requirements for transparency, human oversight, and fairness.
Unacceptable Risk: Think dystopian sci-fi—social scoring systems or manipulative algorithms that exploit vulnerabilities. These are outright banned.
For companies operating high-risk AI, the EU demands a new level of accountability. That means documenting how systems work, ensuring explainability, and submitting to audits. If you don’t comply, the fines are enormous—up to €35 million or 7% of global annual revenue, whichever is higher.
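The tiering and penalty math above can be sketched in a few lines. The example systems below are simplified illustrations, not legal categorizations, and the formula reflects only the headline ceiling quoted here:

```python
# Simplified sketch of the AI Act's four risk tiers and its penalty
# ceiling, as summarized above. The example systems are illustrative
# assumptions, not legal guidance.
RISK_TIERS = {
    "book_recommendation_chatbot": "minimal",
    "spam_filter": "limited",
    "hiring_screen": "high",          # stringent transparency/oversight duties
    "social_scoring": "unacceptable", # banned outright
}

def max_fine_eur(global_annual_revenue_eur):
    """Worst-case fine: EUR 35M or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# A company with EUR 1B in global revenue faces a ceiling of EUR 70M;
# a EUR 100M company still faces the flat EUR 35M ceiling.
print(max_fine_eur(1_000_000_000))  # 70000000.0
print(max_fine_eur(100_000_000))    # 35000000.0
```

Note how the 7% prong only bites once global revenue exceeds EUR 500M; below that, the flat EUR 35M figure dominates.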
Why This Matters (and Why It’s Complicated)
The Act is about more than just fines. It’s the EU saying, “We want AI, but we want it to be trustworthy.” At its heart, this is a “don’t be evil” moment, but achieving that balance is tricky.
On one hand, the rules make sense. Who wouldn’t want guardrails around AI systems making decisions about hiring or healthcare? But on the other hand, compliance is costly, especially for smaller companies. Without careful implementation, these regulations could unintentionally stifle innovation, leaving only the big players standing.
Innovating Without Breaking the Rules
For companies, the EU’s AI Act is both a challenge and an opportunity. Yes, it’s more work, but leaning into these regulations now could position your business as a leader in ethical AI. Here’s how:
Audit Your AI Systems: Start with a clear inventory. Which of your systems fall into the EU’s risk categories? If you don’t know, it’s time for a third-party assessment.
Build Transparency Into Your Processes: Treat documentation and explainability as non-negotiables. Think of it as labeling every ingredient in your product—customers and regulators will thank you.
Engage Early With Regulators: The rules aren’t static, and you have a voice. Collaborate with policymakers to shape guidelines that balance innovation and ethics.
Invest in Ethics by Design: Make ethical considerations part of your development process from day one. Partner with ethicists and diverse stakeholders to identify potential issues early.
Stay Dynamic: AI evolves fast, and so do regulations. Build flexibility into your systems so you can adapt without overhauling everything.
The Bottom Line
The EU’s AI Act isn’t about stifling progress; it’s about creating a framework for responsible innovation. It’s a reaction to the bad actors who’ve made AI feel invasive rather than empowering. By stepping up now—auditing systems, prioritizing transparency, and engaging with regulators—companies can turn this challenge into a competitive advantage.
The message from the EU is clear: if you want a seat at the table, you need to bring something trustworthy. This isn’t about “nice-to-have” compliance; it’s about building a future where AI works for people, not at their expense.
And if we do it right this time? Maybe we really can have nice things.
The wealth of the world’s billionaires grew by $2tn (£1.64tn) last year, three times faster than in 2023, amounting to $5.7bn (£4.7bn) a day, according to a report by Oxfam.
The latest inequality report from the charity reveals that the world is now on track to have five trillionaires within a decade, a change from last year’s forecast of one trillionaire within 10 years.
[…]
At the same time, the number of people living under the World Bank poverty line of $6.85 a day has barely changed since 1990, and is close to 3.6 billion – equivalent to 44% of the world’s population today, the charity said. One in 10 women lives in extreme poverty (below $2.15 a day), and 24.3 million more women than men endure extreme poverty.
Oxfam warned that progress on reducing poverty has ground to a halt and that extreme poverty could be ended three times faster if inequality were to be reduced.
[…]
Rising share values on global stock exchanges account for most of the increase in billionaire wealth, though higher property values also played a role. Residential property accounts for about 80% of worldwide investments.
Globally, the number of billionaires rose by 204 last year to 2,769. Their combined wealth jumped from $13tn to $15tn in just 12 months – the second-largest annual increase since records began. The wealth of the world’s 10 richest men grew on average by almost $100m a day and even if they lost 99% of their wealth overnight, they would remain billionaires.
[…]
The report argues that most of the wealth is taken, not earned, as 60% comes from either inheritance, “cronyism and corruption” or monopoly power. It calculates that 18% of the wealth arises from monopoly power.
[…]
Anna Marriott, Oxfam’s inequality policy lead, said: “Last year we predicted the first trillionaire could emerge within a decade, but this shocking acceleration of wealth means that the world is now on course for at least five. The global economic system is broken, wholly unfit for purpose as it enables and perpetuates this explosion of riches, while nearly half of humanity continues to live in poverty.”
She called on the UK government to prioritise economic policies that bring down inequality, including higher taxation of the super-rich.
[…] In 2024, Bluesky grew from 2.89M users to 25.94M users. In addition to users hosted on Bluesky’s infrastructure, there are over 4,000 users running their own infrastructure (Personal Data Servers), self-hosting their content, posts, and data.
To meet the demands caused by user growth, we’ve increased our moderation team to roughly 100 moderators and continue to hire more staff. Some moderators specialize in particular policy areas, such as dedicated agents for child safety.
[…]
In 2024, users submitted 6.48M reports to Bluesky’s moderation service. That’s a 17x increase from the previous year — in 2023, users submitted 358K reports total. The volume of user reports increased with user growth and was non-linear, as the graph of report volume below shows:
Report volume in 2024
In late August, there was a large increase in user growth for Bluesky from Brazil, and we saw spikes of up to 50k reports per day. Prior to this, our moderation team handled most reports within 40 minutes. For the first time, we had a backlog of moderation reports. To address this, we increased the size of our Portuguese-language moderation team, added constant moderation sweeps and automated tooling for high-risk areas such as child safety, and hired moderators through an external contracting vendor for the first time.
We already had automated spam detection in place, and after this wave of growth in Brazil, we began investing in automating more categories of reports so that our moderation team would be able to review suspicious or problematic content rapidly. In December, we were able to review our first wave of automated reports for content categories like impersonation. This dropped processing time for high-certainty accounts to within seconds of receiving a report, though it also caused some false positives. We’re now exploring the expansion of this tooling to other policy areas. Even while instituting automation tooling to reduce our response time, human moderators are still kept in the loop — all appeals and false positives are reviewed by human moderators.
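The triage flow described above can be sketched in a few lines. This is a hypothetical illustration, not Bluesky’s actual tooling; the function name and the 0.99 certainty threshold are invented:

```python
AUTO_ACTION_THRESHOLD = 0.99  # invented: only act automatically on near-certain matches

def triage(report_category: str, classifier_score: float, is_appeal: bool = False) -> str:
    """Route a report: auto-action high-certainty cases, queue the rest for humans.

    Appeals and anything below the certainty bar always go to a human
    moderator, keeping people in the loop for every contested decision.
    """
    if is_appeal:
        return "human_review"   # all appeals are reviewed by human moderators
    if classifier_score >= AUTO_ACTION_THRESHOLD:
        return "auto_action"    # e.g. a high-certainty impersonation account
    return "human_review"
```

The design choice here is the asymmetry: automation only ever shortens the happy path (processing high-certainty reports in seconds), while every disputed or uncertain outcome falls back to a person.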
Some more statistics: The proportion of users submitting reports held fairly stable from 2023 to 2024. In 2023, 5.6% of our active users created one or more reports. In 2024, 1.19M users made one or more reports, approximately 4.57% of our user base.
In 2023, 3.4% of our active users received one or more reports. In 2024, 770K users received one or more reports, comprising 2.97% of our user base.
The majority of reports were of individual posts, with a total of 3.5M reports. This was followed by account profiles at 47K reports, typically for a violative profile picture or banner photo. Lists received 45K reports. DMs received 17.7K reports. Significantly lower are feeds at 5.3K reports, and starter packs with 1.9K reports.
Our users report content for a variety of reasons, and these reports help guide our focus areas. Below is a summary of the reports we received, categorized by the reasons users selected. The categories vary slightly depending on whether a report is about an account or a specific post, but here’s the full breakdown:
Anti-social Behavior: Reports of harassment, trolling, or intolerance – 1.75M
Misleading Content: Includes impersonation, misinformation, or false claims about identity or affiliations – 1.20M
Spam: Excessive mentions, replies, or repetitive content – 1.40M
Unwanted Sexual Content: Nudity or adult content not properly labeled – 630K
Illegal or Urgent Issues: Clear violations of the law or our terms of service – 933K
Other: Issues that don’t fit into the above categories – 726K
In 2024, 93,076 users submitted at least one appeal in the app, for a total of 205K individual appeals. In most cases, the appeal was due to disagreement with a label verdict.
[…]
Legal Requests
In 2024, we received 238 requests from law enforcement, governments, and legal firms; we responded to 182 and complied with 146. The majority of requests came from German, U.S., Brazilian, and Japanese law enforcement.
[…]
Copyright / Trademark
In 2024, we received a total of 937 copyright and trademark cases. There were four confirmed copyright cases in the entire first half of 2024, and this number increased to 160 in September. The vast majority of cases occurred between September to December.
The following lines are especially interesting: Brazilians, it seems, really enjoy reporting other users, and not only that, they also like to harass or brigade specific users.
In late August, there was a large increase in user growth for Bluesky from Brazil, and we saw spikes of up to 50k reports per day.
In 2023, 5.6% of our active users created one or more reports. In 2024, 1.19M users made one or more reports, approximately 4.57% of our user base.
In 2023, 3.4% of our active users received one or more reports. In 2024, 770K users received one or more reports, comprising 2.97% of our user base.
In a write-up shared this month via Microsoft’s GitHub, Benjamin Flesch, a security researcher in Germany, explains how a single HTTP request to the ChatGPT API can be used to flood a targeted website with network requests from the ChatGPT crawler, specifically ChatGPT-User.
This flood of connections may or may not be enough to knock over any given site, practically speaking, though it’s still arguably a danger and a bit of an oversight by OpenAI. It can be used to amplify a single API request into 20 to 5,000 or more requests to a chosen victim’s website, every second, over and over again.
“ChatGPT API exhibits a severe quality defect when handling HTTP POST requests to https://chatgpt.com/backend-api/attributions,” Flesch explains in his advisory, referring to an API endpoint called by OpenAI’s ChatGPT to return information about web sources cited in the chatbot’s output. When ChatGPT mentions specific websites, it will call attributions with a list of URLs to those sites for its crawler to go access and fetch information about.
If you throw a big long list of URLs at the API, each slightly different but all pointing to the same site, the crawler will go off and hit every one of them at once.
[…]
Thus, using a tool like Curl, an attacker can send an HTTP POST request – without any need for an authentication token – to that ChatGPT endpoint and OpenAI’s servers in Microsoft Azure will respond by initiating an HTTP request for each hyperlink submitted via the urls[] parameter in the request. When those requests are directed to the same website, they can potentially overwhelm the target, causing DDoS symptoms – the crawler, proxied by Cloudflare, will visit the targeted site from a different IP address each time.
[…]
“I’d say the bigger story is that this API was also vulnerable to prompt injection,” he said, in reference to a separate vulnerability disclosure. “Why would they have prompt injection for such a simple task? I think it might be because they’re dogfooding their autonomous ‘AI agent’ thing.”
That second issue can be exploited to make the crawler answer queries via the same attributions API endpoint: you can feed questions to the bot and it will answer them, when it’s really only supposed to fetch websites.
Flesch questioned why OpenAI’s bot doesn’t use simple, established methods to deduplicate URLs in a requested list or to cap the list’s size, and why it hasn’t avoided the prompt injection vulnerabilities that have already been addressed in the main ChatGPT interface.
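The two missing safeguards Flesch points to are straightforward to sketch. Assuming a handler that receives the urls[] list before any crawling happens, a few lines of hypothetical Python would deduplicate by target host and cap the fan-out (the function name and the MAX_URLS value are invented for illustration):

```python
from urllib.parse import urlsplit

MAX_URLS = 10  # invented cap; the real limit would be a product decision

def sanitize_urls(urls):
    """Deduplicate a submitted URL list and cap its size before crawling.

    URLs that differ only in path, query string, or fragment still point at
    the same host, so we keep at most one URL per (scheme, host) pair.
    """
    seen_hosts = set()
    kept = []
    for url in urls:
        parts = urlsplit(url)
        key = (parts.scheme, parts.netloc.lower())
        if key in seen_hosts:
            continue  # this site is already queued once
        seen_hosts.add(key)
        kept.append(url)
        if len(kept) >= MAX_URLS:
            break  # refuse to fan out beyond the cap
    return kept
```

With something like this in place, the thousands of near-identical links in the proof of concept would collapse to a single fetch of the target site.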
A robotic hand exoskeleton can help expert pianists learn to play even faster by moving their fingers for them.
Robotic exoskeletons have long been used to rehabilitate people who can no longer use their hands due to an injury or medical condition, but using them to improve the abilities of able-bodied people has been less well explored.
Now, Shinichi Furuya at Sony Computer Science Laboratories in Tokyo and his colleagues have found that a robotic exoskeleton can improve the finger speed of trained pianists after a single 30-minute training session.
[…]
The robotic exoskeleton can raise and lower each finger individually, up to four times a second, using a separate motor attached to the base of each finger.
To test the device, the researchers recruited 118 expert pianists who had all played since before they had turned 8 years old and for at least 10,000 hours, and asked them to practise a piece for two weeks until they couldn’t improve.
Then, the pianists received a 30-minute training session with the exoskeleton, which moved the fingers of their right hand in different combinations of simple and complex patterns, either slowly or quickly, so that Furuya and his colleagues could pinpoint what movement type caused improvement.
The pianists who experienced the fast and complex training could better coordinate their right hand movements and move the fingers of either hand faster, both immediately after training and a day later. This, together with evidence from brain scans, indicates that the training changed the pianists’ sensory cortices to better control finger movements in general, says Furuya.
“This is the first time I’ve seen somebody use [robotic exoskeletons] to go beyond normal capabilities of dexterity, to push your learning past what you could do naturally,” says Nathan Lepora at the University of Bristol, UK. “It’s a bit counterintuitive why it worked, because you would have thought that actually performing the movements yourself voluntarily would be the way to learn, but it seems passive movements do work.”
Now it turns out that he not only did his big set of moderation changes to please Trump, but did so only after he was told by the incoming administration to act. Even worse, he reportedly made sure to share his plans with top Trump aides to get their approval first.
That’s a key takeaway from a new New York Times piece that is ostensibly a profile of the relentlessly awful Stephen Miller. However, it also has a few revealing details about the whole Zuckerberg saga buried within. First, Miller reportedly demanded that Zuckerberg make changes at Facebook “on Trump’s terms.”
Mr. Miller told Mr. Zuckerberg that he had an opportunity to help reform America, but it would be on President-elect Donald J. Trump’s terms. He made clear that Mr. Trump would crack down on immigration and go to war against the diversity, equity and inclusion, or D.E.I., culture that had been embraced by Meta and much of corporate America in recent years.
Mr. Zuckerberg was amenable. He signaled to Mr. Miller and his colleagues, including other senior Trump advisers, that he would do nothing to obstruct the Trump agenda, according to three people with knowledge of the meeting, who asked for anonymity to discuss a private conversation. Mr. Zuckerberg said he would instead focus solely on building tech products.
Even if you argue that this was more about DEI programs at Meta rather than about content moderation, it’s still the incoming administration reportedly making actual demands of Zuckerberg, and Zuckerberg not just saying “fine” but actually previewing the details to Miller to make sure they got Trump’s blessing.
Earlier this month, Mr. Zuckerberg’s political lieutenants previewed the changes to Mr. Miller in a private briefing. And on Jan. 10, Mr. Zuckerberg made them official….
This is especially galling given that it was just days ago when Zuckerberg was whining about how unfair it was that Biden officials were demanding stuff from him (even though he had no trouble saying no to them) and it was big news! The headlines made a huge deal of how unfair Biden was to Zuckerberg. Here’s just a sampling.
Also conveniently omitted was the fact that the Supreme Court found no evidence of the Biden administration going over the line in its conversations with Meta. Indeed, a Supreme Court Justice noted that conversations like those that the Biden admin had with Meta happened “thousands of times a day,” and weren’t problematic because there was no inherent threat or direct coordination.
Yet, here, we have reports of both threats and now evidence of direct coordination, including Zuckerberg asking for and getting direct approval from a top Trump official before rolling out the policy.
And where is this bombshell revelation? It’s buried in a random profile piece puffing up Stephen Miller.
It’s almost as if everyone now takes it for granted that any made-up story about Biden will be treated as fact, and everyone just takes it as expected when Trump actually does the thing that Biden gets falsely accused of.
With this new story, don’t hold your breath waiting for the same outlets to give this anywhere near the same level of coverage and outrage they directed at the Biden administration.
It’s almost as if there’s a massive double standard here: everything is okay if Trump does it, but we can blame the Biden admin for things we only pretend they did.
This was inevitable, ever since Donald Trump and the MAGA world freaked out when social media’s attempts to fact-check the President were deemed “censorship.” The reaction was both swift and entirely predictable. After all, how dare anyone question Dear Leader’s proclamations, even if they are demonstrably false? It wasn’t long before we started to see opinion pieces from MAGA folks breathlessly declaring that “fact-checking private speech is outrageous.” There were even politicians proposing laws to ban fact-checking.
In their view, the best way to protect free speech is apparently (?!?) to outlaw speech you don’t like.
With last week’s announcement by Mark Zuckerberg that Meta was ending its fact-checking program, the anti-fact-checking rhetoric hasn’t slowed down one bit.
So let’s be clear here: fact-checking is speech. Fact-checking is not censorship. It is protected by the First Amendment. Indeed, in olden times, when free speech supporters would talk about the “marketplace of ideas” and the “best response to bad speech is more speech,” they meant things like fact-checking. They meant that if someone were blathering on about utter nonsense, then a regime that enabled more speech could come along and fact-check folks.
There is no “censorship” involved in fact-checking. There is only a question of how others respond to the fact checks.
[…]
There’s a really fun game that the Post Editorial Board is playing here, pretending that they’re just fine with fact-checking, unless it leads to “silencing.”
The real issue, that is, isn’t the checking, it’s the silencing.
But what “silencing” ever actually happened due to fact-checking? And when was it caused by the government (which would be necessary for it to violate the First Amendment)? The answer is none.
The piece whines about a few NY Post articles that had limited reach on Facebook, but that’s Facebook’s own free speech as well, not censorship.
[…]
The Post goes on with this fun set of words:
Yes, the internet is packed with lies, misrepresentations and half-truths: So is all human conversation.
The only practical answer to false speech is, and always has been, true speech; it doesn’t stop the liars or protect all the suckers, but most people figure it out well enough.
Shutting down debate in the name of “countering disinformation” only serves the liars with power or prestige or at least the right connections.
First off, the standard saying is that the response to false speech should be “more speech,” not necessarily “true speech.” But more to the point, uh, how do you get that “true speech”? Isn’t it… fact-checking? And if, as the NY Post suggests, the problem here is false speech in the fact checks, then shouldn’t the response be more speech in response, rather than silencing the fact checkers?
I mean, their own argument isn’t even internally consistent.
They’re literally saying that we need more “truthful speech” and less “silencing of speech” while cheering on the silencing of organizations who try to provide more truthful speech.
[…] photonics, which offers lower energy consumption and latency than electronics.
One of the most promising approaches is in-memory computing, which requires the use of photonic memories. Passing light signals through these memories makes it possible to perform operations nearly instantaneously. But solutions proposed for creating such memories have faced challenges such as low switching speeds and limited programmability.
Now, an international team of researchers has developed a groundbreaking photonic platform to overcome those limitations. Their findings were published in the journal Nature Photonics.
[…]
The researchers used a magneto-optical material, cerium-substituted yttrium iron garnet (YIG), the optical properties of which dynamically change in response to external magnetic fields. By employing tiny magnets to store data and control the propagation of light within the material, they pioneered a new class of magneto-optical memories. The innovative platform leverages light to perform calculations at significantly higher speeds and with much greater efficiency than can be achieved using traditional electronics.
This new type of memory switches 100 times faster than state-of-the-art photonic integrated technology, consumes about one-tenth the power, and can be reprogrammed repeatedly to perform different tasks. While current state-of-the-art optical memories have a limited lifespan and can be written up to 1,000 times, the team demonstrated that magneto-optical memories can be rewritten more than 2.3 billion times, equating to a potentially unlimited lifespan.
If you only remember two things about the government pressure campaign to influence Mark Zuckerberg’s content moderation decisions, make it these: Donald Trump directly threatened to throw Zuck in prison for the rest of his life, and just a couple months ago FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
Two months later — what do you know? — Zuckerberg ended all fact-checking on Meta. But when he went on Joe Rogan, rather than blaming those actual obvious threats, he instead blamed the Biden administration, because some admin officials sent angry emails… which Zuck repeatedly admits had zero impact on Meta’s actual policies.
[…]
Here is a simplified version of what happened: the actual threats came from the GOP, to which Zuckerberg quickly caved. The supposed threats from the Biden admin were overhyped, exaggerated, and misrepresented, and Zuck directly admits he was able to easily refuse those requests.
All the rest is noise.
[Here follows a long detailed unpacking of the Rogan interview]
None of that came up in this discussion. The only “government pressure” that Zuck talks about is from the Biden admin with “cursing,” which he readily admits they weren’t intimidated by.
So we have Biden officials who were, perhaps, mean, but not so threatening that Meta felt the need to bow down to them. And then we have Trump himself and leading members of his incoming administration who sent direct and obvious threats, which Zuck almost immediately bowed down to and caved.
And yet Rogan (and much of the media covering this podcast) claims he “revealed” how the Biden admin violated the First Amendment. Hell, the NY Post even ran an editorial pretending that Zuck didn’t go far enough because he didn’t reveal all of this in time for the Murthy case. And that’s only because the author doesn’t realize he literally is talking about the documents in the Murthy case.
The real story here is that Zuckerberg caved to Trump’s threats and felt fine pushing back on the Biden admin. Rogan at one point rants about how Trump will now protect Zuck because Trump “uniquely has felt the impact of not being able to have free speech.” That seems particularly ironic given the real story.
[…]
Strip away all the spin and misdirection, and the truth is inescapable: Zuckerberg folded like a cheap suit in the face of direct threats from Trump and his lackeys, while barely batting an eye at some sternly worded emails from Biden officials.
Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law, according to a copy of a letter obtained by Axios.
The big picture: Google has never included fact-checking as part of its content moderation practices. The company had signaled privately to EU lawmakers that it didn’t plan to change its practices, but it’s reaffirming its stance ahead of a voluntary code becoming law in the near future.
Zoom in: In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google’s global affairs president Kent Walker said the fact-checking integration required by the Commission’s new Disinformation Code of Practice “simply isn’t appropriate or effective for our services” and said Google won’t commit to it.
The code would require Google to incorporate fact-check results alongside Google’s search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.
Walker said Google’s current approach to content moderation works and pointed to successful content moderation during last year’s “unprecedented cycle of global elections” as proof.
He said a new feature added to YouTube last year that enables some users to add contextual notes to videos “has significant potential.” (That program is similar to X’s Community Notes feature, as well as a new program announced by Meta last week.)
Catch up quick: The EU’s Code of Practice on Disinformation, strengthened in 2022, includes several voluntary commitments that tech firms and private companies, including fact-checking organizations, are expected to deliver on.
The Code, originally created in 2018, predates the EU’s new content moderation law, the Digital Services Act (DSA), which went into effect in 2022.
State of play: The Commission has held private discussions over the past year with tech companies, urging them to convert the voluntary measures into an official code of conduct under the DSA.
Walker said in his letter Thursday that Google had already told the Commission that it didn’t plan to comply.
Google will “pull out of all fact-checking commitments in the Code before it becomes a DSA Code of Conduct,” he wrote.
He said Google will continue to invest in improvements to its current content moderation practices, which focus on providing people with more information about their search results through features like SynthID watermarking and AI disclosures on YouTube.
Zoom out: The news comes amid a global reckoning about the role tech platforms should play in fact-checking and policing speech.
You can probably complete an amazing number of tasks with your hands without looking at them. But if you put on gloves that muffle your sense of touch, many of those simple tasks become frustrating. Take away proprioception — your ability to sense your body’s relative position and movement — and you might even end up breaking an object or injuring yourself.
[…]
Greenspon and his research collaborators recently published papers in Nature Biomedical Engineering and Science documenting major progress on a technology designed to address precisely this problem: direct, carefully timed electrical stimulation of the brain that can recreate tactile feedback to give nuanced “feeling” to prosthetic hands.
[…]
The researchers’ approach to prosthetic sensation involves placing tiny electrode arrays in the parts of the brain responsible for moving and feeling the hand. On one side, a participant can move a robotic arm by simply thinking about movement, and on the other side, sensors on that robotic limb can trigger pulses of electrical activity called intracortical microstimulation (ICMS) in the part of the brain dedicated to touch.
For about a decade, Greenspon explained, this stimulation of the touch center could only provide a simple sense of contact in different places on the hand.
“We could evoke the feeling that you were touching something, but it was mostly just an on/off signal, and often it was pretty weak and difficult to tell where on the hand contact occurred,” he said.
[…]
By delivering short pulses to individual electrodes in participants’ touch centers and having them report where and how strongly they felt each sensation, the researchers created detailed “maps” of brain areas that corresponded to specific parts of the hand. The testing revealed that when two closely spaced electrodes are stimulated together, participants feel a stronger, clearer touch, which can improve their ability to locate and gauge pressure on the correct part of the hand.
The researchers also conducted exhaustive tests to confirm that the same electrode consistently creates a sensation corresponding to a specific location.
“If I stimulate an electrode on day one and a participant feels it on their thumb, we can test that same electrode on day 100, day 1,000, even many years later, and they still feel it in roughly the same spot,” said Greenspon, who was the lead author on this paper.
[…]
The complementary Science paper went a step further to make artificial touch even more immersive and intuitive. The project was led by first author Giacomo Valle, PhD, a former postdoctoral fellow at UChicago who is now continuing his bionics research at Chalmers University of Technology in Sweden.
“Two electrodes next to each other in the brain don’t create sensations that ‘tile’ the hand in neat little patches with one-to-one correspondence; instead, the sensory locations overlap,” explained Greenspon, who shared senior authorship of this paper with Bensmaia.
The researchers decided to test whether they could use this overlapping nature to create sensations that could let users feel the boundaries of an object or the motion of something sliding along their skin. After identifying pairs or clusters of electrodes whose “touch zones” overlapped, the scientists activated them in carefully orchestrated patterns to generate sensations that progressed across the sensory map.
Participants described feeling a gentle gliding touch passing smoothly over their fingers, despite the stimulus being delivered in small, discrete steps. The scientists attribute this result to the brain’s remarkable ability to stitch together sensory inputs and interpret them as coherent, moving experiences by “filling in” gaps in perception.
The approach of sequentially activating electrodes also significantly improved participants’ ability to distinguish complex tactile shapes and respond to changes in the objects they touched. They could sometimes identify letters of the alphabet electrically “traced” on their fingertips, and they could use a bionic arm to steady a steering wheel when it began to slip through the hand.
These advancements help move bionic feedback closer to the precise, complex, adaptive abilities of natural touch, paving the way for prosthetics that enable confident handling of everyday objects and responses to shifting stimuli.
[…]
“We hope to integrate the results of these two studies into our robotics systems, where we have already shown that even simple stimulation strategies can improve people’s abilities to control robotic arms with their brains,” said co-author Robert Gaunt, PhD, associate professor of physical medicine and rehabilitation and lead of the stimulation work at the University of Pittsburgh.
Greenspon emphasized that the motivation behind this work is to enhance independence and quality of life for people living with limb loss or paralysis.
In a pre-print paper titled “Novel AI Camera Camouflage: Face Cloaking Without Full Disguise,” David Noever, chief scientist, and Forrest McKee, data scientist, describe their efforts to baffle face recognition systems through the minimal application of makeup and manipulation of image files.
Noever and McKee recount various defenses that have been proposed against facial recognition systems, including CV Dazzle, which creates asymmetries using high-contrast makeup, adversarial attack graphics that confuse algorithms, and Juggalo makeup, which can be used to obscure jaw and cheek detection.
And of course, there are masks, which have the advantage of simplicity and tend to be reasonably effective regardless of the facial recognition algorithm being used.
But as the authors observe, these techniques draw attention.
“While previous efforts, such as CV Dazzle, adversarial patches, and Juggalo makeup, relied on bold, high-contrast modifications to disrupt facial detection, these approaches often suffer from two critical limitations: their theatrical prominence makes them easily recognizable to human observers, and they fail to address modern face detectors trained on robust key-point models,” they write.
“In contrast, this study demonstrates that effective disruption of facial recognition can be achieved through subtle darkening of high-density key-point regions (e.g., brow lines, nose bridge, and jaw contours) without triggering the visibility issues inherent to overt disguises.”
Image from the pre-print depicting a man’s face with Darth Maul-style makeup
The research focuses on two areas: applying minimal makeup to fool Haar cascade classifiers (used for object detection in machine learning), and hiding faces in image files by manipulating the alpha transparency layer in a way that keeps faces visible to human observers but conceals them from specific reverse image search systems like BetaFaceAPI and Microsoft Bing Visual Search.
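To see why two consumers of the same image file can disagree about what it contains, here is a deliberately simplified, hypothetical sketch (it does not reproduce the paper’s actual encoding). One function models an indexing pipeline that flattens transparency against a white background before analysis; the other models software that ignores the alpha channel entirely. A dark “face” pixel stored with near-zero alpha survives in one path and vanishes in the other:

```python
def composite_over_white(rgba):
    """Model a pipeline that flattens transparency before analysis."""
    r, g, b, a = rgba
    def blend(channel):
        # Standard alpha compositing against a white (255) background.
        return round(channel * a / 255 + 255 * (255 - a) / 255)
    return (blend(r), blend(g), blend(b))

def ignore_alpha(rgba):
    """Model software that simply drops the alpha channel."""
    r, g, b, _ = rgba
    return (r, g, b)

# A dark "face" pixel stored with fully transparent alpha:
face_pixel = (40, 30, 20, 0)
```

On face_pixel, ignore_alpha returns the original (40, 30, 20) while composite_over_white returns pure white (255, 255, 255): the same file yields recoverable face data to one consumer and a blank region to the other, which is the class of mismatch this kind of cloaking exploits.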
[…]
“Despite a lot of research, masks remain one of the few surefire ways of evading these systems [for now],” she said. “However, gait recognition is becoming quite powerful, and it’s also unclear if this will supplant face recognition. It is harder to imagine practical and effective evasion strategies against this technology.”
Are you one of the many reconsidering their relationship with Meta’s Instagram? If you’re looking for another outlet to share your photos, Pixelfed, an open source photo-sharing alternative without ads or tracking, has officially launched mobile apps for Android and iPhone.
Pixelfed, like Mastodon, is part of the Fediverse, meaning people on Mastodon can follow accounts on Pixelfed, and vice-versa. It also means that signing up can be a little confusing to Fediverse neophytes: When getting set up, you will need to choose a server in order to share photos and follow other users. The biggest server, pixelfed.social, is currently lagging due to a large influx of new users, so it’s worth considering the other options presented in the app itself (or browse this directory).
Remember when Instagram was fun?
Decentralization is interesting and laudable, sure, but the thing I like best about Pixelfed is that it feels like a return to Instagram’s glory days. As you might recall, Instagram used to be a photo sharing service. Yes, you technically can still share photos on Instagram, but it’s been a long time since that was the primary focus of the application. Your timeline, once filled with photos from people you follow, is these days dominated by ads and “recommended” videos from celebrities and strangers.
Despite some recent changes to give you back a little more control, Insta is also ruled by the algorithm, which means that when you post a photo, there’s less of a chance that your friends will actually see it. Because of this, the people you care about are probably posting fewer photos than they used to, which in turn frees up the algorithm to put more random videos in your timeline. It’s enough to make you wonder why anyone still uses the service—it’s certainly not for the reasons they signed up for it.
The Pixelfed mobile app, in contrast to Instagram’s current incarnation, is simple. You can scroll through the photos posted by people you’ve chosen to follow. You can see the most popular photos on your server, or the entire Fediverse. Or you can upload photos. These new applications technically aren’t the first Pixelfed apps; there were plenty of third-party applications that could access it, and those still exist. But now there’s also an official app, and it works pretty well.
Credit: Justin Pot
Another thing that’s missing from Pixelfed: ads and any kind of tracking. The developer team promises those “features” are never coming. In a Mastodon post, developer Daniel Supernault said “Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe,” adding that he turned down VC funding and plans to never add ads. “Pixelfed is for the people, period.”
I’m not the kind of tech journalist who likes to make predictions about which applications will “win.” I will say, though, that the internet would be a better place if that mentality were more common.
I don’t know if anyone else has noticed this but everything seems to be going down the tubes quite fast. And not fun tubes, like at a waterpark. The “ending in shit” kind. The issues are complicated, the reasons diverse, but there are a few culprits who have been making themselves extremely visible.
Alongside those holding political office, tech gragillionnaires (I had to invent a new number) like Elon Musk and Mark Zuckerberg obviously wield huge global influence with their computers and numbers and whatnot. There has been a lot written about them and there will be more, as they continue to shape the world and win favour with Donald Trump. Big, scary, probably ruinous things lie ahead. But I’m here to discuss the smaller part. The insult to injury, the sprinkling of salt in the wound.
Whether I am engaging with the news, or with Musk tweeting constantly like a man with no job or friends, or with Zuckerberg sending out weird videos and appearing on Rogan, I am in pain. Not just because I don’t like what they are doing but because they are so incredibly, painfully cringe.
I knew that one day we might have to watch as capitalism and greed and bigotry led to a world where powerful men, deserving or not, would burn it all down. What I didn’t expect, and don’t think I could have foreseen, is how incredibly cringe it would all be. I have been prepared for evil, for greed, for cruelty, for injustice – but I did not anticipate that the people in power would also be such huge losers.
[…]
Then there is Musk’s clear desperation, even as he holds this much wealth and power in his hands, to be thought of as cool. There are endless examples of him embarrassing himself while attempting to be funny or to gain respect. Unfortunately, while you may be able to buy power, it’s impossible to buy a good personality. Watching his Nigel-no-friends attempts to be popular, his endless pathetic tweets that read as though they come from the brain of an 11-year-old poser, has made me start to believe we should bring back bullying. If yet another humiliating report from the last couple of days is to be believed, he appears to have lost the respect of even some of his gamer audience, who reportedly suspect he has been lying about his achievements in hardcore gaming (cursed sentence).
Zuckerberg is a different kind of cringe – but cringe all the same. His cringe moments drip through more sparingly but, when they do, my body tries to turn inside out at my bellybutton. His physical makeover for Maga reasons, performing music because no one will stop him, trying to look cool on a surfboard – all these are extremely difficult to watch. He has been trying to suck up to Trump, going on Joe Rogan’s show to say society has been “neutered” and companies need “more masculine energy”.
Putting on what is clearly a bro disguise to join the boys’ club and sit at the big boy table – it should feel humiliating. This came as Zuckerberg rolled back hate speech and factchecking rules at Meta, in a clear swerve to the right before Trump’s inauguration. What could be more masculine and cool than selling out vulnerable communities and women to impress the alpha male?
Climate crises keep coming, genocides continue, women keep getting murdered, art is being strangled to death by AI, bigotry is on the rise, social progress is being rolled back … AND these men insist on being cringe? It’s a rotten cherry on top. This combination of evil and embarrassment is a unique horror, one that science fiction has failed to prepare us for. The second-hand embarrassment we have to endure gets even more potent when combined with other modern influences on young men, like Jordan Peterson and Andrew Tate.
Peterson is a big voice in men’s rights – well, a small Kermit’s voice in men’s rights – and he’s also an embarrassment. So much so that he has his own Know Your Meme page, which covers the time he reportedly retweeted an image from a fetish film, apparently believing it showed a Chinese communist “sperm extraction” facility. He deleted it shortly afterwards.
Tate is facing human trafficking charges but rose to fame as a voice for young men, a misogynist in bad outfits who does really cool things like smoking cigars, wearing sunnies inside and trying to drag human rights back 100 years.
Living your life to impress other men by hating women is one of the most embarrassing things I can imagine. Looking up to any of these men for how to live your life is even sadder.
I’ve worked hard to keep these kinds of men out of my personal life, to keep them away from me, out of my goddamn sight. Now they are in my face daily, not only influencing the world for the worse but making me nauseous at how uncool and pathetic they are, on top of their other sins. It’s too much, I can’t take it, there needs to be a change.
It’s time for us to start getting revenge on the nerds.
Meta has confirmed that the fact-checking feature on Facebook, Instagram and Threads will be removed only in the US for now, according to a Jan. 13 letter sent to Brazil’s government.
“Meta has already clarified that, at this time, it is terminating its independent Fact-Checking Program only in the United States, where we will test and refine the community notes [feature] before expanding to other countries,” Meta told Brazil’s Attorney General of the Union (AGU) in a Portuguese-translated letter.
Meta’s letter followed a 72-hour deadline Brazil’s AGU set for Meta to clarify to whom the removal of the third-party fact verification feature would apply.
It comes after Meta announced on Jan. 7 that it would remove the feature to ensure more “freedom of expression” on its platforms — as part of a broader effort to comply with corporate human rights policies.
Meta’s fact-checking program will be replaced with a community notes feature — similar to the one on Elon Musk’s X — in the US to strike a better balance between freedom of expression and security, Mark Zuckerberg’s company explained to Brazil’s AGU.
It acknowledged that abusive forms of expression might ensue and cause harm, but said it already has automated systems in place to identify and handle high-severity violations on its platforms — from terrorism and child sexual exploitation to fraud, scams and drug matters.
Brazil, however, has expressed dissatisfaction with Meta’s removal of its fact-checking feature, Attorney-General Jorge Messias said on Jan. 10.
“Brazil has rigorous legislation to protect children and adolescents, vulnerable populations, and the business environment, and we will not allow these networks to transform the environment into digital carnage or barbarity.”
In 2019, the names used by the USB Implementers Forum’s engineering teams to describe the various speeds of USB were leaked, and the backlash (including our own) was harsh. Names like “USB 3.2 Gen 2” mean nothing to consumers — but neither do marketing-style terms, such as “SuperSpeed USB 10Gbps.”
It’s the latter speed-only designation that became the default standard, where users cared less about numerical gobbledygook and more about just how fast a cable was. (Our reviews simply refer to the port by its shape, such as USB-A, and its speed, such as 5Gbps.) In 2022, the USB world settled upon an updated logo scheme that basically cut out everything but the speed of the device or cable.
Thankfully, the USB-IF has taken the extra step and extended its logo scheme to the latest versions of the USB specification, including USB4. It also removes “USB4v2” from consumer branding.
USB-IF
If you’re buying a USB4 or USB4v2 docking station, you’ll simply see a “USB 80Gbps” or “USB 40Gbps” logo on the side of the box now. While it may be a little disconcerting to see a new logo like this, at least you’ll know exactly what you’re buying.
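The renaming boils down to a simple lookup: each engineering spec name maps to a single speed-only consumer label. The Python sketch below illustrates the idea; the mapping is an illustrative summary compiled from the spec names and labels discussed in this article, not an official USB-IF data source, and the function name is my own.

```python
# Illustrative mapping from USB engineering spec names to the
# USB-IF's speed-only consumer branding (a summary, not official data).
SPEC_TO_BRANDING = {
    "USB 3.2 Gen 1": "USB 5Gbps",
    "USB 3.2 Gen 2": "USB 10Gbps",
    "USB 3.2 Gen 2x2": "USB 20Gbps",
    "USB4": "USB 40Gbps",
    "USB4 Version 2.0": "USB 80Gbps",
}

def consumer_label(spec_name: str) -> str:
    """Return the speed-only consumer label for an engineering spec name."""
    return SPEC_TO_BRANDING.get(spec_name, "unknown")

print(consumer_label("USB4 Version 2.0"))  # prints: USB 80Gbps
```

The point of the scheme is visible in the table itself: the consumer-facing string carries nothing but the transfer speed, which is the one number shoppers actually compare.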
This is a welcome move on several fronts. For one, USB-C ports typically go unlabeled on PCs, so you can’t be sure whether the USB-C port is an older 10Gbps port or a more modern USB4 or Thunderbolt port. (Thunderbolt 4 and USB4v2 are essentially identical, though Intel has its own certification process. Thunderbolt ports aren’t identified by speed, either.) USB-IF representatives told me that they’d heard a rumor that Dell would begin identifying its ports like the primary image above.
The USB-IF is applying common-sense logos to cables, too, informing users of each cable’s throughput and power-transmission capabilities.
Finally, the updated USB logos will also apply to cables. Jeff Ravencraft, president of the USB-IF, said that was done to clearly communicate the only things consumers cared about: what data speeds the cable supported and how much power it could pass between two devices.
The organization behind the decentralized social network Mastodon said Monday that it is planning to create a new nonprofit organization in Europe and hand over ownership of the entities responsible for key Mastodon ecosystem and platform components. This means no single person will control the entire project, a deliberate contrast with social networks run by CEOs like Elon Musk and Mark Zuckerberg.
While exact details are yet to be finalized, this means that Mastodon’s current CEO and creator, Eugen Rochko, will hand over management of the organization to the new entity and focus on product strategy.
The organization said that it will continue to host the mastodon.social and mastodon.online servers, which users can sign up for to join the ActivityPub-based network. Mastodon currently has 835,000 monthly active users spread across thousands of servers.
While Mastodon relies on donations and sponsorships to operate, Bluesky, its main competitor working on a rival decentralized network, is reportedly raising a new funding round from investors at a $700 million valuation.
The blog post noted that the new Europe-based nonprofit entity will wholly own the Mastodon GmbH for-profit entity. The organization is in the process of finalizing the place where the new entity will be set up.
[…]
In the past few months, the ownership of open source projects has been a recurring news subject. For instance, people have questioned WordPress co-creator Matt Mullenweg’s control of certain WordPress community projects. With today’s new structure, Mastodon is trying to avoid situations where a single person holds all the decision-making power.