Imgur To Ban Nudity, Sexual or Otherwise Unsettling Content Next Month

Online image hosting service Imgur is updating its Terms of Service on May 15th to prohibit nudity and sexually explicit content, among other things. The news arrived in an email sent to “Imgurians”. The changes have since been outlined on the company’s “Community Rules” page, which reads: Imgur welcomes a diverse audience. We don’t want to create a bad experience for someone that might stumble across explicit images, nor is it in our company ethos to support explicit content, so some lascivious or sexualized posts are not allowed. This may include content containing:

– the gratuitous or explicit display of breasts, butts, and sexual organs intended to stimulate erotic feelings
– full or partial nudity
– any depiction of sexual activity, explicit or implied (drawings, print, animated, human, or otherwise)
– any image taken of or from someone without their knowledge or consent for the purpose of sexualization
– solicitation (the uninvited act of directly requesting sexual content from another person, or selling/offering explicit content and/or adult services)

Content that might be taken down may include: see-thru clothing, exposed or clearly defined genitalia, some images of female nipples/areolas, spread eagle poses, butts in thongs or partially exposed buttocks, close-ups, upskirts, strip teases, cam shows, sexual fluids, private photos from a social media page, or linking to sexually explicit content. Sexually explicit comments that don’t include images may also be removed.

Artistic, scientific or educational nude images shared with educational context may be okay here. We don’t try to define art or judge the artistic merit of particular content. Instead, we focus on context and intent, as well as what might make content too explicit for the general community. Any content found to be sexualizing and exploiting minors will be removed and, if necessary, reported to the National Center for Missing & Exploited Children (NCMEC). This applies to photos, videos, animated imagery, descriptions and sexual jokes concerning children. The company is also prohibiting hate speech, abuse or harassment, content that condones illegal or violent activity, gore or shock content, spam or prohibited behavior, content that shares personal information, and posts in general that violate Imgur’s terms of service. Meanwhile, “provocative, inflammatory, unsettling, or suggestive content should be marked as Mature,” says Imgur.

Source: Imgur To Ban Nudity Or Sexually Explicit Content Next Month – Slashdot

Wow, the Americans have really gotten into prudery and are going back to medieval times if they feel the need to do this. You would have thought the Michelangelo statue thing would have maybe had them thinking about how strange this all is, but no. And this from the country that brought you the summer of love, Playboy and Penthouse.


Medusa ransomware crew boasts of Microsoft Bing and Cortana code leak

The Medusa ransomware gang has put online what it claims is a massive leak of internal Microsoft materials, including Bing and Cortana source code.

“This leak is of more interest to programmers, since it contains the source codes of the following Bing products, Bing Maps and Cortana,” the crew wrote on its website, which was screenshotted and shared by Emsisoft threat analyst Brett Callow.

“There are many digital signatures of Microsoft products in the leak. Many of them have not been recalled,” the gang continued. “Go ahead and your software will be the same level of trust as the original Microsoft product.”

Obviously, this could be a dangerous level of trust to give miscreants developing malware. Below is Callow’s summary of the purported dump of source code presumably obtained or stolen somehow from Microsoft.

To be clear: we don’t know if the files are legit. Microsoft didn’t respond to The Register‘s request for comment, and ransomware gangs aren’t always the most trustworthy sources of information.

“At this point, it’s unclear whether the data is what it’s claimed to be,” Emsisoft’s Callow told The Register. “Also unclear is whether there’s any connection between Medusa and Lapsus$ but, with hindsight, certain aspects of their modus operandi does have a somewhat Lapsus$ish feel.”

He’s referring to a March 2022 security breach in which Lapsus$ claimed it broke into Microsoft’s internal DevOps environment and stole, then leaked, about 37GB of information including what the extortionists claimed to be Bing and Cortana’s internal source code, and WebXT compliance engineering projects.

Microsoft later confirmed Lapsus$ had compromised its systems, and tried to downplay the intrusion by insisting “no customer code or data was involved in the observed activities.”

“Microsoft does not rely on the secrecy of code as a security measure and viewing source code does not lead to elevation of risk,” it added, which is a fair point. Software should be and can be made secure whether its source is private or open.

And Lapsus$, of course, is the possibly extinct extortion gang led by teenagers who went on a cybercrime spree last year before the arrest of its alleged ringleaders. Before that, however, it stole data from Nvidia, Samsung, Okta, and others.

It could be that Medusa is spreading around stuff that was already stolen and leaked.

[…]

Source: Medusa ransomware crew boasts of Microsoft code leak • The Register

Why Video Editors are Switching to DaVinci Resolve in Droves

Video editors are flocking to DaVinci Resolve in droves, marking a major paradigm shift in the editing landscape that we haven’t seen since the dreadful launch of Final Cut Pro X drove users to Adobe Premiere Pro.

[…]

More a conglomeration of tools than a single program, Resolve came through some acquisitions Blackmagic made when creating a broadcast and cine ecosystem.

Comprised of an editing tool, a color correction tool, an audio editor, and an effects tool, Resolve is essentially multiple programs that all integrate so seamlessly that they function as a single application.

The color correction tools in Resolve are particularly well regarded, and many films and shows were color graded in Resolve even if they were edited in another program. The same applies to Fairlight, the audio component of Resolve, the go-to tool for many of Hollywood’s most prominent audio engineers.

In 2011, Blackmagic decided to release Resolve as both a paid and a free version. The free version had fewer features than the full version (as it still does), but instead of being crippled, the free version works well enough for most users, with the paid version feeling like a feature upgrade.

[…]

There are a few key differences between the free and Studio version. Studio supports more video formats (and completes 4Kp60 workflows), uses the GPU more efficiently, has more effects, and fully supports the product’s audio, color, and effects tools.

It’s not the price alone that has caused a mass adoption of the program, though. It’s the company’s approach to updates as well.

Features

Blackmagic has never hesitated to put a feature into Resolve. The program has many options in contextual menus, user interface choices, menu items, keyboard shortcuts, and more.

There is so much here that it can be overwhelming. Finding the tool I want in a contextual menu is often the most challenging part of my editing. But if there’s something that can be done in video editing, a button, icon, or menu will probably perform the task.

Blackmagic also releases dot-versions (like 18.1) that sometimes add enough features that it acts like a full number upgrade would if it were released by Adobe or Apple. Some of the features in Resolve 18.1, for example, unleashed the wave of recent switchers.

Two significant features are buried in a list of around 20 new features in that update. The first is AI-driven Magic Mask tools that make masking people or objects a matter of drawing a line. The other prominent feature is voice isolation, another AI-based feature that removes noises from dialog tracks.

Magic Mask alone is worth the price of admission. This tool makes it easy to color-correct significant portions of a shot without doing endless mask adjustments, and it also allows for instant alpha channel creation, allowing for items like text, graphics or even people to be superimposed on the same scene without needing a green screen.

In noisy environments, the voice isolation tool performs amazingly. I’ve used it to eliminate leaf blowers and lawnmowers in the background of outdoor shoots, and I’ve seen it used to cancel out hair dryers and drill guns in sample videos on some channels.

[…]

The Speed Editor costs $295 and comes with a Resolve Studio license, making it worth the cost even if you barely use it.

The Blackmagic Speed Editor deck is an excellent piece of hardware, though many of its functions are out of my league. Buttons are arranged where a seasoned editor would expect them. Cinematographers, especially those working on multi-cam shoots, will benefit from this editor.

Or at least that’s what my seasoned editor friend tells me. The unit feels odd in my hands because I don’t use most of the keys. One central portion of the Speed Editor is dedicated to switching between up to nine cameras, but the device has encouraged me to do more multi-cam shoots since the keyboard makes that editing smooth.

The keyboard, which connects via USB-C cable or Bluetooth, is labeled with the essential editing functions, which is very helpful for new Resolve users. Instead of memorizing the location of essential keys on a standard keyboard, new users can look at the Speed Editor and focus on learning editing workflow instead of shortcuts.

On the other hand, many seasoned editors already know all the keyboard shortcuts on a standard keyboard and have made their custom keyboard configurations to support their editing style. Even though I’m a new Resolve editor, many tasks are performed the same as Final Cut, so I moved toward the regular keyboard shortcuts.

The Speed Editor is an excellent example of the complete Blackmagic ecosystem, which is why the free program and Studio are low-cost.

[…]

Just after finishing this article, Blackmagic announced a new version of Resolve, which adds several compelling features including transcriptions, subtitles, and the ability to edit clips by selecting text.

[…]

Source: Why Video Editors are Switching to DaVinci Resolve in Droves | PetaPixel

Scientists Identify Mind-Body Nexus In Human Brain

An anonymous reader quotes a report from Reuters: Researchers said on Wednesday they have discovered that parts of the brain region called the motor cortex that govern body movement are connected with a network involved in thinking, planning, mental arousal, pain, and control of internal organs, as well as functions such as blood pressure and heart rate. They identified a previously unknown system within the motor cortex manifested in multiple nodes that are located in between areas of the brain already known to be responsible for movement of specific body parts — hands, feet and face — and are engaged when many different body movements are performed together.

The researchers called this system the somato-cognitive action network, or SCAN, and documented its connections to brain regions known to help set goals and plan actions. This network also was found to correspond with brain regions that, as shown in studies involving monkeys, are connected to internal organs including the stomach and adrenal glands, allowing these organs to change activity levels in anticipation of performing a certain action. That may explain physical responses like sweating or increased heart rate caused by merely pondering a difficult future task, they said. “Basically, we now have shown that the human motor system is not unitary. Instead, we believe there are two separate systems that control movement,” said radiology professor Evan Gordon of the Washington University School of Medicine in St. Louis, lead author of the study.

“One is for isolated movement of your hands, feet and face. This system is important, for example, for writing or speaking – movements that need to involve only the one body part. A second system, the SCAN, is more important for integrated, whole body movements, and is more connected to high-level planning regions of your brain,” Gordon said.

“Modern neuroscience does not include any kind of mind-body dualism. It’s not compatible with being a serious neuroscientist nowadays. I’m not a philosopher, but one succinct statement I like is saying, ‘The mind is what the brain does.’ The sum of the bio-computational functions of the brain makes up ‘the mind,'” said study senior author Nico Dosenbach, a neurology professor at Washington University School of Medicine. “Since this system, the SCAN, seems to integrate abstract plans-thoughts-motivations with actual movements and physiology, it provides additional neuroanatomical explanation for why ‘the body’ and ‘the mind’ aren’t separate or separable.”

The findings have been published in the journal Nature.

Source: Scientists Identify Mind-Body Nexus In Human Brain – Slashdot

💡 Pause AI Doomster Pessimism: An Open Letter – a call on AI doomsters to immediately pause for at least 6 months the alarmism that is hurting human progress.

AI systems with human-competitive intelligence can offer significant benefits to society and humanity, as demonstrated by extensive research and acknowledged by top AI labs. Advanced AI has the potential to revolutionize the way we live, work, and interact with one another, and it should be welcomed and guided with optimism and foresight. Regrettably, recent months have seen growing pessimism and alarmism about AI development, despite the immense potential benefits.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Can we leverage machines to enhance our information channels with accurate and valuable insights? Can we automate mundane tasks to free up time for more fulfilling and meaningful pursuits? Can we develop nonhuman minds that might complement, augment, and collaborate with us? Can we harness AI to help solve pressing global issues? Such decisions should be made collectively, in a spirit of cooperation and with a focus on the greater good.

To counteract the pessimism and alarmism, we call on all stakeholders to immediately pause for at least 6 months their doomsday thinking and shift their focus to the potential benefits of AI. This pause should be public and verifiable, and include all key actors. Governments should support and encourage AI development that benefits all of humanity.

Problems with AI shouldn’t be ignored. AI labs and independent experts should work together to jointly develop and implement a set of shared safety protocols for advanced AI design and development. While doing so, it is essential to continue focusing on the potential benefits of AI development, as they promise to bring transformative advancements to various aspects of our lives.

[…]

Source: 💡 Pause AI Doomster Pessimism: An Open Letter

Absolutely agree!

Undercutting Microsoft, Amazon Offers Free Access to Its AI Coding Assistant ‘CodeWhisperer’

Amazon is making its AI-powered coding assistant CodeWhisperer free for individual developers, reports the Verge, “undercutting the $10 per month pricing of its Microsoft-made rival.” Amazon launched CodeWhisperer as a preview last year, which developers can use within various integrated development environments (IDEs), like Visual Studio Code, to generate lines of code based on a text-based prompt….

CodeWhisperer automatically filters out any code suggestions that are potentially biased or unfair and flags any code that’s similar to open-source training data. It also comes with security scanning features that can identify vulnerabilities within a developer’s code, while providing suggestions to help close any security gaps it uncovers. CodeWhisperer now supports several languages, including Python, Java, JavaScript, TypeScript, and C#, as well as Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL, and Scala.
Here’s how Amazon’s senior developer advocate pitched the usefulness of their “real-time AI coding companion”: Helping to keep developers in their flow is increasingly important as, facing increasing time pressure to get their work done, developers are often forced to break that flow to turn to an internet search, sites such as StackOverflow, or their colleagues for help in completing tasks. While this can help them obtain the starter code they need, it’s disruptive as they’ve had to leave their IDE environment to search or ask questions in a forum or find and ask a colleague — further adding to the disruption. Instead, CodeWhisperer meets developers where they are most productive, providing recommendations in real time as they write code or comments in their IDE. During the preview we ran a productivity challenge, and participants who used CodeWhisperer were 27% more likely to complete tasks successfully and did so an average of 57% faster than those who didn’t use CodeWhisperer….
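
To make the comment-driven workflow concrete, here is a purely hypothetical illustration (not taken from Amazon’s documentation; the prompt comment and the suggested function are both invented for this sketch) of the kind of inline suggestion such a tool produces while you type:

```python
# Hypothetical illustration of an AI coding assistant completing code from a
# natural-language comment. The comment acts as the prompt; the function body
# below it is the sort of suggestion a tool like CodeWhisperer might offer
# inline, which the developer can accept, edit, or reject.

# function to check whether a string is a valid IPv4 address
def is_valid_ipv4(address: str) -> bool:
    parts = address.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        # each octet must be a plain number in the range 0-255
        if not part.isdigit():
            return False
        if not 0 <= int(part) <= 255:
            return False
    return True


if __name__ == "__main__":
    print(is_valid_ipv4("192.168.0.1"))  # True
    print(is_valid_ipv4("256.1.1.1"))    # False
```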

It provides additional data for suggestions — for example, the repository URL and license — when code similar to training data is generated, helping lower the risk of using the code and enabling developers to reuse it with confidence.

Source: Undercutting Microsoft, Amazon Offers Free Access to Its AI Coding Assistant ‘CodeWhisperer’ – Slashdot

Pacific garbage patch providing a deep ocean home for coastal species

A survey of plastic waste picked up in the North Pacific Subtropical Gyre—aka the Giant Pacific Garbage Patch—has revealed that the garbage is providing a home to species that would otherwise not be found in the deep ocean. Over two-thirds of the trash examined plays host to coastal marine species, many of which are clearly reproducing in what would otherwise be a foreign habitat.

The findings suggest that, as far as coastal species are concerned, there was nothing inhospitable about the open ocean other than the lack of something solid to latch on to.

[…]

To find out whether that was taking place, the researchers collected over 100 plastic debris items from the North Pacific Subtropical Gyre in late 2018/early 2019. While a handful of items could be assigned to either Asian or North American origins, most were pretty generic, such as rope and fishing netting. There was a wide variety of other items present, including bottles, crates, buckets, and household items. Some had clearly eroded significantly since their manufacture, suggesting they had been in the ocean for years.

Critically, nearly all of them had creatures living on them.

Far from home

Ninety-eight percent of the items found had some form of invertebrate living on them. In almost all cases, that included species found in the open ocean (just shy of 95 percent of the plastic). But a handful had nothing but coastal species present. And over two-thirds of the items had a mixed population of coastal and open-ocean species.

While the open-ocean species were found on more items, the researchers tended to find the same species repeatedly. That isn’t surprising, given that species adapted for a sedentary existence near the surface are infrequent in that environment. By contrast, there was far more species diversity among the coastal species that had hitched a ride out into the deeps. All told, coastal species accounted for 80 percent of the 46 taxa represented by the organisms identified.

On a per-item basis, species richness was low, with an average of only four species per item. This suggests that the primary barrier to a species colonizing an item is simply the low probability of finding it in the first place.

Significantly, the coastal species were breeding. In a number of cases, the researchers were able to identify females carrying eggs; in others, it was clear that the individuals present had a wide range of sizes, suggesting they were at different stages of maturity. Many of the species that were reproducing do so asexually, which simplifies the issue of finding a mate. Also common was a developmental pathway that skips larval stages. For many species, the larval stage is free-ranging, which would make them unlikely to re-colonize the same hunk of plastic.

The species that seemed to do best were often omnivores, or engaged in grazing or filter feeding, all options that are relatively easy to pursue without leaving the piece of plastic they called home.

A distinct ecology

One thing that struck the researchers was that the list of species present on the plastic of the North Pacific Subtropical Gyre was distinct from that found on tsunami debris. Part of that may be that some items swept across the ocean by the tsunami, like docks and boats, already had established coastal communities on them when they were lost to the sea.

[…]

With the possible exception of fishing gear and buoys, however, these plastic items likely picked up their inhabitants while passing through coastal ecosystems that were largely intact. So the colonization of these items likely represents a distinct—and ongoing—ecological process.

It also has the potential to have widespread effects on coastal ecology. While the currents that create the North Pacific Subtropical Gyre largely trap items within the Gyre, it is home to island habitats that could potentially be colonized. And it is possible that some items can cross oceans without being caught in a gyre, potentially making exchanges between coasts a relatively common occurrence in the age of plastics.

Finally, the researchers caution against a natural tendency to think of these plastic-borne coastal species as “misplaced species in an unsuitable habitat.” Instead, it appears that they are well suited to life in the open ocean as long as there’s something there that they can latch on to.

Nature Ecology & Evolution, 2023. DOI: 10.1038/s41559-023-01997-y

Source: Pacific garbage patch providing a deep ocean home for coastal species | Ars Technica

International Partners Publish Secure-by-Design and -Default Principles and Approaches Guide – but don’t link to guide in press release

The Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA), and the cybersecurity authorities of Australia, Canada, the United Kingdom, Germany, the Netherlands, and New Zealand (CERT NZ, NCSC-NZ) published today “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default.” This joint guidance urges software manufacturers to take urgent steps necessary to ship products that are secure-by-design and -default. To create a future where technology and associated products are safe for customers, the authoring agencies urge manufacturers to revamp their design and development programs to permit only secure-by-design and -default products to be shipped to customers.

This guidance, the first of its kind, is intended to catalyze progress toward further investments and cultural shifts necessary to achieve a safe and secure future. In addition to specific technical recommendations, this guidance outlines several core principles to guide software manufacturers in building software security into their design processes prior to developing, configuring, and shipping their products, including:

  • Take ownership of the security outcomes of their technology products, shifting the burden of security from the customers. A secure configuration should be the default baseline, in which products automatically enable the most important security controls needed to protect enterprises from malicious cyber actors.
  • Embrace radical transparency and accountability—for example, by ensuring vulnerability advisories and associated common vulnerability and exposure (CVE) records are complete and accurate.
  • Build the right organizational structure by providing executive level commitment for software manufacturers to prioritize security as a critical element of product development.

[…]

With this joint guide, the authoring agencies seek to progress an international conversation about key priorities, investments, and decisions necessary to achieve a future where technology is safe, secure, and resilient by design and default. Feedback on this guide is welcome and can be sent to SecureByDesign@cisa.dhs.gov.

Source: U.S. and International Partners Publish Secure-by-Design and -Default Principles and Approaches | CISA

Not having the guide linked in the press release means people have to search for it, which means it’s a great target for an attack. Not really secure at all!

So I have the link to the PDF guide; it’s here.

The AI Doomers’ Playbook

I have posted on this a few times and to me it’s shocking to see these fabricated sci-fi doomsday predictions about AI. AI / ML is a tool which we use, just like video games (that don’t cause violence in kids), roleplaying games (which don’t cause satanism), a telephone (which yes, can be used in planning crimes but most usually isn’t – and the paper post is the same), search engines (which can be used to search up how to make explosives but most usually aren’t), knives (which can be used to stab people but are most usually found in a food setting). This isn’t to say that the use of tools shouldn’t be regulated. Dinner knives have a certain maximum size. Video games and books with hate and violence inducing content are censored. Phone calls can be tapped and post opened if there is probable cause. Search engines can be told not to favour products the parent company owns. And the EU AI act is a good step on the way to ensuring that AI tools aren’t dangerous.

The technology is still a long long way off from an AI being smart enough to be at all evil and planet destroying.

Below is an excellent run-through of some of the biggest AI doomers, what they mean, and how their self-interest is served by being doomerist.

AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.

When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage stayed in the tabloids, which are known to be sensationalized, that would be fine.

But recently, prestige news outlets have decided to promote the same level of populist scaremongering: The New York Times published “If we don’t master AI, it will master us” (by Harari, Harris & Raskin), and TIME magazine published “Be willing to destroy a rogue datacenter by airstrike” (by Yudkowsky).

In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.

In order to understand the rise of AI Doomerism, here are some influential figures responsible for mainstreaming doomsday scenarios. This is not the full list of AI doomers, just the ones that recently shaped the AI panic cycle (so I‘m focusing on them).

AI Panic Marketing: Exhibit A: Sam Altman.

Sam Altman has a habit of urging us to be scared. “Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” he tweeted. “If you’re making AI, it is potentially very good, potentially very terrible,” he told the WSJ. When he shared the bad-case scenario of AI with Connie Loizos, it was “lights out for all of us.”

In an interview with Kara Swisher, Altman expressed how he is “super-nervous” about authoritarians using this technology. He elaborated in an ABC News interview: “A thing that I do worry about is … we’re not going to be the only creator of this technology. There will be other people who don’t put some of the safety limits that we put on it. I’m particularly worried that these models could be used for large-scale disinformation.” These models could also “be used for offensive cyberattacks.” So, “people should be happy that we are a little bit scared of this.” He repeated this message in his following interview with Lex Fridman: “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”

Having shared this story in 2016, it shouldn’t come as a surprise: “My problem is that when my friends get drunk, they talk about the ways the world will END.” One of the “most popular scenarios would be A.I. that attacks us.” “I try not to think about it too much,” Altman continued. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

(Wouldn’t it be easier to just cut back on the drinking and substance abuse?).

Altman’s recent post “Planning for AGI and beyond” is as bombastic as it gets: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.”

It is at this point that you might ask yourself, “Why would someone frame his company like that?” Well, that’s a good question. The answer is that making OpenAI’s products “the most important and scary – in human history” is part of its marketing strategy. “The paranoia is the marketing.”

“AI doomsaying is absolutely everywhere right now,” described Brian Merchant in the LA Times. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake – or unmake – the world, wants it.” Merchant explained Altman’s science fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”

During the Techlash days in 2019, which focused on social media, Joseph Bernstein explained how the alarm over disinformation (e.g., “Cambridge Analytica was responsible for Brexit and Trump’s 2016 election”) actually “supports Facebook’s sales pitch”:

“What could be more appealing to an advertiser than a machine that can persuade anyone of anything?”

This can be applied here: The alarm over AI’s magic power (e.g., “replacing humans”) actually “supports OpenAI’s sales pitch”:

“What could be more appealing to future AI employees and investors than a machine that can become superintelligence?”

AI Panic as a Business. Exhibit A & B: Tristan Harris & Eliezer Yudkowsky.

Altman is at least using apocalyptic AI marketing for actual OpenAI products. The worst kind of doomers are those whose AI panic is their product, their main career, and their source of income. A prime example is the Effective Altruism institutes that claim to be the superior few who can save us from a hypothetical AGI apocalypse.

In March, Tristan Harris, Co-Founder of the Center for Humane Technology, invited leaders to a lecture on how AI could wipe out humanity. To begin his doomsday presentation, he stated: “What nukes are to the physical world … AI is to everything else.”

Steven Levy summarized that lecture at WIRED, saying, “We need to be thoughtful as we roll out AI. But hard to think clearly if it’s presented as the apocalypse.” Apparently, after the “Social Dilemma” has been completed, Tristan Harris is now working on the AI Dilemma. Oh boy. We can guess how it’s going to look (The “nobody criticized bicycles” guy will make a Frankenstein’s monster/Pandora’s box “documentary”).

In the “Social Dilemma,” he promoted the idea that “Two billion people will have thoughts that they didn’t intend to have” because of the designers’ decisions. But, as Lee Visel pointed out, Harris didn’t provide any evidence that social media designers actually CAN purposely force us to have unwanted thoughts.

Similarly, there’s no need for evidence now that AI is worse than nuclear power; simply thinking about this analogy makes it true (in Harris’ mind, at least). Did a social media designer force him to have this unwanted thought? (Just wondering).

To further escalate the AI panic, Tristan Harris published an OpEd in The New York Times with Yuval Noah Harari and Aza Raskin. Among their overdramatic claims: “We have summoned an alien intelligence,” “A.I. could rapidly eat the whole human culture,” and AI’s “godlike powers” will “master us.”

Another statement in this piece was, “Social media was the first contact between A.I. and humanity, and humanity lost.” I found it funny as it came from two men with hundreds of thousands of followers (@harari_yuval 540.4k, @tristanharris 192.6k), who use their social media megaphone … for fear-mongering. The irony is lost on them.

“This is what happens when you bring together two of the worst thinkers on new technologies,” added Lee Vinsel. “Among other shared tendencies, both bloviate free of empirical inquiry.”

This is where we should be jealous of AI doomers. Having no evidence and no nuance is extremely convenient (when your only goal is to attack an emerging technology).

Then came the famous “Open Letter.” This petition from the Future of Life Institute lacked a clear argument or a trade-off analysis. There were only rhetorical questions, like, should we develop imaginary “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” They provided no evidence to support the claim that advanced LLMs pose an unprecedented existential risk. There were a lot of highly speculative assumptions. Yet, they demanded an immediate 6-month pause on training AI systems and argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.”

Please keep in mind that (1). A $10 million donation from Elon Musk launched the Future of Life Institute in 2015. Out of its total budget of 4 million euros for 2021, Musk Foundation contributed 3.5 million euros (the biggest donor by far). (2). Musk once said that “With artificial intelligence, we are summoning the demon.” (3). Due to this, the institute’s mission is to lobby against extinction, misaligned AI, and killer robots.

“The authors of the letter believe they are superior. Therefore, they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI,” responded Keith Teare (CEO SignalRank Corporation). “They are taking a paternalistic view of the entire human race, saying, ‘You can’t trust these people with this AI.’ It’s an elitist point of view.”

“It’s worth noting the letter overlooked that much of this work is already happening,” added Spencer Ante (Meta Foresight). “Leading providers of AI are taking AI safety and responsibility very seriously, developing risk-mitigation tools, best practices for responsible use, monitoring platforms for misuse, and learning from human feedback.”

Next, because he thought the open letter didn’t go far enough, Eliezer Yudkowsky took “PhobAI” too far. First, Yudkowsky asked us all to be afraid of made-up risks and an apocalyptic fantasy he has about “superhuman intelligence” “killing literally everyone” (or “kill everyone in the U.S. and in China and on Earth”). Then, he suggested that “preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” By explicitly advocating violent solutions to AI, we have officially reached the height of hysteria.

“Rhetoric from AI doomers is not just ridiculous. It’s dangerous and unethical,” responded Yann LeCun (Chief AI Scientist, Meta). “AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.”

“You stand a far greater chance of dying from lightning strikes, collisions with deer, peanut allergies, bee stings & ignition or melting of nightwear – than you do from AI,” Michael Shermer wrote to Yudkowsky. “Quit stoking irrational fears.”

The problem is that “irrational fears” sell. They are beneficial to the ones who spread them.

How to Spot an AI Doomer?

On April 2nd, Gary Marcus asked: “Confused about the terminology. If I doubt that robots will take over the world, but I am very concerned that a massive glut of authoritative-seeming misinformation will undermine democracy, do I count as a ‘doomer’?”

One of the answers was: “You’re a doomer as long as you bypass participating in the conversation and instead appeal to populist fearmongering and lobbying reactionary, fearful politicians with clickbait.”

Considering all of the above, I decided to define “AI doomer” and provide some criteria:

How to spot an AI Doomer?

  • Making up fake scenarios in which AI will wipe out humanity
  • Don’t even bother to have any evidence to back up those scenarios
  • Watched/read too much sci-fi
  • Says that due to AI’s God-like power, it should be stopped
  • Only he (& a few “chosen ones”) can stop it
  • So, scared/hopeless people should support his endeavor ($)

Then, Adam Thierer added another characteristic:

  • Doomers tend to live in a tradeoff-free fantasy land.

Doomers have a general preference for very amorphous, top-down Precautionary Principle-based solutions, but they (1) rarely discuss how (or if) those schemes would actually work in practice, and (2) almost never discuss the trade-offs/costs their extreme approaches would impose on society/innovation.

Answering Gary Marcus’ question, I do not think he qualifies as a doomer. You need to meet all criteria (he does not). Meanwhile, Tristan Harris and Eliezer Yudkowsky meet all seven.

Are they ever going to stop this “Panic-as-a-Business”? If the apocalyptic catastrophe doesn’t occur, will the AI doomers ever admit they were wrong? I believe the answer is “No.”

Doomsday cultists don’t question their own predictions. But you should.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Source: The AI Doomers’ Playbook | Techdirt

 

Need To Pick Objects Out Of Images? Segment Anything Does Exactly That

Segment Anything, recently released by Facebook Research, does something that most people who have dabbled in computer vision have found daunting: reliably figure out which pixels in an image belong to an object. Making that easier is the goal of the Segment Anything Model (SAM), just released under the Apache 2.0 license.

The online demo has a bank of examples, but also works with uploaded images.

The results look fantastic, and there’s an interactive demo available where you can play with the different ways SAM works. One can pick out objects by pointing and clicking on an image, or images can be automatically segmented. It’s frankly very impressive to see SAM make masking out the different objects in an image look so effortless. What makes this possible is machine learning, and part of that is the fact that the model behind the system has been trained on a huge dataset of high-quality images and masks, making it very effective at what it does.

Once an image is segmented, those masks can be used to interface with other systems like object detection (which identifies and labels what an object is) and other computer vision applications. Such systems work more robustly if they already know where to look, after all. This blog post from Meta AI goes into some additional detail about what’s possible with SAM, and fuller details are in the research paper.
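
To make the point-and-click workflow concrete, here is a minimal sketch using the segment_anything package, based on the usage shown in the project’s README at release. The model type, checkpoint filename, image path, and the example point are assumptions; check the repository for current details.

```python
import numpy as np
import cv2  # used only to load the image as an RGB array
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint. The model type and checkpoint filename below are the
# ones published in the repo's README; treat them as assumptions and point
# these at whatever checkpoint you actually downloaded.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Prepare the image once; SAM computes an embedding it reuses for many prompts.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# "Point and click": a single foreground point (x, y) is enough of a prompt.
point = np.array([[500, 375]])  # hypothetical pixel coordinates
label = np.array([1])           # 1 = foreground, 0 = background

masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,      # SAM returns a few candidate masks
)
best = masks[np.argmax(scores)]  # boolean array, True where the object is
print(best.shape, scores)
```

The fully automatic mode works through SamAutomaticMaskGenerator in the same package, which returns one mask per region it finds in the image.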

Systems like this rely on quality datasets. Of course, nothing beats a great collection of real-world data but we’ve also seen that it’s possible to machine-generate data that never actually existed, and get useful results.

Source: Need To Pick Objects Out Of Images? Segment Anything Does Exactly That | Hackaday

A Computer Generated Swatting Service Is Causing Havoc Across America

[…]

Motherboard has found, this synthesized call and another against Hempstead High School were just one small part of a months-long, nationwide campaign of dozens, and potentially hundreds, of threats made by one swatter in particular who has weaponized computer generated voices. Known as “Torswats” on the messaging app Telegram, the swatter has been calling in bomb and mass shooting threats against high schools and other locations across the country.

[…]

For $75, Torswats says they will close down a school. For $50, Torswats says customers can buy “extreme swattings,” in which authorities will handcuff the victim and search the house. Torswats says they offer discounts to returning customers, and can negotiate prices for “famous people and targets such as Twitch streamers.” Torswats says on their Telegram channel that they take payment in cryptocurrency.

[…]

Torswats’ use of synthetic voices allows them to carry out swatting threats at scale with relatively little effort, while also concealing what their own voice sounds like.

[…]

Motherboard’s reporting on Torswats comes as something of a nationwide swatting trend spreads across the United States. In October, NPR reported that 182 schools in 28 states received fake threat calls. Torswats’ use of a computer generated voice also comes as the rise of artificial intelligence poses even greater risks to those who may face harassment online. In February, Motherboard reported that someone had doxed and harassed a series of voice actors by having an artificial intelligence program read out their home addresses

[…]

On their Telegram channel, Torswats has uploaded at least 35 distinct recordings of calls they appear to have made. Torswats may have made many more swatting calls on others’ behalf, though: each filename includes a number, with the most recent going up to 170. Torswats also recently shuttered their channel before reappearing on Telegram in February.

In all of those 35 recordings except two, Torswats appears to have used a synthesized voice. The majority of the calls are made with a fake male-sounding voice; several include a female-sounding voice which also appears to be computer generated.

Torswats is seemingly able to change what the voice is saying in something close to real-time in order to respond to the operator’s questions. These sometimes include “where are you located,” “what happened,” and “what is your name?”

[…]

After publication of this article, Torswats deleted the audio recordings from their Telegram channel and claimed they were stopping the service for at least one month. “Time to dip a bit,” they wrote on the channel.

Source: A Computer Generated Swatting Service Is Causing Havoc Across America

Disabling Intel and AMD’s Backdoors On Modern Computers

Despite some companies making strides with ARM, for the most part, the desktop and laptop space is still dominated by x86 machines. For all their advantages, they have a glaring flaw for anyone concerned with privacy or security in the form of a hardware backdoor that can access virtually any part of the computer even with the power off. AMD calls their system the Platform Security Processor (PSP) and Intel’s is known as the Intel Management Engine (IME).

To fully disable these co-processors a computer from before 2008 is required, but if you need more modern hardware than that which still respects your privacy and security concerns you’ll need to either buy an ARM device, or disable the IME like NovaCustom has managed to do with their NS51 series laptop.

NovaCustom specializes in building custom laptops with customizations for various components and specifications to fit their needs, including options for the CPU, GPU, RAM, storage, keyboard layout, and other considerations. They favor Coreboot as a bootloader, which already goes a long way to eliminating proprietary closed-source software at a fundamental level, but not all Coreboot machines have the IME completely disabled. There are two ways to do this: the HECI method, which is better than nothing but not fully trusted, and the HAP bit, which completely disables the IME. NovaCustom is using the HAP bit approach to disable the IME, meaning that although it’s not completely eliminated from the computer, it is turned off in a way that’s at least good enough for computers that the NSA uses.

There are a lot of new computer manufacturers building conscientious hardware nowadays, but (with the notable exception of System76) the IME and PSP seem to be largely ignored by most computing companies we’d otherwise expect to care about an option like this. It’s certainly still an area of concern considering how much power the IME and PSP are given over their host computers, and we have seen even mainline manufacturers sometimes offer systems with the IME disabled. The only other options to solve this problem are based around specific motherboards for 8th and 9th generation Intel desktops, or you can go way back to hardware from 2008 and install libreboot to eliminate, rather than disable, the IME.

Source: Disabling Intel’s Backdoors On Modern Laptops | Hackaday

Italy finds decently good out to really stupid ban: Demands OpenAI Allow ChatGPT User Corrections After Ban

In a news announcement on Wednesday, the Italian Data Protection Authority, known as the Garante, stressed that OpenAI needed to be more transparent about its data collection processes and inform users about their data rights with regards to the generative AI. These rights include allowing users and non-users of ChatGPT to object to having their data processed by OpenAI and letting them correct false or inaccurate information about them generated by ChatGPT, similar to rights related to other technologies guaranteed by Europe’s General Data Protection Regulation, or GDPR, laws.

Other measures required by the Garante include a public notice on OpenAI’s website “describing the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects.” The regulator will also require OpenAI to immediately implement an age gating system for ChatGPT and submit a plan to implement an age verification system by May 31.

The Italian regulator said OpenAI had until April 30 to implement the measures it’s asking for.

[…]

Source: Italy Demands OpenAI Allow ChatGPT User Corrections After Ban

Allowing users to correct is in principle a Good Idea, but then you get Wikipedia-type battles over who is the arbiter of truth. Of course, no one system will ever be 100% truthful or accurate, so banning it for this is just stupid. No age gatekeeper works either, and neither did the ban – people can circumvent these very, very easily. So Italy needs some sort of concession to get out of the hole it’s dug itself into, and this is at least a promising start.

Scientists unveil new and improved ‘skinny donut’ black hole image using ML algorithm

The 2019 release of the first image of a black hole was hailed as a significant scientific achievement. But truth be told, it was a bit blurry – or, as one astrophysicist involved in the effort called it, a “fuzzy orange donut.”

Scientists on Thursday unveiled a new and improved image of this black hole – a behemoth at the center of a nearby galaxy – mining the same data used for the earlier one but improving its resolution by employing image reconstruction algorithms to fill in gaps in the original telescope observations.

[…]

The ring of light – that is, the material being sucked into the voracious object – seen in the new image is about half the width of how it looked in the previous picture. There is also a larger “brightness depression” at the center – basically the donut hole – caused by light and other matter disappearing into the black hole.

The image remains somewhat blurry due to the limitations of the data underpinning it – not quite ready for a Hollywood sci-fi blockbuster, but an advance from the 2019 version.

This supermassive black hole, with a mass 6.5 billion times that of our sun, resides in a galaxy called Messier 87, or M87, about 54 million light-years from Earth. A light year is the distance light travels in a year, 5.9 trillion miles (9.5 trillion km). The galaxy is larger and more luminous than our Milky Way.

[…]

Lia Medeiros of the Institute for Advanced Study in Princeton, New Jersey, is the lead author of the research, published in the Astrophysical Journal Letters.

The study’s four authors are members of the Event Horizon Telescope (EHT) project, the international collaboration begun in 2012 with the goal of directly observing a black hole’s immediate environment. A black hole’s event horizon is the point beyond which anything – stars, planets, gas, dust and all forms of electromagnetic radiation – gets swallowed into oblivion.

Medeiros said she and her colleagues plan to use the same technique to improve upon the image of the only other black hole ever pictured – released last year showing the one inhabiting the Milky Way’s center, called Sagittarius A*, or Sgr A*.

The M87 black hole image stems from data collected by seven radio telescopes at five locations on Earth that essentially create a planet-sized observational dish.

“The EHT is a very sparse array of telescopes. This is something we cannot do anything about because we need to put our telescopes on the tops of mountains and these mountains are few and far apart from each other. Most of the Earth is covered by oceans,” said Georgia Tech astrophysicist and study co-author Dimitrios Psaltis.

“As a result, our telescope array has a lot of ‘holes’ and we need to rely on algorithms that allow us to fill in the missing data,” Psaltis added. “The image we report in the new paper is the most accurate representation of the black hole image that we can obtain with our globe-wide telescope.”

The machine-learning technique they used is called PRIMO, short for “principal-component interferometric modeling.”

“This is the first time we have used machine learning to fill in the gaps where we don’t have data,” Medeiros said. “We use a large data set of high-fidelity simulations as a training set, and find an image that is consistent with the data and also is broadly consistent with our theoretical expectations. The fact that the previous EHT results robustly demonstrated that the image is a ring allows us to assume so in our analysis.”
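
The principal-component idea behind this can be sketched with a toy example. What follows is only a minimal numpy illustration of the general technique, not the EHT’s actual PRIMO pipeline or data: learn a small set of components from simulated images, fit their coefficients to the pixels you did observe, and let the fitted model predict the pixels you didn’t.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Simulations": synthetic images (flattened to vectors) built from a handful
# of hidden patterns, standing in for the high-fidelity training set.
n_train, side, n_patterns = 500, 16, 8
patterns = rng.normal(size=(n_patterns, side * side))
train = (rng.normal(size=(n_train, n_patterns)) @ patterns
         + 0.05 * rng.normal(size=(n_train, side * side)))

# PCA via SVD: learn the principal components of the simulated ensemble.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:n_patterns]              # (n_components, n_pixels)

# "Observation": a new image from the same family, with 70% of pixels missing,
# mimicking the sparse coverage of a telescope array.
true_image = rng.normal(size=n_patterns) @ patterns
observed = rng.random(side * side) > 0.7

# Fit component coefficients using only the observed pixels (least squares)...
A = components.T[observed]                # (n_observed, n_components)
b = (true_image - mean)[observed]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# ...then the fitted model predicts every pixel, including the unobserved ones.
reconstruction = mean + components.T @ coeffs
rmse = np.sqrt(np.mean((reconstruction - true_image)[~observed] ** 2))
print(f"RMSE on pixels the 'telescope' never saw: {rmse:.3f}")
```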

Source: Scientists unveil new and improved ‘skinny donut’ black hole image | Reuters

Scientists create structural paint that keeps surfaces underneath cool, doesn’t fade, is extremely light and contains no toxins

[…]

Debashis Chanda, a nanoscience researcher with the University of Central Florida, and his team have created a way to mimic nature’s ability to reflect light and create beautifully vivid color without absorbing any heat like traditional pigments do.

Chanda’s research, published in the journal Science Advances, explains and explores structural color and how people could use it to live cooler in a rapidly warming world.

Structural colors are created not from traditional pigmentation but from the arrangement of colorless materials to reflect light in certain ways. This process is how rainbows are made after it rains and how suncatchers bend light to create dazzling displays of color.

[…]

One driver for the researchers: A desire to avoid toxic materials

Traditionally, creating these vivid colors in paint has meant using synthetic materials like heavy metals.

“We use a lot of artificially synthesized organic molecules, lots of metal,” Chanda told NPR. “Think about your deep blues, you need cobalt, a deep red needs cadmium. They are toxic. We are polluting our nature and our whole habitat by using this kind of paint. So one of the major motivations for us was to create a color based on non-toxic material.”

So why can’t we simply use ground-up peacock feathers to recreate its vivid greens, blues and golds? It’s because they have no pigment. Some of the brightest colors in nature aren’t pigmented at all, peacock feathers included.

These bright, beautiful colors are achieved by the bending and reflection of light: the way the structure of a wing, a feather or other material reflects light back at the viewer. The material doesn’t absorb the light; it beams it back out in the form of a visible color, and this is where things get interesting.
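
For a rough sense of how structure alone, with no pigment, can select a color, here is a toy calculation using the textbook thin-film interference formula. This is only an illustration of the general principle, not the plasmonic structure described in the paper, and the thickness and refractive indices are invented example values.

```python
import numpy as np

# Toy illustration of structural color: a single transparent film on a
# substrate reflects some wavelengths much more strongly than others purely
# because of its thickness, with no pigment involved.
n_air, n_film, n_sub = 1.0, 1.5, 1.2   # refractive indices (assumed values)
thickness_nm = 250.0                    # film thickness (assumed value)

wavelengths = np.linspace(380, 780, 200)  # visible range, in nm

# Fresnel amplitude coefficients at the two interfaces (normal incidence).
r12 = (n_air - n_film) / (n_air + n_film)
r23 = (n_film - n_sub) / (n_film + n_sub)

# Phase picked up crossing the film once; the round trip contributes 2*delta.
delta = 2 * np.pi * n_film * thickness_nm / wavelengths

# Airy formula for the combined reflection, then intensity reflectance.
r = (r12 + r23 * np.exp(2j * delta)) / (1 + r12 * r23 * np.exp(2j * delta))
reflectance = np.abs(r) ** 2

peak = wavelengths[np.argmax(reflectance)]
print(f"Strongest reflection near {peak:.0f} nm for a {thickness_nm:.0f} nm film")
```

Changing only the film thickness in this sketch shifts the reflection peak across the visible spectrum, which is the essence of color from structure rather than pigment.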

Chanda’s research began here, with his fascination with natural colors and how they are achieved in nature.

Beyond just the beautiful arrays of color that structure can create, Chanda also found that unlike pigments, structural paint does not absorb any infrared light.

Infrared light is the reason black cars get hot on sunny days and asphalt is hot to the touch in summer. Infrared light is absorbed as heat energy into these surfaces — the darker the color, the more the surface colored with it can absorb. That’s why people are advised to wear lighter colors in hotter climates and why many buildings are painted bright whites and beiges.

Chanda found that structural color paint does not absorb any heat. It reflects all infrared light back out. This means that in a rapidly warming climate, this paint could help communities keep cool.

Chanda and his team tested the impact this paint had on the temperature of buildings covered in structural paint versus commercial paints and they found that structural paint kept surfaces 20 to 30 degrees cooler.

This, Chanda said, is a massive new tool that could be used to fight rising temperatures caused by global warming while still allowing us to have a bright and colorful world.

Unlike white and black cars, structural paint’s ability to reflect heat isn’t determined by how dark the color is. Blue, black or purple structural paints reflect just as much heat as bright whites or beige. This opens the door for more colorful, cooler architecture and design without having to worry about the heat.

A little paint goes a long way

It’s not just cleaner, Chanda said. Structural paint weighs much less than pigmented paint and doesn’t fade over time like traditional pigments.

“A raisin’s worth of structural paint is enough to cover the front and back of a door,” he said.

Unlike pigments which rely on layers of pigment to achieve depth of color, structural paint only requires one thin layer of particles to fully cover a surface in color. This means that structural paint could be a boon for aerospace engineers who rely on the lowest weight possible to achieve higher fuel efficiency.

[…]

Source: Scientists create an eco-friendly paint : NPR

Google debuts deps.dev API to check security status of dependencies

[…]

On Tuesday, Google – which has answered the government’s call to secure the software supply chain with initiatives like the Open Source Vulnerabilities (OSV) database and Software Bills of Materials (SBOMs) – announced an open source software vetting service, its deps.dev API.

The API, accessible in a more limited form via the web, aims to provide software developers with access to security metadata on millions of code libraries, packages, modules, and crates.

By security metadata, Google means things like: how well maintained a library is, who maintains it, what vulnerabilities are known to be present in it and whether they have been fixed, whether it’s had a code review, whether it’s using old or new versions of other dependencies, what license covers it, and so on. For example, see the info on the Go package cmdr and the Rust Cargo crate crossbeam-utils.

The API also provides at least two capabilities not available through the web interface: the ability to query the hash of a file’s contents (to find all package versions with the file) and dependency graphs based on actual installation rather than just declarations.

“Software supply chain attacks are increasingly common and harmful, with high profile incidents such as Log4Shell, Codecov, and the recent 3CX hack,” said Jesper Sarnesjo and Nicky Ringland, with Google’s open source security team, in a blog post. “The overwhelming complexity of the software ecosystem causes trouble for even the most diligent and well-resourced developers.”

[…]

The deps.dev API indexes data from various software package registries, including Rust’s Cargo, Go, Maven, JavaScript’s npm, and Python’s PyPI, and combines that with data gathered from GitHub, GitLab, and Bitbucket, as well as security advisories from OSV. The idea is to make metadata about software packages more accessible, to promote more informed security decisions.

Developers can query the API to look up a dependency’s records, with the returned data available programmatically to CI/CD systems, IDE plugins that present the information, build tools and policy engines, and other development tools.
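
As a sketch of what such a query might look like, the snippet below fetches metadata for one npm package version. The endpoint path and the response field names are assumptions based on my reading of the public deps.dev documentation, so verify them at docs.deps.dev before relying on this.

```python
import json
import urllib.request

# Hypothetical example: look up metadata for a specific npm package version.
# The v3alpha path and the "licenses"/"advisoryKeys" fields are assumptions;
# check https://docs.deps.dev for the current API shape.
url = "https://api.deps.dev/v3alpha/systems/npm/packages/lodash/versions/4.17.21"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# The sort of thing a CI/CD policy step might check before allowing a dependency:
print("licenses:", data.get("licenses"))
for advisory in data.get("advisoryKeys", []):
    print("security advisory:", advisory.get("id"))
```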

Sarnesjo and Ringland say they hope the API helps developers understand dependency data better so that they can respond to – or prevent – attacks that try to compromise the software supply chain.

There are already hundreds of software supply chain tools and projects, but the more the merrier. Judging by the average life expectancy of Google services, the deps.dev API should be available for at least four years.

Along similar lines, Google Cloud on Wednesday nudged its Assured Open Source Software (Assured OSS) service for Java and Python into general availability.

[…]

Source: Google debuts API to check security status of dependencies • The Register

Mitsubishi 3000GT Car Phone Modded To Work Like an iPhone, link to full 3-year journey included

Software engineer Jeff Lau, posting under the username UselessPickles, showed off the restored car phone in a video uploaded to YouTube. The Mitsubishi came from the factory with an optional “DiamondTel” handset and hands-free system, which was rendered inoperable by the discontinuation of analog “AMPS” cell service in the U.S. in 2008. (The 3G shutdown bricked a ton of newer cars’ connectivity features, too.)

After three years of work, Lau restored the device’s functionality using a custom Bluetooth adapter. Lau engineered the adapter to piggyback between the stock phone transceiver and hands-free control unit located under the trunk carpet. That let Lau tap into modern cell networks with his 1993 car phone—but he didn’t stop there.

Paired with a smartphone, the stock handset displays the name of the paired device and the signal strength of the smartphone’s network. It gets better: The car’s hands-free microphone feeds the smartphone voice commands (to Apple’s Siri in this case). It’s pretty much all the functionality of a 2023 hands-free system but without the distraction of a touchscreen.

Obviously, this isn’t about to become a widespread resto-mod trend. The lengthy dev time, the low take rate of car phones in their day, and the uniqueness of individual cars’ systems mean we’re unlikely to see off-the-shelf car phone restoration kits anytime soon. But the fact that bringing car phones back is possible will hopefully inspire someone else out there to resuscitate theirs. Maybe even one of those retro Chrysler VisorPhones will ride again one day. Or ring, I should say.

Source: Clever Collector Mods Mitsubishi 3000GT Car Phone To Work Like an iPhone

The whole process is laid out in this forum thread, starting on 23/12/21: Making a Bluetooth adapter for a Car Phone from the 90’s

Streaming Services Urged To Clamp Down on AI-Generated Music by Record Labels

Universal Music Group has told streaming platforms, including Spotify and Apple, to block artificial intelligence services from scraping melodies and lyrics from their copyrighted songs, according to emails viewed by the Financial Times. From the report: UMG, which controls about a third of the global music market, has become increasingly concerned about AI bots using their songs to train themselves to churn out music that sounds like popular artists. AI-generated songs have been popping up on streaming services and UMG has been sending takedown requests “left and right,” said a person familiar with the matter. The company is asking streaming companies to cut off access to their music catalogue for developers using it to train AI technology. “We will not hesitate to take steps to protect our rights and those of our artists,” UMG wrote to online platforms in March, in emails viewed by the FT. “This next generation of technology poses significant issues,” said a person close to the situation. “Much of [generative AI] is trained on popular music. You could say: compose a song that has the lyrics to be like Taylor Swift, but the vocals to be in the style of Bruno Mars, but I want the theme to be more Harry Styles. The output you get is due to the fact the AI has been trained on those artists’ intellectual property.”

Source: Streaming Services Urged To Clamp Down on AI-Generated Music – Slashdot

Basically they don’t want AIs listening to their music as inspiration for making their own music, which is exactly what humans do. So I’m very curious what legal basis their takedown requests would stand on.

New Map of Dark Matter Supports Einstein’s Theory of Gravity

Scientists using data from the Atacama Cosmology Telescope in Chile have made a detailed map of dark matter’s distribution across a quarter of the sky.

The map shows the distribution of mass extending essentially as far back in time as we can see; it uses the cosmic microwave background as a backdrop for the dark matter portrait. The team’s research will be presented at the Future Science with CMB x LSS conference in Kyoto, Japan.

“We have mapped the invisible dark matter across the sky to the largest distances, and clearly see features of this invisible world that are hundreds of millions of light-years across,” said Blake Sherwin, a cosmologist at the University of Cambridge, in a Princeton University release. “It looks just as our theories predict.”

[…]

the only way dark matter is observed is indirectly, through its gravitational effects at large scales. Enter the Atacama Cosmology Telescope, which more precisely dated the universe in 2021. The telescope’s map builds on a map of the universe’s matter released earlier this year, which was produced using data from the Dark Energy Survey and the South Pole Telescope. That map upheld previous estimations of the ratio of ordinary matter to dark matter and found that the distribution of matter was less clumpy than previously thought.

The new map homes in on a lingering concern of Einstein’s general relativity: how the most massive objects in the universe, like supermassive black holes, bend light from more distant sources. One such source is the cosmic microwave background, the most ancient detectable light, which radiates from the aftermath of the Big Bang.

The researchers effectively used the background as a backlight, to illuminate regions of greater density in the universe.

“It’s a bit like silhouetting, but instead of just having black in the silhouette, you have texture and lumps of dark matter, as if the light were streaming through a fabric curtain that had lots of knots and bumps in it,” said Suzanne Staggs, director of the Atacama Cosmology Telescope and a physicist at Princeton, in the university release.

The cosmic microwave background as seen by the European Space Agency’s Planck observatory. (Image: ESA)

“The famous blue and yellow CMB image is a snapshot of what the universe was like in a single epoch, about 13 billion years ago, and now this is giving us the information about all the epochs since,” Staggs added.

The recent analysis suggests that the dark matter was lumpy enough to fit with the standard model of cosmology, which relies on Einstein’s theory of gravity.

Eric Baxter, an astronomer at the University of Hawai’i and a co-author of the research that resulted in the February dark matter map, told Gizmodo in an email that his team’s map was sensitive to low redshifts (meaning close by, in the more recent universe). On the other hand, the newer map focuses exclusively on the lensing of the cosmic microwave background, meaning higher redshifts and a more sweeping scale.

“Said another way, our measurements and the new ACT measurements are probing somewhat different (and complementary) aspects of the matter distribution,” Baxter said. “Thus, rather than contradicting our previous results, the new results may be providing an important new piece of the puzzle about possible discrepancies with our standard cosmological model.”

“Perhaps the Universe is less lumpy than expected on small scales and at recent times (i.e. the regime probed by our analysis), but is consistent with expectations at earlier times and at larger scales,” Baxter added.

New instruments should help tease out the matter distribution of the universe. An upcoming telescope at the Simons Observatory in the Atacama is set to begin operations in 2024 and will map the sky nearly 10 times faster than the Atacama Cosmology Telescope, according to the Princeton release.

[…]

Source: New Map of Dark Matter Validates Einstein’s Theory of Gravity

Physicists Discover that Gravity Can Create Light

Researchers have discovered that in the exotic conditions of the early universe, waves of gravity may have shaken space-time so hard that they spontaneously created radiation.

[…]

a team of researchers has discovered that an exotic form of parametric resonance may even have occurred in the extremely early universe.

Perhaps the most dramatic event to occur in the entire history of the universe was inflation. This is a hypothetical event that took place when our universe was less than a second old. During inflation our cosmos swelled to dramatic proportions, becoming many orders of magnitude larger than it was before. The end of inflation was a very messy business, as gravitational waves sloshed back and forth throughout the cosmos.

Normally gravitational waves are exceedingly weak. We have to build detectors that are capable of measuring distances less than the width of an atomic nucleus to find gravitational waves passing through the Earth. But researchers have pointed out that in the extremely early universe these gravitational waves may have become very strong.

And they may even have formed standing wave patterns, in which the gravitational waves weren’t traveling but instead stood still, almost frozen in place throughout the cosmos. Since gravitational waves are literally waves of gravity, the places where the waves are strongest represent an exceptional concentration of gravitational energy.

The researchers found that this could have major consequences for the electromagnetic field existing in the early universe at that time. The regions of intense gravity may have excited the electromagnetic field enough to release some of its energy in the form of radiation, creating light.

This result gives rise to an entirely new phenomenon: the production of light from gravity alone. There’s no situation in the present-day universe that could allow this process to happen, but the researchers have shown that the early universe was a far stranger place than we could possibly imagine.

Source: Physicists Discover that Gravity Can Create Light – Universe Today

EVE Online player uses CEO vote to pull off the biggest heist in the game’s history

Back in 2017, we learned about the biggest heist in EVE Online history: A year-long inside job that ultimately made off with an estimated 1.5 trillion ISK, worth around $10,000 in real money. But now another EVE player claims to have pulled off a heist worth significantly more than that—and with significantly less work involved.

The 2017 heist, like so many of EVE’s most interesting stories, relied primarily on social engineering: Investing months or years of time into grooming a target before pulling the rug out from beneath them. But redditor Flam_Hill said this job was less bloody: Instead of betrayal, this theft was dependent upon learning and exploiting the “shares mechanic” in EVE Online in order to leverage a takeover of Event Horizon Expeditionaries, a 299-member corporation that was part of the Pandemic Horde alliance.

Using a “clean account with a character with a little history,” Flam_Hill and an unnamed partner applied for membership in the EHEXP corporation. After the account was accepted, Flam_Hill transferred enough of his shares in the corporation to the infiltrator to enable a call for a vote for a new CEO. The conspirators both voted yes, while nobody else in the corporation voted at all.

This was vital, because after 72 hours the two “yes” votes carried the day. The infiltrating agent was very suddenly made CEO, a position that was in turn used to make Flam_Hill an Event Horizon Expeditionaries director, at which point they removed all the other corporate directors and set about emptying the coffers.

They stripped 130 billion ISK from the corporate wallet, but that was only a small part of the haul: Counting all stolen assets, including multiple large ships, Flam_Hill estimated the total value of the heist at 2.23 trillion ISK, which works out to more than $22,300 in real money. ISK can’t be legally cashed out of EVE Online, but it can be used to buy Plex, an in-game currency used to upgrade accounts, purchase virtual goods, and activate other services.

[…]

The one aspect of the story that some redditors took issue with is the origin of the 1,000 shares in Event Horizon Expeditionaries that made this theft possible in the first place.

[…]

It all comes down to EVE’s corporation voting system: Any member of a corporation holding more than 5% of the total shares can start a vote, and—this is what it really comes down to—“the option that gains more than 50% of cast votes wins the vote.” This is why the inattentiveness of EHEXP membership was so vital: Flam_Hill and his partner were the only ones to vote “yes,” so they had 100% of the cast votes and were thus able to seize power.
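As a toy sketch of why two ballots were enough under those thresholds (more than 5% of issued shares to open a vote, more than 50% of votes cast to win), consider the following; it is an illustration of the arithmetic only, not CCP’s actual implementation, and the share counts are made up.

```python
# Toy illustration of the corporate voting rule described above, not CCP's code.
# The key point: the 50% threshold applies to votes *cast*, so two "yes"
# ballots out of two cast can carry a 299-member corporation that never votes.

def can_start_vote(caller_shares: int, total_shares: int) -> bool:
    """A member holding more than 5% of all issued shares can open a vote."""
    return caller_shares > 0.05 * total_shares

def vote_passes(yes_votes: int, votes_cast: int) -> bool:
    """An option wins with more than 50% of the votes actually cast;
    members who never vote don't count toward the denominator."""
    return votes_cast > 0 and yes_votes > 0.5 * votes_cast

# Made-up numbers for illustration:
print(can_start_vote(caller_shares=100, total_shares=1000))  # True: 10% > 5%
print(vote_passes(yes_votes=2, votes_cast=2))                # True: 100% of votes cast
```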

[…]

EVE Online developer CCP Games eliminated any doubt by confirming that the heist did in fact take place, although it declined to comment on the value of the theft.

In the end, it turned out that the “former CEO” theory was correct. Speaking to PC Gamer, the mastermind of the heist, known in EVE as Sienna d’Orien—real name Dave—confirmed that he was in fact the founder and former chief of Event Horizon Expeditionaries, which is how he had the shares in the company that enabled the takeover. He quit EVE in 2018, citing burnout and other priorities, but returned in 2022 to find EHEXP “a shell of its former self.”

After forming a new group, Dave reached out to the corporation to inquire about getting some of his old assets back, but was ignored. His partner in the heist, Packratt, then brought up the shares mechanic, and they went to work. They were aided by a third friend and former EHEXP member, Highlander McLeod, who handled some of the research in order to keep d’Orien’s name out of it—although McLeod was kept in the dark about the job until it was over, in order to ensure operational security.

[…]

They managed to pull the job off with virtually complete anonymity, but Dave said he’s stepping out of the shadows because “it will get out eventually” anyway—and it probably doesn’t hurt that he can now bask in the glory of the moment.

[…]

As for Dave, who’s now playing “in a new corp with old mates,” he acknowledged that the heist could complicate his in-game life somewhat: He’ll be an interstellar folk hero to some (people love a good EVE heist) but no doubt a villain—and a target—in the eyes of others.

[…]

Source: EVE Online player uses obscure rule to pull off the biggest heist in the game’s history | PC Gamer

Google’s free Assured Open Source Software service hits GA

About a year ago, Google announced its Assured Open Source Software (Assured OSS) service, a service that helps developers defend against supply chain security attacks by regularly scanning and analyzing for vulnerabilities some of the world’s most popular software libraries. Today, Google is launching Assured OSS into general availability with support for well over a thousand Java and Python packages — and while Google didn’t initially disclose pricing when it first announced the service, the company has now revealed that it will be available for free.

Software development has long depended on third-party libraries (which are often maintained by only a single developer), but it wasn’t until the industry got hit with a number of high-profile exploits that everyone (including the White House) perked up and started taking software supply chain security seriously. Now, you can’t attend an open source conference without hearing about Software Bills of Materials (SBOMs), artifact registries and similar topics.

[…]

Google promises that it will constantly keep these libraries up to date (without creating forks) and continuously scan for known vulnerabilities, do fuzz tests to discover new ones and then fix these issues and contribute these fixes back upstream. The company notes that when it first launched the service with around 250 Java libraries, it was responsible for discovering 48% of the new CVEs for these libraries and subsequently addressing them.

[…]

By partnering with a trusted supplier, organizations can mitigate these risks and ensure the integrity of their software supply chain to better protect their business applications.”

Developers and organizations that want to use the new service can sign up here and then integrate Assured OSS into their existing development pipeline.
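For the Python side, that integration largely amounts to pointing pip at the curated package index instead of (or alongside) the public PyPI. The command below is a hypothetical sketch: the repository URL is only a placeholder, and the authentication step Google requires after sign-up is omitted.

```
# Hypothetical: the real Assured OSS repository URL and the required credentials
# come from Google's onboarding docs after sign-up; this URL is a placeholder.
pip install --index-url https://<your-assured-oss-python-repo>/simple/ requests
```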

Source: Google’s free Assured Open Source Software service hits GA | TechCrunch


Google announces GUAC open source project on software supply chains

Google unveiled a new open source security project on Thursday centered around software supply chain management.

Given the acronym GUAC – which stands for Graph for Understanding Artifact Composition – the project is focused on creating data sets about a piece of software’s build, security, and dependencies.

Google worked with Purdue University, Citibank and supply chain security company Kusari on GUAC, a free tool built to bring together many different sources of software security metadata. Google has also assembled a group of technical advisory members to help with the project — including IBM, Intel, Anchore and more.

Google’s Brandon Lum, Mihai Maruseac, and Isaac Hepworth pitched the effort as one way to help address the explosion in software supply chain attacks — most notably the widespread Log4j vulnerability that is still leaving organizations across the world exposed to attacks.

“GUAC addresses a need created by the burgeoning efforts across the ecosystem to generate software build, security, and dependency metadata,” they wrote in a blog post. “GUAC is meant to democratize the availability of this security information by making it freely accessible and useful for every organization, not just those with enterprise-scale security and IT funding.”

They noted that U.S. President Joe Biden issued an executive order last year that said all federal government agencies must send a Software Bill of Materials (SBOM) to Allan Friedman, the director of Cybersecurity Initiatives at the National Telecommunications and Information Administration (NTIA).

[…]

While SBOMs are becoming increasingly common thanks to the work of several tech industry groups like OpenSSF, there have been a number of complaints, one of which centers on the difficulty of sorting through troves of metadata, some of which is not useful.

Maruseac, Lum and Hepworth explained that it is difficult to combine and collate the kind of information found in many SBOMs.

“The documents are scattered across different databases and producers, are attached to different ecosystem entities, and cannot be easily aggregated to answer higher-level questions about an organization’s software assets,” they said.

Google shared a proof of concept of the project, which allows users to search data sets of software metadata.

The three explained that GUAC effectively aggregates software security metadata into a database and makes it searchable.

They used the example of a CISO or compliance officer who needs to understand the “blast radius” of a vulnerability. GUAC would allow them to “trace the relationship between a component and everything else in the portfolio.”

Google says the tool will allow anyone to figure out the most used critical components in their software supply chain ecosystem, the security weak points and any risky dependencies.
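GUAC’s own query interface is the real mechanism for answering that kind of question, but the “blast radius” idea itself boils down to a reverse-dependency walk over the aggregated graph. The toy Python sketch below illustrates the concept on a hand-written graph; it does not use GUAC’s actual API or data model.

```python
# Conceptual sketch of a "blast radius" query: given a dependency graph
# (edges point from a package to its direct dependencies), find everything
# that transitively depends on a vulnerable component. Illustration only;
# not GUAC's data model or query API, and the package names are made up.
from collections import defaultdict, deque

deps = {
    "web-frontend": ["http-lib", "log4j"],
    "billing-service": ["http-lib"],
    "http-lib": ["log4j"],
    "log4j": [],
}

# Invert the edges so we can walk from a component to its dependents.
dependents = defaultdict(set)
for pkg, ds in deps.items():
    for d in ds:
        dependents[d].add(pkg)

def blast_radius(component: str) -> set:
    """Return every package that directly or transitively depends on `component`."""
    seen, queue = set(), deque([component])
    while queue:
        for parent in dependents[queue.popleft()]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(blast_radius("log4j"))  # {'http-lib', 'web-frontend', 'billing-service'} (order may vary)
```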

[…]

Source: Google announces GUAC open source project on software supply chains

US starts looking at AI regulation, Seeks Public Input to Boost AI Accountability

Today, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) launched a request for comment (RFC) to advance its efforts to ensure artificial intelligence (AI) systems work as claimed – and without causing harm. The insights gathered through this RFC will inform the Biden Administration’s ongoing work to ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities.

[…]

NTIA’s “AI Accountability Policy Request for Comment” seeks feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems that they work as claimed. Much as financial audits create trust in the accuracy of a business’ financial statements, so for AI, such mechanisms can help provide assurance that an AI system is trustworthy in that it does what it is intended to do without adverse consequences.

[…]

President Biden has been clear that when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails to protect Americans’ rights and safety. The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights provides an important framework to guide the design, development, and deployment of AI and other automated systems. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework serves as a voluntary tool that organizations can use to manage risks posed by AI systems.

Comments will be due 60 days from publication of the RFC in the Federal Register.

[…]

Source: NTIA Seeks Public Input to Boost AI Accountability | National Telecommunications and Information Administration