Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

A startup says it has begun releasing sulfur particles into Earth’s atmosphere, in a controversial attempt to combat climate change by deflecting sunlight. Make Sunsets, a company that sells carbon offset “cooling credits” for $10 each, is banking on solar geoengineering to cool down the planet and fill its coffers. The startup claims it has already released two test balloons, each filled with about 10 grams of sulfur particles and intended for the stratosphere, according to the company’s website; the launches were first reported by MIT Technology Review.

The concept of solar geoengineering is simple: Add reflective particles to the upper atmosphere to reduce the amount of sunlight that penetrates from space, thereby cooling Earth. It’s an idea inspired by the atmospheric side effects of major volcanic eruptions, which have led to drastic, temporary climate shifts multiple times throughout history, including the notorious “year without a summer” of 1816.

Yet effective and safe implementation of the idea is much less simple. Scientists and engineers have been studying solar geoengineering as a potential climate change remedy for more than 50 years. But almost nobody has actually conducted real-world experiments because of the associated risks, like rapid changes in our planet’s precipitation patterns, damage to the ozone layer, and significant geopolitical ramifications.

[…]

[I]f and when we get enough sulfur into the atmosphere to meaningfully cool Earth, we’d have to keep adding new particles indefinitely to avoid entering an era of climate change about four to six times worse than what we’re currently experiencing, according to one 2018 study. Sulfur aerosols don’t stick around very long. Their lifespan in the stratosphere is somewhere between a few days and a couple of years, depending on particle size and other factors.

[…]

Rogue agents independently deciding to impose geoengineering on the rest of us has been a concern for as long as the thought of intentionally manipulating the atmosphere has been around. The Pentagon even has dedicated research teams working on methods to detect and combat such clandestine attempts. But effectively defending against solar geoengineering is much more difficult than just doing it.

In Iseman’s rudimentary first trials, he says he released two weather balloons full of helium and sulfur aerosols somewhere in Baja California, Mexico. The founder told MIT Technology Review that the balloons rose toward the sky but, beyond that, he doesn’t know what happened to them, as the balloons lacked tracking equipment. Maybe they made it to the stratosphere and released their payload, maybe they didn’t.

[…]

Iseman and Make Sunsets claim that a single gram of sulfur aerosols counteracts the warming effects of one ton of CO2. But there is no clear scientific basis for such an assertion, geoengineering researcher Shuchi Talati told the outlet. And so the $10 “cooling credits” the company is hawking are likely bunk (along with most carbon credit/offset schemes).

Even if the balloons made it to the stratosphere, the small amount of sulfur released wouldn’t be enough to trigger significant environmental effects, geoengineering researcher David Keith told MIT Technology Review.

[…]

The solution to climate change is almost certainly not a single maverick “disrupting” the composition of Earth’s stratosphere. But that hasn’t stopped Make Sunsets from reportedly raising nearly $750,000 in funds from venture capital firms. And for just ~$29,250,000 more per year, the company claims it can completely offset current warming. It’s not a bet we recommend taking.
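For scale, here is the arithmetic implied by the company’s own numbers: the $10 “cooling credit” price, the disputed one-gram-per-ton-of-CO2 claim above, and the ~$29,250,000-per-year figure. The sketch below takes those figures at face value and assumes, purely for illustration, that one credit corresponds to one gram of sulfur; that mapping is not stated outright in the article.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
# Assumption (not stated outright in the article): one $10 credit = 1 gram of sulfur.
PRICE_PER_CREDIT_USD = 10
GRAMS_PER_CREDIT = 1             # assumed for illustration
CLAIMED_TONS_CO2_PER_GRAM = 1    # Make Sunsets' claim, which researchers dispute

annual_spend_usd = 29_250_000    # the "~$29,250,000 more per year" figure

credits = annual_spend_usd / PRICE_PER_CREDIT_USD
sulfur_kg = credits * GRAMS_PER_CREDIT / 1_000
claimed_offset_tons = credits * CLAIMED_TONS_CO2_PER_GRAM

print(f"{credits:,.0f} credits -> {sulfur_kg:,.0f} kg of sulfur per year")
print(f"claimed offset: {claimed_offset_tons:,.0f} tons of CO2-equivalent warming")
```

In other words, the pitch rests on roughly three tonnes of sulfur a year doing something that the researchers quoted above say has no clear scientific basis.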

Source: Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

University students are using AI to write essays. Teachers are learning how to embrace that

As word of students using AI to automatically complete essays continues to spread, some lecturers are beginning to rethink how they should teach their pupils to write.

Writing is a difficult task to do well. The best novelists and poets write furiously, dedicating their lives to mastering their craft. The creative process of stringing together words to communicate thoughts is often viewed as something complex, mysterious, and unmistakably human. No wonder people are fascinated by machines that can write too.

[…]

Although AI can generate text with perfect spelling, great grammar and syntax, the content often isn’t that good beyond a few paragraphs. The writing becomes less coherent over time with no logical train of thought to follow. Language models fail to get their facts right – meaning quotes, dates, and ideas are likely false. Students will have to inspect the writing closely and correct mistakes for their work to be convincing.

Prof: AI-assisted essays ‘not good’

Scott Graham, associate professor in the Department of Rhetoric & Writing at the University of Texas at Austin, tasked his students with writing a 2,200-word essay about a campus-wide issue using AI. Students were free to lightly edit and format their work, with the only rule being that most of the essay had to be automatically generated by software.

In an opinion article on Inside Higher Ed, Graham said the AI-assisted essays were “not good,” noting that the best of the bunch would have earned a C or C-minus grade. To score higher, students would have had to rewrite more of the essay in their own words, or craft increasingly narrow and specific prompts to get back more useful content.

“You’re not going to be able to push a button or submit a short prompt and generate a ready-to-go essay,” he told The Register.

[…]

“I think if students can do well with AI writing, it’s not actually all that different from them doing well with their own writing. The main skills I teach and assess mostly happen after the initial drafting,” he said.

“I think that’s where people become really talented writers; it’s in the revision and the editing process. So I’m optimistic about [AI] because I think that it will provide a framework for us to be able to teach that revision and editing better.

“Some students have a lot of trouble sometimes generating that first draft. If all the effort goes into getting them to generate that first draft, and then they hit the deadline, that’s what they will submit. They don’t get a chance to revise, they don’t get a chance to edit. If we can use those systems to speed write the first draft, it might really be helpful,” he opined.

[…]

Listicles, informal blog posts, or news articles will be easier to imitate than niche academic papers or literary masterpieces. Teachers will need to be thoughtful about the essay questions they set and make sure students’ knowledge is really being tested, if they don’t want them to cut corners.

[…]

“The onus now is on writing teachers to figure out how to get to the same kinds of goals that we’ve always had about using writing to learn. That includes students engaging with ideas, teaching them how to formulate thoughts, how to communicate clearly or creatively. I think all of those things can be done with AI systems, but they’ll be done differently.”

The line between using AI as a collaborative tool and using it to cheat, however, is blurry. None of the academics teaching writing who spoke to The Register thought students should be banned from using AI software. “Writing is fundamentally shaped by technology,” Vee said.

“Students use spell check and grammar check. If I got a paper where a student didn’t use these, it stands out. But it used to be, 50 years ago, writing teachers would complain that students didn’t know how to spell so they would teach spelling. Now they don’t.”

Most teachers, however, told us they would support regulating the use of AI-writing software in education.

[…]

Mills was particularly concerned about AI reducing the need for people to think for themselves, considering language models carry forward biases in their training data. “Companies have decided what to feed it and we don’t know. Now, they are being used to generate all sorts of things from novels to academic papers, and they could influence our thoughts or even modify them. That is an immense power, and it’s very dangerous.”

Lauren Goodlad, professor of English and Comparative Literature at Rutgers University, agreed. If they parrot what AI comes up with, students may end up more likely to associate Muslims with terrorism or mention conspiracy theories, for example.

[…]

“As teachers, we are experimenting, not panicking,” Monroe told The Register.

“We want to empower our students as writers and thinkers. AI will play a role… This is a time of exciting and frenzied development, but educators move more slowly and deliberately… AI will be able to assist writers at every stage, but students and teachers will need tools that are thoughtfully calibrated.”

[…]

 

Source: University students are using AI to write essays. Now what? • The Register

FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services

For the last thirteen years the Free Software Foundation has published its Ethical Tech Giving Guide. But what’s interesting is this year’s guide also tags companies and products with negative recommendations to “stay away from.”

Stay away from: iPhones
It’s not just Siri that’s creepy: all Apple devices contain software that’s hostile to users. Although they claim to be concerned about user privacy, they don’t hesitate to put their users under surveillance.

Apple prevents you from installing third-party free software on your own phone, and they use this control to censor apps that compete with or subvert Apple’s profits.

Apple has a history of exploiting their absolute control over their users to silence political activists and help governments spy on millions of users.

Stay away from: M1 MacBook and MacBook Pro
macOS is proprietary software that restricts its users’ freedoms.

In November 2020, macOS was caught alerting Apple each time a user opens an app. Even though Apple is making changes to the service, it just goes to show how far they will go until there is an outcry.

Comes crawling with spyware that rats you out to advertisers.

Stay away from: Amazon
Amazon is one of the most notorious DRM offenders. They use this Orwellian control over their devices and services to spy on users and keep them trapped in their walled garden.

Be aware that Amazon isn’t the only peddler of ebook DRM. Disturbingly, it’s enthusiastically supported by most of the big publishing houses.

Read more about the dangers of DRM through our Defective by Design campaign.

Stay away from: Spotify, Apple Music, and all other major streaming services
In addition to streaming music encumbered by DRM, people who want to use Spotify are required to install additional proprietary software. Even Spotify’s client for GNU/Linux relies on proprietary software.

Apple Music is no better, and places heavy restrictions on the music streamed through the platform.

Stay away from: Netflix
Netflix is continuing its disturbing trend of making onerous DRM the norm for streaming media. That’s why they were a target for last year’s International Day Against DRM (IDAD).

They’re also leveraging their place in the Motion Picture Association of America (MPAA) to advocate for tighter restrictions on users, and drove the effort to embed DRM into the fabric of the Web.

“In your gift giving this year, put freedom first,” their guide begins.

And for a freedom-respecting last-minute gift idea, they suggest giving the gift of an FSF membership (which comes with a code and a printable page “so that you can present your gift as a physical object, if you like.”) The membership is valid for one year and comes with the many benefits of an FSF associate membership, including a USB member card, email forwarding, access to the FSF’s Jitsi Meet videoconferencing server and member forum, discounts in the FSF shop and on ThinkPenguin hardware, and more.

If you are in the United States, your gift is also fully tax-deductible.

Source: FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services – Slashdot

Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – finally. What lawsuits lie in wait?

The version of the iconic character from “Steamboat Willie” will enter the public domain in 2024. But those trying to take advantage could end up in a legal mousetrap. From a report: There is nothing soft and cuddly about the way Disney protects the characters it brings to life. This is a company that once forced a Florida day care center to remove an unauthorized Minnie Mouse mural. In 2006, Disney told a stonemason that carving Winnie the Pooh into a child’s gravestone would violate its copyright. The company pushed so hard for an extension of copyright protections in 1998 that the result was derisively nicknamed the Mickey Mouse Protection Act. For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain.

“Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond? “I’m seeing in Reddit forums and on Twitter where people — creative types — are getting excited about the possibilities, that somehow it’s going to be open season on Mickey,” said Aaron J. Moss, a partner at Greenberg Glusker in Los Angeles who specializes in copyright and trademark law. “But that is a misunderstanding of what is happening with the copyright.” The matter is more complicated than it appears, and those who try to capitalize on the expiring “Steamboat Willie” copyright could easily end up in a legal mousetrap. “The question is where Disney tries to draw the line on enforcement,” Mr. Moss said, “and if courts get involved to draw that line judicially.”

Only one copyright is expiring. It covers the original version of Mickey Mouse as seen in “Steamboat Willie,” an eight-minute short with little plot. This nonspeaking Mickey has a rat-like nose, rudimentary eyes (no pupils) and a long tail. He can be naughty. In one “Steamboat Willie” scene, he torments a cat. In another, he uses a terrified goose as a trombone. Later versions of the character remain protected by copyrights, including the sweeter, rounder Mickey with red shorts and white gloves most familiar to audiences today. They will enter the public domain at different points over the coming decades. “Disney has regularly modernized the character, not necessarily as a program of copyright management, at least initially, but to keep up with the times,” said Jane C. Ginsburg, an authority on intellectual property law who teaches at Columbia University.

Source: Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – Slashdot

How it’s remotely possible that a company is capitalising on a thought someone had around 100 years ago is beyond me.

The LastPass disclosure of leaked password vaults is being torn apart by security experts

Last week, just before Christmas, LastPass dropped a bombshell announcement: as the result of a breach in August, which led to another breach in November, hackers had gotten their hands on users’ password vaults. While the company insists that your login information is still secure, some cybersecurity experts are heavily criticizing its post, saying that it could make people feel more secure than they actually are and pointing out that this is just the latest in a series of incidents that make it hard to trust the password manager.

LastPass’ December 22nd statement was “full of omissions, half-truths and outright lies,” reads a blog post from Wladimir Palant, a security researcher known for helping originally develop Adblock Plus, among other things. Some of his criticisms deal with how the company has framed the incident and how transparent it’s being; he accuses the company of trying to portray the August incident, where LastPass says “some source code and technical information were stolen,” as a separate breach when he says that in reality the company “failed to contain” the breach.

He also highlights LastPass’ admission that the leaked data included “the IP addresses from which customers were accessing the LastPass service,” saying that could let the threat actor “create a complete movement profile” of customers if LastPass was logging every IP address you used with its service.

Another security researcher, Jeremi Gosney, wrote a long post on Mastodon explaining his recommendation to move to another password manager. “LastPass’s claim of ‘zero knowledge’ is a bald-faced lie,” he says, alleging that the company has “about as much knowledge as a password manager can possibly get away with.”

LastPass claims its “zero knowledge” architecture keeps users safe because the company never has access to your master password, which is the thing that hackers would need to unlock the stolen vaults. While Gosney doesn’t dispute that particular point, he does say that the phrase is misleading. “I think most people envision their vault as a sort of encrypted database where the entire file is protected, but no — with LastPass, your vault is a plaintext file and only a few select fields are encrypted.”
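To make that concrete, here is a minimal, hypothetical sketch of a vault record in which only a few fields are encrypted while metadata such as the URL stays readable. The field names and the use of Fernet are illustrative assumptions, not LastPass’s actual format or cryptography.

```python
# Hypothetical illustration of a vault where only selected fields are encrypted.
# Field names and the Fernet scheme are illustrative -- NOT LastPass's real format.
from cryptography.fernet import Fernet

vault_key = Fernet.generate_key()   # stands in for a key derived from the master password
cipher = Fernet(vault_key)

vault_entry = {
    "url": "https://bank.example.com",                     # plaintext, readable by anyone holding the blob
    "last_used": "2022-12-20T09:14:00Z",                   # plaintext metadata
    "username": cipher.encrypt(b"alice@example.com"),      # encrypted field
    "password": cipher.encrypt(b"correct-horse-battery"),  # encrypted field
}

# A thief who steals the blob can read every URL immediately; only the
# encrypted fields require cracking the master password first.
print(vault_entry["url"])
```

The practical consequence is that a stolen vault can reveal which services a user has accounts with before any password cracking even begins.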

Palant also notes that the encryption only does you any good if the hackers can’t crack your master password, which is LastPass’ main defense in its post: if you use its defaults for password length and strengthening and haven’t reused it on another site, “it would take millions of years to guess your master password using generally-available password-cracking technology,” wrote Karim Toubba, the company’s CEO.

“This prepares the ground for blaming the customers,” writes Palant, saying that “LastPass should be aware that passwords will be decrypted for at least some of their customers. And they have a convenient explanation already: these customers clearly didn’t follow their best practices.” However, he also points out that LastPass hasn’t necessarily enforced those standards. Despite the fact that it made 12-character passwords the default in 2018, Palant says, “I can log in with my eight-character password without any warnings or prompts to change it.”

LastPass’ post has even elicited a response from a competitor, 1Password — on Wednesday, the company’s principal security architect Jeffrey Goldberg wrote a post for its site titled “Not in a million years: It can take far less to crack a LastPass password.” In it, Goldberg calls LastPass’ claim of it taking a million years to crack a master password “highly misleading,” saying that the statistic appears to assume a 12-character, randomly generated password. “Passwords created by humans come nowhere near meeting that requirement,” he writes, saying that threat actors would be able to prioritize certain guesses based on how people construct passwords they can actually remember.

Of course, a competitor’s word should probably be taken with a grain of salt, though Palant echoes a similar idea in his post — he claims the viral XKCD method of creating passwords would take around 3 years to guess with a single GPU, while some 11-character passwords (that many people may consider to be good) would only take around 25 minutes to crack with the same hardware. It goes without saying that a motivated actor trying to crack into a specific target’s vault could probably throw more than one GPU at the problem, potentially cutting that time down by orders of magnitude.
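The gap between those estimates falls out of simple arithmetic: cracking time is roughly the size of the space an attacker must search divided by the guess rate. The guess rate and search-space sizes in the sketch below are assumptions chosen for illustration; they are not Palant’s or 1Password’s actual figures.

```python
# Back-of-the-envelope cracking-time arithmetic. The guess rate and the
# search-space sizes are illustrative assumptions, not the researchers' figures.
SECONDS_PER_YEAR = 31_557_600

def years_to_exhaust(search_space: float, guesses_per_second: float) -> float:
    """Worst-case time to try every candidate at a given guess rate."""
    return search_space / guesses_per_second / SECONDS_PER_YEAR

# Assumed rate: 10,000 PBKDF2-protected guesses per second on one GPU.
rate = 10_000

# A truly random 12-character password drawn from 95 printable characters:
print(f"random 12-char:        {years_to_exhaust(95**12, rate):.1e} years")

# A human-style password with roughly 40 bits of effective entropy (assumed):
print(f"~40-bit human pattern: {years_to_exhaust(2**40, rate):.1f} years")
```

The specific numbers matter less than the spread: the “millions of years” framing only holds for passwords with near-random entropy, which is exactly the assumption Goldberg and Palant are challenging.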

Both Gosney and Palant take issue with LastPass’ actual cryptography too, though for different reasons. Gosney accuses the company of basically committing “every ‘crypto 101’ sin” with how its encryption is implemented and how it manages data once it’s been loaded into your device’s memory.

Meanwhile, Palant criticizes the company’s post for painting its use of the password-strengthening algorithm PBKDF2 as “stronger-than-typical.” The idea behind the standard is that it makes brute-force guessing harder, because an attacker has to perform a set number of hash calculations for every guess. “I seriously wonder what LastPass considers typical,” writes Palant, “given that 100,000 PBKDF2 iterations are the lowest number I’ve seen in any current password manager.”
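For reference, PBKDF2 simply repeats a keyed hash a configurable number of times so that every login, and every cracking guess, pays the same computational cost. Here is a minimal sketch using Python’s standard library with the 100,000-iteration figure Palant cites as the current floor; the password and salt are placeholders.

```python
import hashlib
import os

# Derive a key from a master password with PBKDF2-HMAC-SHA256.
# 100,000 iterations is the floor Palant mentions; real deployments vary.
salt = os.urandom(16)                       # random per-user salt
master_password = b"placeholder-master-pw"  # placeholder only

vault_key = hashlib.pbkdf2_hmac(
    "sha256",          # underlying hash
    master_password,
    salt,
    100_000,           # iteration count: each guess must repeat this much work
    32,                # derive a 256-bit key
)
print(vault_key.hex())
```

Raising the iteration count multiplies the cost of every offline guess, which is why researchers pay so much attention to what each password manager’s default actually is.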

[…]

Source: The LastPass disclosure of leaked password vaults is being torn apart by security experts – The Verge

EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer

As smartphone manufacturers improve the ear speakers in their devices, it becomes easier for malicious actors to exploit a particular side channel to eavesdrop on a targeted user’s conversations, according to a team of researchers from several universities in the United States.

The attack method, named EarSpy, is described in a paper published just before Christmas by researchers from Texas A&M University, Temple University, New Jersey Institute of Technology, Rutgers University, and the University of Dayton.

EarSpy relies on the phone’s ear speaker — the speaker at the top of the device that is used when the phone is held to the ear — and the device’s built-in accelerometer for capturing the tiny vibrations generated by the speaker.

[…]

Android security has improved significantly and it has become increasingly difficult for malware to obtain the required permissions.

On the other hand, accessing raw data from the motion sensors in a smartphone does not require any special permissions. Android developers have started placing some restrictions on sensor data collection, but the EarSpy attack is still possible, the researchers said.

A piece of malware planted on a device could use the EarSpy attack to capture potentially sensitive information and send it back to the attacker.

[…]

The researchers found that attacks such as EarSpy are becoming increasingly feasible because of the improvements smartphone manufacturers are making to ear speakers. In tests on the OnePlus 7T and OnePlus 9, both Android phones fitted with stereo ear speakers, the accelerometer captured significantly more data from the ear speaker than it did on older OnePlus models, which lacked stereo speakers.

The experiments conducted by the academic researchers analyzed the reverberation effect of ear speakers on the accelerometer by extracting time-frequency domain features and spectrograms. The analysis focused on gender recognition, speaker recognition, and speech recognition.
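As a rough illustration of that kind of processing, and not the EarSpy authors’ actual pipeline, an accelerometer trace can be turned into a spectrogram in a few lines; the sampling rate and the synthetic signal below are assumptions.

```python
# Rough illustration of turning an accelerometer trace into a spectrogram.
# The sampling rate and synthetic signal are assumptions, not EarSpy's pipeline.
import numpy as np
from scipy.signal import spectrogram

fs = 500                            # assumed accelerometer sampling rate, Hz
t = np.arange(0, 5, 1 / fs)         # five seconds of samples
# Synthetic z-axis trace: weak tones buried in noise, standing in for
# speaker-induced vibrations picked up by the sensor.
accel_z = (0.01 * np.sin(2 * np.pi * 120 * t)
           + 0.005 * np.sin(2 * np.pi * 210 * t)
           + 0.02 * np.random.randn(t.size))

freqs, times, Sxx = spectrogram(accel_z, fs=fs, nperseg=256, noverlap=128)
print(Sxx.shape)  # time-frequency matrix that a classifier could be trained on
```

Features extracted from matrices like this are what such classifiers are trained on for the gender, speaker, and speech recognition results reported below.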

In the gender recognition test, whose goal is to determine whether the target is male or female, the EarSpy attack had a 98% accuracy. The accuracy was nearly as high, at 92%, for detecting the speaker’s identity.

When it comes to actual speech, the accuracy was up to 56% for capturing digits spoken in a phone call.

Source: EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer