The production of glass—one of humanity’s oldest materials—is getting a 21st century makeover. A new approach to glassmaking treats the material like plastic, allowing scientists to injection mold vaccine vials, sinuous channels for carrying out lab chemistry, and other complex shapes.
“It’s a really exciting paper,” says André Studart, a materials scientist at ETH Zürich. “This is a great way to form glass into complicated and interesting geometries.”
[…]
In 2017, researchers led by Frederik Kotz, a microsystems engineer at the Albert Ludwig University of Freiburg, set out to change that. They reworked a 3D printer to produce glass rather than plastic or metal.
The scientists created a printable powder by mixing silica nanoparticles with a polymer that could be cured with ultraviolet (UV) light. After printing the shapes they wanted, they cured the polymer with UV light so it would hold its shape. They then fired the mix in an oven to burn off the polymer and fuse the silica particles into a continuous glass structure.
The approach worked, making it possible to craft shapes such as tiny pretzels and replica castle gates. The work garnered interest from companies wanting to build minute lenses and other complex transparent optical components for telecommunications equipment. But the procedure was slow, turning out components one by one rather than producing parts en masse, as industrial processes do with plastic.
To speed things up, Kotz and his colleagues have now extended their nanocomposite approach to work with injection molding, a process used to mass produce plastic parts like toys and car bumpers by the ton. The researchers again started with tiny silica particles. The team then mixed the silica with two polymers, polyethylene glycol (PEG) and polyvinyl butyral (PVB). The mixture created a paste with the consistency of toothpaste, which the team fed into an extruder that pressed it into preformed molds for shapes such as a disc or a tiny gear.
Outside of the mold, the parts hold their shape because myriad weak attractive bonds, called van der Waals interactions, form between neighboring silica particles. But the parts are still fragile.
To harden them, the researchers used water to wash away the PEG. They then fired the remaining material in two stages: First at 600°C to burn out the PVB, and second at 1300°C to fuse the silica particles into the final piece.
“What you get in the end is high purity silica glass” in any shape you want, Kotz says. The glass parts also end up with the optical and chemical characteristics needed for commercial telecommunications devices and chemical reactors, he and his colleagues report today in Science.
[…]
However, Studart says this new approach to mass producing glass parts still faces a bottleneck: Washing away the PEG must be done slowly, over days, to ensure the glass parts don’t crack. Speed that up, he says, and injection molding of glass could become as popular as it is with plastic.
Signal announced on Tuesday that as part of its latest beta, it’s adding support for a new Signal Payments feature that allows Signal users to send “privacy focused payments as easily as sending or receiving a message.”
These payments are only going to be available to Android and iOS Signal users in the UK during this beta, and will use one specific payment network: MobileCoin, an open-source cryptocurrency that is itself still a prototype, according to the MobileCoin GitHub repo. The same page notes that the MobileCoin Wallet that someone would need in order to send these payments back and forth isn’t yet available for download by anyone in the U.S. As Wired notes, however, this is a new feature that the company wants to expand globally once it’s out of its infancy.
Unlike other popular texting apps that also offer a payment component—like, say, Facebook Messenger—MobileCoin doesn’t rely on funneling money from a user’s bank account in order to function. Instead, it’s a currency that lives on the blockchain, allowing payments made over MobileCoin to bypass the banking systems that routinely work with major data brokers in order to pawn off people’s transaction data.
Many technologists viscerally felt yesterday’s announcement as a punch to the gut when we heard that the Signal messaging app was bundling an embedded cryptocurrency. This news really cut to the heart of what many technologists have felt before when we as loyal users have been exploited and betrayed by corporations, but this time it felt much deeper because it introduced a conflict of interest from fellow technologists who we truly believed were advancing a cause many of us also believed in. So many of us have spent significant time and social capital moving our friends and family away from the exploitative data-siphon platforms that Facebook et al offer, and onto Signal, in the hopes of breaking the cycle of commercial exploitation of our online relationships. And some of us feel used.
Signal users are overwhelmingly tech-savvy consumers and we’re not idiots. Do they think we don’t see through the thinly veiled pump-and-dump scheme that’s proposed? It’s an old scam with a new face.
Allegedly the controlling entity prints 250 million units of some artificially scarce trashcoin called MOB (coincidence?) of which the issuing organization controls 85% of the supply. This token then floats on a shady offshore cryptocurrency exchange hiding in the Cayman Islands or the Bahamas, where users can buy and exchange the token. The token is wash traded back and forth by insiders and the exchange itself to artificially pump up the price before it’s dumped on users in the UK, who are encouraged to buy it for use, allegedly, as “payments”. All of this while insiders are free to silently use information asymmetry to cash out on the influx of pumped hype-driven buys before the token crashes in value. Did I mention that the exchange that floats the token is the primary investor in the company itself? Does anyone else see a major conflict of interest here?
Let it be said that everything here is probably entirely legal or there simply is no precedent yet. The question everyone is asking before these projects launch now though is: should it be?
Some people on Reddit are arguing that because they donated, they should be able to tell the developers what they should and should not be doing.
IMHO an open source developer is free to work on whatever projects they choose and combine them as much as they want. They are not “paid” by the couple of dollars someone donates every month. This is a completely optional extra setting which is off by default. Signal is not mining crypto with the app. People are free to fork Signal into another project without the payment option. Is it a pump and dump? I hope not. What is for sure though is that money is tight in the Free and Open Source Software (FOSS) arena, and it’s not surprising that people are jumping in strange directions to find a way to monetise a hugely popular product that mostly brings its developers stress: rude know-it-all users who refuse to actually contribute, and an idealistic, fanatical mindset from FOSS purists who draw salaries elsewhere while the project earns hardly any income at all.
Before he lost it all—all $20 billion—Bill Hwang was the greatest trader you’d never heard of.
Starting in 2013, he parlayed more than $200 million left over from his shuttered hedge fund into a mind-boggling fortune by betting on stocks. Had he folded his hand in early March and cashed in, Hwang, 57, would have stood out among the world’s billionaires. There are richer men and women, of course, but their money is mostly tied up in businesses, real estate, complex investments, sports teams, and artwork. Hwang’s $20 billion net worth was almost as liquid as a government stimulus check. And then, in two short days, it was gone.
[…]
Modest on the outside, Hwang had all the swagger he needed inside the Wall Street prime-brokerage departments that finance big investors. He was a “Tiger cub,” an alumnus of Tiger Management, the hedge fund powerhouse that Julian Robertson founded. In the 2000s, Hwang ran his own fund, Tiger Asia Management, which peaked at about $10 billion in assets.
It didn’t matter that he’d been accused of insider trading by U.S. securities regulators or that he pleaded guilty to wire fraud on behalf of Tiger Asia in 2012. Archegos, the family office he founded to manage his personal wealth, was a lucrative client for the banks, and they were eager to lend Hwang enormous sums.
On March 25, when Hwang’s financiers were finally able to compare notes, it became clear that his trading strategy was strikingly simple. Archegos appears to have plowed most of the money it borrowed into a handful of stocks—ViacomCBS, GSX Techedu, and Shopify among them.
[…]
At least once, Hwang stepped over the line between aggressive and illegal. In 2012, after years of investigations, the U.S. Securities and Exchange Commission accused Tiger Asia of insider trading and manipulation in two Chinese bank stocks. The agency said Hwang “crossed the wall,” receiving confidential information about pending share offerings from the underwriting banks and then using it to reap illicit profits.
Hwang settled that case without admitting or denying wrongdoing, and Tiger Asia pleaded guilty to a U.S. Department of Justice charge of wire fraud.
[…]
U.S. rules prevent individual investors from buying securities with more than 50% of the money borrowed on margin. No such limits apply to hedge funds and family offices. People familiar with Archegos say the firm steadily ramped up its leverage. Initially that meant about “2x,” or $1 million borrowed for every $1 million of capital. By late March the leverage was 5x or more.
Hwang also kept his banks in the dark by trading via swap agreements. In a typical swap, a bank gives its client exposure to an underlying asset, such as a stock. While the client gains—or loses—from any changes in price, the bank shows up in filings as the registered holder of the shares.
That’s how Hwang was able to amass huge positions so quietly. And because lenders had details only of their own dealings with him, they, too, couldn’t know he was piling on leverage in the same stocks via swaps with other banks. ViacomCBS Inc. is one example. By late March, Archegos had exposure to tens of millions of shares of the media conglomerate through Morgan Stanley, Goldman Sachs Group Inc., Credit Suisse, and Wells Fargo & Co. The largest holder of record, indexing giant Vanguard Group Inc., had 59 million shares.
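To picture the blind spot, here is a toy sketch with made-up per-bank numbers: each prime broker books only its own swap with the client, so each view looks tolerable, while the aggregate that would reveal the concentration is never computed anywhere.

```python
# Toy illustration (made-up figures) of why swaps hid the total exposure:
# every bank sees only its own book, never the sum across Wall Street.
swap_books = {  # bank -> ViacomCBS exposure via swaps with one client (shares)
    "Morgan Stanley": 15_000_000,
    "Goldman Sachs":  12_000_000,
    "Credit Suisse":  14_000_000,
    "Wells Fargo":    10_000_000,
}

for bank, shares in swap_books.items():
    print(f"{bank} sees only its own book: {shares:,} shares")

print(f"Aggregate nobody sees: {sum(swap_books.values()):,} shares")  # 51,000,000
```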
[…]
At some point in the past few years, Hwang’s investments shifted from mainly tech companies to a more eclectic mix. Media conglomerates ViacomCBS and Discovery Inc. became huge holdings. So did at least four Chinese stocks: GSX Techedu, Baidu, Iqiyi, and Vipshop.
Although it’s impossible to know exactly when Archegos did those swap trades, there are clues in the regulatory filings by his banks. Starting in the second quarter of 2020, all Hwang’s banks became big holders of stocks he bet on. Morgan Stanley went from 5.22 million shares of Vipshop Holdings Ltd. as of June 30, to 44.6 million by Dec. 31.
Leverage was playing a growing role, and Hwang was looking for more. Credit Suisse and Morgan Stanley had been doing business with Archegos for years, unperturbed by Hwang’s brush with regulators. Goldman, however, had blacklisted him. Compliance officials who frowned on his checkered past repeatedly blocked internal efforts to open an account for Archegos, according to people with direct knowledge of the matter.
[…]
The fourth quarter of 2020 was a fruitful one for Hwang. While the S&P 500 rose almost 12%, seven of the 10 stocks Archegos was known to hold gained more than 30%, with Baidu, Vipshop, and Farfetch jumping at least 70%.
All that activity made Archegos one of Wall Street’s most coveted clients. People familiar with the situation say it was paying prime brokers tens of millions of dollars a year in fees, possibly more than $100 million in total. As his swap accounts churned out cash, Hwang kept accumulating extra capital to invest—and to lever up. Goldman finally relented and signed on Archegos as a client in late 2020. Weeks later it all would end in a flash.
[Chart: Damage to Hwang’s Investments (share price). Data: Compiled by Bloomberg]
The first in a cascade of events during the week of March 22 came shortly after the 4 p.m. close of trading that Monday in New York. ViacomCBS, struggling to keep up with Apple TV, Disney+, Home Box Office, and Netflix, announced a $3 billion sale of stock and convertible debt. The company’s shares, propelled by Hwang’s buying, had tripled in four months. Raising money to invest in streaming made sense. Or so it seemed in the ViacomCBS C-suite.
Instead, the stock tanked 9% on Tuesday and 23% on Wednesday. Hwang’s bets suddenly went haywire, jeopardizing his swap agreements. A few bankers pleaded with him to sell shares; he would take losses and survive, they reasoned, avoiding a default. Hwang refused, according to people with knowledge of those discussions, the long-ago lesson from Robertson evidently forgotten.
That Thursday his prime brokers held a series of emergency meetings. Hwang, say people with swaps experience, likely had borrowed roughly $85 for every $20 of his own capital, investing $100 and setting aside $5 to post as margin when needed. But the massive portfolio had cratered so quickly that its losses blew through that small buffer as well as his capital.
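To make the arithmetic concrete, here is a toy calculation using the rough figures above; the numbers are illustrative, not Archegos’s actual book.

```python
# Illustrative margin math: $20 of capital, $85 borrowed, $100 invested,
# $5 set aside as a buffer to post as margin. Equity is what remains after
# repaying the banks; a 20% portfolio drop wipes out capital and buffer alike.
capital, borrowed, buffer = 20.0, 85.0, 5.0
invested = capital + borrowed - buffer  # $100 in stock

for drop in (0.05, 0.10, 0.20, 0.30):
    portfolio = invested * (1 - drop)
    equity = portfolio + buffer - borrowed
    print(f"{drop:.0%} drop -> equity ${equity:.2f}")
# 5% -> $15.00, 10% -> $10.00, 20% -> $0.00, 30% -> -$10.00
```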
The dilemma for Hwang’s lenders was obvious. If the stocks in his swap accounts rebounded, everyone would be fine. But if even one bank flinched and started selling, they’d all be exposed to plummeting prices. Credit Suisse wanted to wait.
Late that afternoon, without a word to its fellow lenders, Morgan Stanley made a preemptive move. The firm quietly unloaded $5 billion of its Archegos holdings at a discount, mainly to a group of hedge funds. On Friday morning, well before the 9:30 a.m. New York open, Goldman started liquidating $6.6 billion in blocks of Baidu, Tencent Music Entertainment Group, and Vipshop. It soon followed with $3.9 billion of ViacomCBS, Discovery, Farfetch, Iqiyi, and GSX Techedu.
When the smoke finally cleared, Goldman, Deutsche Bank AG, Morgan Stanley, and Wells Fargo had escaped the Archegos fire sale unscathed. There’s no question they moved faster to sell. It’s also possible they had extended less leverage or demanded more margin. As of now, Credit Suisse and Nomura appear to have sustained the greatest damage. Mitsubishi UFJ Financial Group Inc., another prime broker, has disclosed $300 million in likely losses.
It’s all eerily reminiscent of the subprime-mortgage crisis 14 years ago. Then, as now, the trouble was a series of increasingly irresponsible loans. As long as housing prices kept rising, lenders ignored the growing risks. Only when homeowners stopped paying did reality bite: The banks all had financed so much borrowing that the fallout couldn’t be contained.
[…]
The best thing anyone can say about the Archegos collapse is that it didn’t spark a market meltdown. The worst thing is that it was an entirely preventable disaster made possible by Hwang’s lenders. Had they limited his leverage or insisted on more visibility into the business he did across Wall Street, Archegos would have been playing with fire instead of dynamite. It might not have defaulted. Regulators are to blame, too. As Congress was told at hearings following the GameStop Corp. debacle in January, there’s not enough transparency in the stock market. European rules require the party bearing the economic risk of an investment to disclose its interest. In the U.S., whales such as Hwang can stay invisible.
We updated our personal data leak checker database with more than 780,000 email addresses associated with this leak. Use it to find out if your LinkedIn profile has been scraped by the threat actors.
Days after a massive Facebook data leak made the headlines, it seems like we’re in for another one, this time involving LinkedIn.
An archive containing data purportedly scraped from 500 million LinkedIn profiles has been put up for sale on a popular hacker forum, with another 2 million records leaked as a proof-of-concept sample by the post author.
The four leaked files contain information about the LinkedIn users whose data has been allegedly scraped by the threat actor, including their full names, email addresses, phone numbers, workplace information, and more.
To see if your email address has been exposed in this data leak or other security breaches, use our personal data leak checker with a library of 15+ billion breached records.
While users on the hacker forum can view the leaked samples for about $2 worth of forum credits, the threat actor appears to be auctioning the much-larger 500 million user database for at least a 4-digit sum, presumably in bitcoin.
The author of the post claims that the data was scraped from LinkedIn. Our investigation team was able to confirm this by looking at the samples provided on the hacker forum. However, it’s unclear whether the threat actor is selling up-to-date LinkedIn profiles, or if the data has been taken or aggregated from a previous breach suffered by LinkedIn or other companies.
We asked LinkedIn if they could confirm that the leak was genuine, and whether they have alerted their users and clients, but we had received no reply from the company at the time of writing.
What was leaked?
Based on the samples we saw from the leaked files, they appear to contain a variety of mostly professional information from LinkedIn profiles, including full names, email addresses, phone numbers, and workplace information.
A database containing the phone numbers of more than half a billion Facebook users is being freely traded online, and Facebook is trying to pin the blame on everyone but themselves.
A blog post titled “The Facts on News Reports About Facebook Data,” published Tuesday evening, is designed to silence the growing criticism the company is facing for failing to protect the phone numbers and other personal information of 533 million users after a database containing that information was shared for free in low-level hacking forums over the weekend, as first reported by Business Insider.
Facebook initially dismissed the reports as irrelevant, claiming the data was leaked years ago and so the fact it had all been collected into one uber database containing one in every 15 people on the planet—and was now being given away for free—didn’t really matter.
[…]
But, instead of owning up to its latest failure to protect user data, Facebook is pulling from a familiar playbook: just like it did during the Cambridge Analytica scandal in 2018, it’s attempting to reframe the security failure as merely a breach of its terms of service.
So instead of apologizing for failing to keep users’ data secure, Facebook’s product management director Mike Clark began his blog post by making a semantic point about how the data was leaked.
“It is important to understand that malicious actors obtained this data not through hacking our systems but by scraping it from our platform prior to September 2019,” Clark wrote.
This is the same excuse Facebook gave in 2018, when it was revealed that the company had given Cambridge Analytica the data of 87 million users without their permission, for use in political ads.
Clark goes on to explain that the people who collected this data—sorry, “scraped” this data—did so by using a feature designed to help new users find their friends on the platform.
“This feature was designed to help people easily find their friends to connect with on our services using their contact lists,” Clark explains.
The contact importer feature allowed new users to upload their contact lists and match those numbers against the numbers stored on people’s profiles. But like most of Facebook’s best features, the company left it wide open to abuse by hackers.
“Effectively, the attacker created an address book with every phone number on the planet and then asked Facebook if his ’friends’ are on Facebook,” security expert Mikko Hypponen explained in a tweet.
Clark’s blog post doesn’t say when the “scraping” took place or how many times the vulnerability was exploited, just that Facebook fixed the issue in August 2019. Clark also failed to mention that Facebook was informed of this vulnerability way back in 2017, when Inti De Ceukelaire, an ethical hacker from Belgium, disclosed the problem to the company.
And the company hasn’t explained why a number of users who deleted their accounts long before 2018 have seen their phone numbers turn up in this database.
[…]
“While we addressed the issue identified in 2019, it’s always good for everyone to make sure that their settings align with what they want to be sharing publicly,” Clark wrote.
“In this case, updating the ‘How People Find and Contact You’ control could be helpful. We also recommend people do regular privacy checkups to make sure that their settings are in the right place, including who can see certain information on their profile and enabling two-factor authentication.”
It’s an audacious move for a company worth over $300 billion, with $61 billion cash on hand, to ask its users to secure their own information, especially considering how byzantine and complex the company’s settings menus can be.
Thankfully for the half a billion Facebook users who’ve been impacted by the breach, there’s a more practical way to get help. Troy Hunt, a cyber security consultant and founder of Have I Been Pwned, has uploaded the entire leaked database to his website, which allows anyone to check whether their phone number is listed in the leak.
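Have I Been Pwned can also be queried programmatically. Below is a minimal sketch against its v3 API based on the public documentation; a paid API key is required, and this is an illustration rather than an official client.

```python
# Minimal Have I Been Pwned v3 query: returns the breaches an account appears
# in, or an empty list if the API responds 404 (not found in any known breach).
import requests

def breaches_for(account: str, api_key: str) -> list:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-demo"},
        params={"truncateResponse": "false"},
    )
    if resp.status_code == 404:
        return []
    resp.raise_for_status()
    return resp.json()

# for b in breaches_for("user@example.com", "YOUR_API_KEY"):
#     print(b["Name"], b["BreachDate"])
```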
Austrian privacy activist Max Schrems has filed a complaint against Google in France alleging that the US tech giant is illegally tracking users on Android phones without their consent.
Android phones generate unique advertising codes, similar to Apple’s Identifier for Advertisers (IDFA), that allow Google and third parties to track users’ browsing behavior in order to better target them with advertising.
In a complaint filed on Wednesday, Schrems’ campaign group Noyb argued that in creating and storing these codes without first obtaining explicit permission from users, Google was engaging in “illegal operations” that violate EU privacy laws.
Noyb urged France’s data privacy regulator to launch a probe into Google’s tracking practices and to force the company to comply with privacy rules. It argued that fines should be imposed on the tech giant if the watchdog finds evidence of wrongdoing.
“Through these hidden identifiers on your phone, Google and third parties can track users without their consent,” said Stefano Rossetti, privacy lawyer at Noyb. “It is like having powder on your hands and feet, leaving a trace of everything you do on your phone—from whether you swiped right or left to the song you downloaded.”
[…]
Last year, Schrems won a landmark case at Europe’s highest court that ruled a transatlantic agreement on transferring data between the bloc and the US used by thousands of corporations did not protect EU citizens’ privacy.
On the 27th anniversary of Kurt Cobain’s death, Engadget reports: Were he still alive today, Nirvana frontman Kurt Cobain would be 54 years old. Every February 20th, on the day of his birthday, fans wonder what songs he would write if he hadn’t died by suicide nearly 30 years ago. While we’ll never know the answer to that question, an AI is attempting to fill the gap.
A mental health organization called Over the Bridge used Google’s Magenta AI and a generic neural network to examine more than two dozen songs by Nirvana to create a ‘new’ track from the band. “Drowned in the Sun” opens with reverb-soaked plucking before turning into an assault of distorted power chords. “I don’t care/I feel as one, drowned in the sun,” Nirvana tribute band frontman Eric Hogan sings in the chorus. In execution, it sounds not all that dissimilar from “You Know You’re Right,” one of the last songs Nirvana recorded before Cobain’s death in 1994.
Other than the voice of Hogan, everything you hear in the song was generated by the two AI programs Over the Bridge used. The organization first fed Magenta songs as MIDI files so that the software could learn the specific notes and harmonies that made the band’s tunes so iconic. Humorously, Cobain’s loose and aggressive guitar playing style gave Magenta some trouble, with the AI mostly outputting a wall of distortion instead of something akin to his signature melodies. “It was a lot of trial and error,” Over the Bridge board member Sean O’Connor told Rolling Stone. Once they had some musical and lyrical samples, the creative team picked the best bits to record. Most of the instrumentation you hear is MIDI tracks with different effects layered on top.
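Over the Bridge hasn’t published its pipeline, but the preprocessing it describes, feeding songs in as MIDI so a model can learn notes and timings, looks roughly like this sketch using the pretty_midi library; the filename is hypothetical, and this is not the project’s actual code.

```python
# Illustrative MIDI preprocessing: pull (pitch, start, duration) tuples out of
# a MIDI file so a sequence model has symbolic notes to train on.
import pretty_midi

midi = pretty_midi.PrettyMIDI("nirvana_song.mid")  # hypothetical input file
for inst in midi.instruments:
    if inst.is_drum:
        continue
    notes = [(n.pitch, round(n.start, 2), round(n.end - n.start, 2))
             for n in inst.notes]
    print(inst.name or f"program {inst.program}", notes[:8])  # first few notes
```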
Some thoughts from The Daily Dot: Rolling Stone also highlighted lyrics like, “The sun shines on you but I don’t know how,” and what is called “a surprisingly anthemic chorus” including the lines, “I don’t care/I feel as one, drowned in the sun,” remarking that they “bear evocative, Cobain-esque qualities….”
Neil Turkewitz went full Comic Book Guy, opining, “A perfect illustration of the injustice of developing AI through the ingestion of cultural works without the authorization of [its] creator, and how it forces creators to be indentured servants in the production of a future out of their control,” adding, “That it’s for a good cause is irrelevant.”
This notice claims to identify several problematic URLs that allegedly infringe the copyrights of Disney’s hit series The Mandalorian. This is not unexpected, as The Mandalorian was the most pirated TV show of last year, as we reported in late December. However, we didn’t expect to see our article as one of the targeted links in the notice. Apparently, the news that The Mandalorian is widely pirated — which was repeated by dozens of other publications — is seen as copyright infringement?
Needless to say, we wholeheartedly disagree. This is not the way.
TorrentFreak specifies that the article in question “didn’t host or link to any infringing content.” (TorrentFreak’s article was even linked to by major sites including CNET, Forbes, Variety, and even Slashdot.)
TorrentFreak also reports that it wasn’t Disney who filed the takedown request, but GFM Films… At first, we thought that the German camera company GFM could have something to do with it, as they worked on The Mandalorian. However, earlier takedown notices from the same sender protected the film “The Last Witness,” which is linked to the UK company GFM Film Sales. Since we obviously don’t want to falsely accuse anyone, we’re not pointing fingers.
So what happens next? We will certainly put up a fight if Google decides to remove the page. At the time of writing, this has yet to happen. The search engine currently lists the takedown request as ‘pending,’ which likely means that there will be a manual review. The good news is that Google is usually pretty good at catching overbroad takedown requests. This is also true for TorrentFreak articles that were targeted previously, including our coverage on the Green Book screener leak.
A user in a low-level hacking forum on Saturday published the phone numbers and personal data of hundreds of millions of Facebook users for free online.
The exposed data includes personal information of over 533 million Facebook users from 106 countries, including over 32 million records on users in the US, 11 million on users in the UK, and 6 million on users in India. It includes their phone numbers, Facebook IDs, full names, locations, birthdates, bios, and — in some cases — email addresses.
Insider reviewed a sample of the leaked data and verified several records by matching known Facebook users’ phone numbers with the IDs listed in the data set. We also verified records by testing email addresses from the data set in Facebook’s password reset feature, which can be used to partially reveal a user’s phone number.
A Facebook spokesperson told Insider that the data was scraped due to a vulnerability that the company patched in 2019.
[…]
This is not the first time that a huge number of Facebook users’ phone numbers have been found exposed online. The vulnerability that was uncovered in 2019 allowed millions of people’s phone numbers to be scraped from Facebook’s servers in violation of its terms of service. Facebook said that vulnerability was patched in August 2019.
Facebook previously vowed to crack down on mass data-scraping after Cambridge Analytica scraped the data of 87 million users in violation of Facebook’s terms of service to target voters with political ads in the 2016 election.
Sierra Nevada Corporation (SNC) has unveiled plans for an enormous inflatable space station tended by cargo and crew carrying versions of its Dream Chaser spaceplane.
“There is no scalable space travel industry without a spaceplane,” said SNC chair and owner Eren Ozmen.
That’s handy, because with the retirement of the Space Shuttle, the Dream Chaser is near as dammit the last spaceplane standing. NASA, however, disagreed and selected Boeing’s Calamity Capsule and SpaceX’s Crew Dragon for transportation purposes to and from the International Space Station (ISS).
The space agency did, however, pop SNC into the second round of ISS Commercial Resupply Services (CRS-2), meaning the reusable cargo version of the spaceplane will see orbital action once assembly is complete (due this summer with launch expected late in 2022), but the crew version was not to be troubling the old Space Shuttle runway at Kennedy Space Center.
SNC’s proposal for a space station as an alternative for the ageing ISS is the LIFE habitat: a 27-foot-long, three-storey inflatable module that launches on a conventional rocket and inflates once in orbit. A full-sized prototype is currently being transferred from Johnson Space Center in Texas to Kennedy Space Center in Florida.
The crewed version of the Dream Chaser has also been resurrected and is planned to be used to both “shuttle” private astronauts (we see what you did there, SNC) as well as “rescuing astronauts from space destinations and returning them to Earth via a safe and speedy runway landing.”
wiredog shares a ZDNet report: I have literally been covering SCO’s legal attempts to prove that IBM illegally copied Unix’s source code into Linux for over 17 years. I’ve written well over 500 stories on this lawsuit and its variants. I really thought it was dead, done, and buried. I was wrong. Xinuos, which bought SCO’s Unix products and intellectual property (IP) in 2011, like a bad zombie movie, is now suing IBM and Red Hat [for] “illegally copying Xinuos’ software code for its server operating systems.” For those of you who haven’t been around for this epic IP lawsuit, you can get the full story with “27 eight-by-ten color glossy photographs and circles and arrows and a paragraph on the back of each one” from Groklaw. If you’d rather not spend a couple of weeks going over the cases, here’s my shortened version. Back in 2001, SCO, a Unix company, joined forces with Caldera, a Linux company, to form what should have been a major Red Hat rival. Instead, two years later, SCO sued IBM in an all-out legal attack against Linux.
The fact that most of you don’t know either company’s name gives you an idea of how well that lawsuit went. SCO’s Linux lawsuit made no sense and no one at the time gave it much of a chance of succeeding. Over time it was revealed that Microsoft had been using SCO as a sock puppet against Linux. Unfortunately for Microsoft and SCO, it soon became abundantly clear that SCO didn’t have a real case against Linux and its allies. SCO lost battle after battle. The fatal blow came in 2007 when SCO was proven to have never owned the copyrights to Unix. So, by 2011, the only thing of value left in SCO, its Unix operating systems, was sold to UnXis. This acquisition, which puzzled most, actually made some sense. SCO’s Unix products, OpenServer and Unixware, still had a small, but real market. At the time, UnXis, now operating under the name Xinuos, stated it had no interest in SCO’s worthless lawsuits. In 2016, CEO Sean Snyder said, “We are not SCO. We are investors who bought the products. We did not buy the ability to pursue litigation against IBM, and we have absolutely no interest in that.” So, what changed? The company appears to have fallen on hard times. As Snyder stated: “systems, like our FreeBSD-based OpenServer 10, have been pushed out of the market.” Officially, in his statement, Snyder now says, “While this case is about Xinuos and the theft of our intellectual property, it is also about market manipulation that has harmed consumers, competitors, the open-source community, and innovation itself.”
Apparently, if the GPS on your shiny new DJI FPV Drone detects that it’s not in the United States, it will turn down its transmitter power so as not to run afoul of the more restrictive radio limits elsewhere around the globe. So while all the countries that have put boots on the Moon get to enjoy the full 1,412 mW of power the hardware is capable of, the drone’s software limits everyone else to a paltry 25 mW. As you can imagine, that leads to a considerable performance penalty in terms of range.
But not anymore. A web-based tool called B3YOND promises to reinstate the full power of your DJI FPV Drone no matter where you live by tricking it into believing it’s in the USA. Developed by the team at [D3VL], the unlocking tool uses the new Web Serial API to send the appropriate “FCC Mode” command to the drone’s FPV goggles over USB. Everything is automated, so this hack is available to anyone who’s running a recent version of Chrome or Edge and can click a button a few times.
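The Web Serial API is just the transport; conceptually, the unlock amounts to opening the goggles’ USB serial port and writing a mode-switch packet. Here is that idea sketched in Python with pyserial instead of the browser; the port name and payload are placeholders, since the actual DJI command bytes aren’t published here.

```python
# Conceptual sketch of what the browser tool does over USB serial. The real
# B3YOND tool runs in Chrome/Edge via the Web Serial API; FCC_MODE_CMD below
# is a placeholder byte string, NOT the real DJI "FCC mode" packet.
import serial  # pyserial

FCC_MODE_CMD = b"\x00"  # placeholder payload

with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as goggles:  # hypothetical port
    goggles.write(FCC_MODE_CMD)
    print(goggles.read(64))  # read back any acknowledgement
```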
Finding an extra $10 charge on your groceries is enough to make most people angry, but what if you paid twice for a $56,000 car? Tesla buyers have been reporting that they’ve been double-charged for recent car purchases and have had trouble contacting the company and getting their money back, according to a report from CNBC and posts on Twitter and the Tesla Motors Club forum.
[…]
As of yesterday, the customers mentioned in the CNBC report have yet to receive their refunds and all have refused to take delivery until the problem is resolved. “This was not some operator error,” Peterson said. “And for a company that has so much technology skill, to have this happening to multiple people really raises questions.” Engadget has reached out for comment.
Virgin Galactic took to YouTube to reveal, briefly, its first SpaceShip III, which will start ground tests and “glide flights” later this year. It’s an eye-catching vessel, channeling that Star Wars: The Phantom Menace Naboo starship look in a wonderful way. It’s finished with a mirror-like material that’s meant to reflect its surroundings, whether that’s the blackness of space or the blueness of Earth’s atmosphere. It’s not all about aesthetics: it also offers thermal protection.
Now, for the first time ever, scientists have evidence showing they can reverse false memories, according to a study published in the journal Proceedings of the National Academy of Sciences.
“The same way that you can suggest false memories, you can reverse them by giving people a different framing,” the lead researcher of the paper, Aileen Oeberst, head of the Department of Media Psychology at the University of Hagen, told Gizmodo. “It’s interesting, scary even.”
[…]
“As the field of memory research has developed, it’s become very clear that our memories are not ‘recordings’ of the past that can be played back but rather are reconstructions, closer to imaginings informed by seeds of true experiences,” Christopher Madan, a memory researcher at the University of Nottingham who was not involved in the new study, told Gizmodo.
[…]
Building off of that, Oeberst’s lab recently implanted false memories in 52 people by using suggestive interviewing techniques. First, they had the participants’ parents privately answer a questionnaire and come up with some real childhood memories and two plausible, but fake, ones—all negative in nature, such as how their pet died or when they lost their toy. Then they had researchers ask the participants to recall these made-up events in a detailed manner, including specifics about what happened. For example, “Your parents told us that when you were 12 years old during a holiday in Italy with your family you got lost. Can you tell me more about it?”
The test subjects met their interviewer three times, once every two weeks, and by the third session most participants believed these anecdotes were true, and over half (56%) developed and recollected actual false memories—a significantly higher percentage than most studies in this area of research.
These findings reveal the depth of false memory and fit closely with prior research in the field, according to Robert Nash, a psychologist at Aston University who was not involved in the study. “Such as the fact that some of the false memories arose almost immediately, even in the first interview, the fact that they increased in richness and frequency with each successive interview, and the fact that more suggestive techniques led to much higher levels of false remembering and believing,” Nash told Gizmodo.
According to Henry Otgaar, a false memory researcher at Maastricht University who was a reviewer of this study, there’s been an increase in people thinking that it’s difficult to implant false memories. This work is important in showing the relative ease by which people can form such false memories, he told Gizmodo.
“Actually, what we see in lab experiments is highly likely underestimation of what we see in real-world cases, in which, for example, a police officer or a therapist, suggestively is dredging for people’s memories that perhaps are not there for weeks, for months, in a highly suggestive fashion,” he said, suggesting this is what happens in some cases of false confessions.
But researchers, to some extent, already knew how easy it is to trick our memories. Oeberst’s study is innovative in suggesting that it’s equally as easy to reverse those false memories. And knowing the base truth about what actually happened isn’t even necessary to revert the fake recollections.
In the experiment, Oeberst had another interviewer ask participants to identify whether any of their memories could be false, by simply thinking critically about them. The scientists used two “sensitization” techniques: One, source sensitization, where they asked participants to recall the exact source of the memory (what is leading you to remember this; what specific recollection do you, yourself, have?). And two, false memory sensitization, where they explained to the subjects that sometimes being pressured to recall something can elicit false memories.
“And they worked, they worked!” Oeberst said, adding that of course not every single participant was persuaded that their memory was false.
Particularly with the false memory sensitization strategy, participants seemed to regain their trust in their initial gut feeling of what they did and didn’t remember, as if empowered to trust their own recollection more. “I don’t recollect this and maybe it’s not my fault, maybe it’s actually my parents who made something up or they were wrong,” Oeberst said, mimicking the participants’ thought process. “Basically, it’s a different solution to the same riddle.” According to Oeberst, the technique by which false memories are implanted is the same used to reverse them, “just from a different angle, the opposite angle.”
The memories didn’t completely vanish for everybody; 15% to 25% of the participants still believed their false memories were real, and this is roughly the same amount of people who accepted false memories right after the first interview. A year later, 74% of all participants still recognized which were false memories or didn’t remember them at all.
“Up until now, we didn’t have any way to reject or reverse false memory formation,” said Otgaar, who has published over 100 studies on false memory. “But it’s very simple, and with such a simple manipulation that this can already lead to quite strong effects. That’s really interesting.”
The researchers also suggest reframing thinking about false memories in terms of “false remembering,” an action determined by information and context, rather than “false memories,” as if memories were stable files in a computer.
“This is especially important, I think, insofar that remembering is always contextual. It’s less helpful for us to think about whether or not people ‘have’ a false memory and more helpful to think of the circumstances in which people are more or less likely to believe they are remembering,” said Nash.
SpaceX continued its rich tradition of destroying Starship prototypes with SN11 succumbing to an explosive end during a high-altitude flight test.
Originally planned for 29 March, the test flight from the company’s facility in Boca Chica, Texas, had been postponed until this morning because a Federal Aviation Administration (FAA) inspector had been unable to reach the site in time to observe the test.
The inspector was present today to witness another demonstration of Tesla Technoking Elon Musk’s prowess at blowing up big, shiny rockets.
The test was a repeat of the Serial Number 10 prototype vehicle flight earlier in March. SN10 broke the heart of SpaceX fanbois around the globe by coming so close to complete success. That vehicle managed to return from its high-altitude test in one piece, landing upright. However, seconds later it exploded spectacularly, leaving the way clear (except for some bits of twisted metal) for SN11.
With SN10 almost succeeding, hopes were high for SN11.
The silver rocket, obscured by mist, launched on time. The three Raptor engines appeared to burn normally during the flight, with one shutting down just after the two-minute mark as planned. A second engine was then shut down before the vehicle reached the desired 10km point and the last engine was cut off.
Despite spotty video, the signature “belly flop” of the vehicle was visible as SN11 flipped over for its return to Earth. As it passed through 1km in altitude (according to the SpaceX announcer) the Raptors could be seen gimballing into position and at least one igniting.
And then the video froze again.
However, the audio continued for a few more seconds before a very audible bang was heard. Shortly after, SpaceX’s announcer returned to the air to confirm “another exciting test.”
In three years or so, the Wi-Fi specification is scheduled to get an upgrade that will turn wireless devices into sensors capable of gathering data about the people and objects bathed in their signals.
“When 802.11bf will be finalized and introduced as an IEEE standard in September 2024, Wi-Fi will cease to be a communication-only standard and will legitimately become a full-fledged sensing paradigm,” explains Francesco Restuccia, assistant professor of electrical and computer engineering at Northeastern University, in a paper summarizing the state of the Wi-Fi Sensing project (SENS) currently being developed by the Institute of Electrical and Electronics Engineers (IEEE).
SENS is envisioned as a way for devices capable of sending and receiving wireless data to use Wi-Fi signal interference differences to measure the range, velocity, direction, motion, presence, and proximity of people and objects.
It may come as no surprise that the security and privacy considerations of Wi-Fi-based sensing have not received much attention.
As Restuccia warns in his paper, “As yet, research and development efforts have been focused on improving the classification accuracy of the phenomena being monitored, with little regard to S&P [security and privacy] issues. While this could be acceptable from a research perspective, we point out that to allow widespread adoption of 802.11bf, ordinary people need to trust its underlying technologies. Therefore, S&P guarantees must be provided to the end users.”
[…]
“Indeed, it has been shown that SENS-based classifiers can infer privacy-critical information such as keyboard typing, gesture recognition and activity tracking,” Restuccia explains. “Given the broadcast nature of the wireless channel, a malicious eavesdropper could easily ‘listen’ to CSI [Channel State Information] reports and track the user’s activity without authorization.”
And worse still, he argues, such tracking can be done surreptitiously because Wi-Fi signals can penetrate walls, don’t require light, and don’t offer any visible indicator of their presence.
Restuccia suggests there needs to be a way to opt-out of SENS-based surveillance; a more privacy-friendly stance would be to opt-in, but there’s not much precedent for seeking permission in the technology industry.
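To see why researchers worry, it helps to note how little signal processing is needed. The toy sketch below flags motion from synthetic CSI amplitude data with nothing more than a variance threshold; real 802.11bf classifiers are far more sophisticated, and the arrays here are stand-ins, not real captures.

```python
# Toy SENS-style presence detection: when a person moves, multipath changes
# make CSI amplitudes fluctuate, so per-window variance spikes.
import numpy as np

def motion_flags(csi: np.ndarray, window: int = 50, threshold: float = 0.5) -> np.ndarray:
    """True for each time window whose CSI amplitude variance spikes."""
    n = csi.shape[0] // window
    chunks = csi[: n * window].reshape(n, window, -1)
    variance = chunks.var(axis=1).mean(axis=-1)  # variance over time, mean over subcarriers
    return variance > threshold

rng = np.random.default_rng(0)
still = rng.normal(1.0, 0.05, (500, 64))                       # empty room: stable channel
moving = still + 3 * np.sin(np.linspace(0, 30, 500))[:, None]  # person walking: big swings
print(motion_flags(np.vstack([still, moving])))  # [False ... then True ...]
```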
In a recently released research paper, titled “Mobile Handset Privacy: Measuring The Data iOS and Android Send to Apple And Google” [PDF], Douglas Leith, chairman of computer systems in the school of computer science and statistics at Trinity College Dublin, Ireland, documents how iPhones and Android devices phone home regardless of the wishes of their owners.
According to Leith, Android and iOS handsets share data about their salient characteristics with their makers every 4.5 minutes on average.
“The phone IMEI, hardware serial number, SIM serial number and IMSI, handset phone number etc are shared with Apple and Google,” the paper says. “Both iOS and Google Android transmit telemetry, despite the user explicitly opting out of this.”
These transmissions occur even when the iOS Analytics & Improvements option is turned off and the Android Usage & Diagnostics option is turned off.
Such data may be considered personal information under privacy rules, depending upon the applicable laws and whether it can be associated with an individual. It can also have legitimate uses.
Of the two mobile operating systems, Android is claimed to be the more chatty: According to Leith, “Google collects a notably larger volume of handset data than Apple.”
Within 10 minutes of starting up, a Google Pixel handset sent about 1MB of data to Google, compared to 42KB of data sent to Apple in a similar startup scenario. And when the handsets sit idle, the Pixel will send about 1MB every 12 hours, about 20x more than the 52KB sent over the same period by an idle iPhone.
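Measurements like these are typically gathered by routing a handset’s traffic through an intercepting proxy and tallying bytes per destination. As a rough illustration of the idea, here is a generic mitmproxy addon; it is not Leith’s actual test harness, and it counts only request body bytes.

```python
# Generic mitmproxy addon: count request body bytes per destination host.
# Run with: mitmproxy -s telemetry_tally.py (handset configured to use the proxy).
from collections import Counter
from mitmproxy import http

class TelemetryTally:
    def __init__(self):
        self.bytes_to = Counter()

    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        self.bytes_to[host] += len(flow.request.raw_content or b"")
        print(f"{host}: {self.bytes_to[host]:,} bytes so far")

addons = [TelemetryTally()]
```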
[…]
Leith’s tests excluded data related to services selected by device users, like those related to search, cloud storage, maps, and the like. Instead, they focused on the transmission of data shared when there’s no logged-in user, including IMEI number, hardware serial number, SIM serial number, phone number, device IDs (UDID, Ad ID, RDID, etc.), location, telemetry, cookies, local IP address, device Wi-Fi MAC address, and nearby Wi-Fi MAC addresses.
This last category is noteworthy because it has privacy implications for other people on the same network. As the paper explains, iOS shares additional data: the handset Bluetooth UniqueChipID, the Secure Element ID (used for Apple Pay), and the Wi-Fi MAC addresses of nearby devices, specifically other devices using the same network gateway.
“When the handset location setting is enabled, these MAC addresses are also tagged with the GPS location,” the paper says. “Note that it takes only one device to tag the home gateway MAC address with its GPS location and thereafter the location of all other devices reporting that MAC address to Apple is revealed.”
[…]
Google also has a plausible fine-print justification: Leith notes that Google’s analytics options menu includes the text, “Turning off this feature doesn’t affect your device’s ability to send the information needed for essential services such as system updates and security.” However, Leith argues that this “essential” data is extensive and beyond reasonable user expectations.
As for Apple, you might think a company that proclaims “What happens on your iPhone stays on your iPhone” on billboards, and “Your data. Your choice,” on its website would want to explain its permission-defying telemetry. Yet the iPhone maker did not respond to a request for comment.
News that Ubiquiti’s cloud servers had been breached emerged on January 11, 2021, when the company emailed customers the text found in this support forum post. That missive stated: “We recently became aware of unauthorized access to certain of our information technology systems hosted by a third-party cloud provider.”
That announcement continued, “We have no indication that there has been unauthorized activity with respect to any user’s account,” but also recommended customers change their passwords because if their records had been accessed, hashed and salted passwords, email addresses, and even physical addresses and phone numbers could be at risk.
An update on Wednesday this week stated an investigation by outside experts “identified no evidence that customer information was accessed, or even targeted,” however.
Crucially, the update also revealed that someone “unsuccessfully attempted to extort the company by threatening to release stolen source code and specific IT credentials.” The update does not suggest the extortion attempt was fanciful.
Ubiquiti has not said when the external experts decided customer data was untouched. Which leaves the company in the interesting position of perhaps knowing its core IP has leaked, and not disclosing that, while also knowing that customer data is safe and not disclosing that, either.
The update contains another scary nugget in this sentence: “Please note that nothing has changed with respect to our analysis of customer data and the security of our products since our notification on January 11.”
But the January 11 notification makes no mention of “the security of our products.”
The update on Wednesday was published two days after Krebs On Security reported that it had seen a letter from a whistleblower to the European Data Protection Supervisor that alleges Ubiquiti has not told the whole truth about the incident.
Krebs said the letter described the attack on Ubiquiti as “catastrophically worse than reported.”
“The breach was massive, customer data was at risk, access to customers’ devices deployed in corporations and homes around the world was at risk,” the letter reportedly claimed, adding that Ubiquiti’s legal team “silenced and overruled efforts to decisively protect customers.”
The whistleblower separately claimed that whoever broke into Ubiquiti’s Amazon-hosted servers could have swiped cryptographic secrets for customers’ single sign-on cookies and remote device access, internal source code, and signing keys – far more than the Wi-Fi box maker disclosed in January. The intruder, it is said, obtained a Ubiquiti IT worker’s privileged credentials, got root access to the business’s AWS systems, and thus had a potential free run of its cloud-hosted storage and databases.
Backdoors were apparently stashed in the servers, too, and, as Ubiquiti acknowledged this week, a ransom was demanded to keep quiet about the break-in.
[…]
The update ends with another call for customers to refresh their passwords and enable two-factor authentication. The Register fancies some readers may also consider refreshing their Wi-Fi supplier. ®
PS: It’s not been a great week for Ubiquiti: it just promised to remove house ads it added to the web-based user interface of its UniFi gear.
A tech CEO who lied to investors to get funding and then blew millions of it on maintaining a luxury lifestyle, which included private jets and top seats at sporting events, has been sentenced to just over eight years in prison.
Daniel Boice, 41, set up what he claimed would be the “Uber of private investigators,” called Trustify, in 2015. He managed to pull in over $18m in funding from a range of investors by lying about how successful the business was.
According to the criminal indictment [PDF] against him, investors received detailed financial statements that claimed Trustify was pulling in $500,000 a month and had hundreds of business relationships that didn’t exist. Boice also emailed, called, and texted potential investors claiming the same. But, prosecutors say, the truth was that the biz was making “significantly less” and the documentation was all fake.
The tech upstart started to collapse in November 2018 when losses mounted to the point where Boice was unable to pay his staff. When they complained, he grew angry, fired them, and cut off all company email and instant messaging accounts, they allege in a separate lawsuit [PDF] demanding unpaid wages.
Even as Trustify was being evicted from its office, however, Boice continued to lie to investors, claiming he had $18m in the bank when accounts show he had less than $10,000. Finally in 2019 the company was placed into corporate receivership, leading to over $18m in losses to investors and over $250,000 in unpaid wages.
As well as creating false income and revenue documents, Boice was found to have faked an email from one large investor saying that it was going to invest $7.5m in the business that same day – and then forwarded it to another investor as proof of interest. That investor then sank nearly $2m into the business.
Profligate
While the business was failing, however, Boice used millions invested in it to fund his own lifestyle. He put down deposits on two homes in the US – a $1.6m house in Virginia and a $1m beach house in New Jersey – using company funds. He also paid for a chauffeur, house manager, and numerous other personal expenses with Trustify cash. More money was spent on holidays, an $83,000 private jet flight to Vermont, and over $100,000 in seats at various sporting events. His former employees also allege in a separate lawsuit that he spent $600,000 on a documentary about him and his wife.
When you buy an NFT for potentially as much as an actual house, in most cases you’re not purchasing an artwork or even an image file. Instead, you are buying a little bit of code that references a piece of media located somewhere else on the internet. This is where the problems begin. Ed Clements is a community manager for OpenSea who fields these kinds of problems daily. In an interview, he explained that digital artworks themselves are not immutably registered “on the blockchain” when a purchase is made. When you buy an artwork, rather, you’re “minting” a new cryptographic signature that, when decoded, points to an image hosted elsewhere. This could be a regular website, or it might be the InterPlanetary File System, a large peer-to-peer file storage system.
Clements distinguished between the NFT artwork (the image) and the NFT, which is the little cryptographic signature that actually gets logged. “I use the analogy of OpenSea and similar platforms acting like windows into a gallery where your NFT is hanging,” he said. “The platform can close the window whenever they want, but the NFT still exists and it is up to each platform to decide whether or not they want to close their window.” […] “Closing the window” on an NFT isn’t difficult. NFTs are rendered visually only on the front-end of a given marketplace, where you see all the images on offer. All the front-end code does is sift through the alphanumeric soup on the blockchain to produce a URL that links to where the image is hosted, or, less commonly, metadata which describes the image. According to Clements: “the code that finds the information on the blockchain and displays the images and information is simply told, ‘don’t display this one.'”
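That look-up is straightforward to reproduce. Below is a minimal web3.py sketch of what a marketplace front-end does for an ERC-721: read the token’s URI from the chain, then fetch the metadata and image from wherever it points. The RPC endpoint, contract address, and token id are placeholders, not a real listing.

```python
# What "displaying an NFT" usually means: the signature lives on-chain, the
# image lives behind whatever URL tokenURI() returns.
import requests
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example.org"))  # placeholder RPC node
ERC721_ABI = [{
    "name": "tokenURI", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "string"}],
}]

# Placeholder contract address and token id, for illustration only.
nft = w3.eth.contract(address="0x0000000000000000000000000000000000000000", abi=ERC721_ABI)
uri = nft.functions.tokenURI(1234).call()   # on-chain: the pointer
metadata = requests.get(uri).json()         # off-chain: the actual metadata
print(metadata.get("image"))                # the part that can disappear
```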
An important point to reiterate is that while NFT artworks can be taken down, the NFTs themselves live inside Ethereum. This means that the NFT marketplaces can only interact with and interpret that data, but cannot edit or remove it. As long as the linked image hasn’t been removed from its source, an NFT bought on OpenSea could still be viewed on Rarible, SuperRare, or whatever — they are all just interfaces to the ledger. The kind of suppression detailed by Clements is likely the explanation for many cases of “missing” NFTs, such as one case documented on Reddit when user “elm099” complained that an NFT called “Big Boy Pants” had disappeared from his wallet. In this case, the user could see the NFT transaction logged on the blockchain, but couldn’t find the image itself. If an NFT artwork was actually removed at the source, rather than suppressed by a marketplace, it would not display no matter which website you used. If you saved the image to your phone before it was removed, you could gaze at it while absorbing the aura of a cryptographic signature displayed on a second screen, but that could lessen the already-tenuous connection between NFT and artwork. If you’re unable to find a record of the token itself on the Ethereum blockchain, it “has to do with even more arcane Ethereum minutiae,” writes Ben Munster via Motherboard. He explains: “NFTs are generally represented by a form of token called the ERC-721. It’s just as simple to locate this token’s whereabouts as ether (Ethereum’s in-house currency) and other tokens such as ERC-20s. The NFT marketplace SuperRare, for instance, sends tokens directly to buyers’ wallets, where their movements can be tracked rather easily. The token can then generally be found under the ERC-721 tab. OpenSea, however, has been experimenting with a new token variant: the ERC-1155, a ‘multitoken’ that designates collections of NFTs.
This token standard, novel as it is, isn’t yet compatible with Etherscan. That means ERC-1155s saved on Ethereum don’t show up, even if we know they are on the blockchain because the payments record is there, and the ‘smart contracts’ which process the sale are designed to fail instantly if the exchange can’t be made. […]”
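The practical difference Munster describes comes down to the contract interfaces themselves. In the hedged sketch below (web3.py again, with placeholder addresses), an ERC-721 contract can be asked directly who owns a given token ID, while an ERC-1155 contract only answers balance queries for a specific owner-and-id pair, which is why an explorer built around the older standard has nothing to call and nothing to show:

```python
# Sketch of why explorers must treat the two standards differently.
# Addresses are placeholders; ABIs are trimmed to the single relevant call.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.com"))  # hypothetical endpoint
WALLET = "0x0000000000000000000000000000000000000000"            # placeholder owner

# ERC-721: every token ID has exactly one owner, queryable directly.
erc721 = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001",
    abi=[{"name": "ownerOf", "type": "function", "stateMutability": "view",
          "inputs": [{"name": "tokenId", "type": "uint256"}],
          "outputs": [{"name": "", "type": "address"}]}],
)
print(erc721.functions.ownerOf(1234).call())

# ERC-1155: there is no ownerOf. You can only ask for the balance of a
# specific (owner, id) pair, so an explorer has to index TransferSingle /
# TransferBatch events to even know which pairs are worth asking about.
erc1155 = w3.eth.contract(
    address="0x0000000000000000000000000000000000000002",
    abi=[{"name": "balanceOf", "type": "function", "stateMutability": "view",
          "inputs": [{"name": "account", "type": "address"},
                     {"name": "id", "type": "uint256"}],
          "outputs": [{"name": "", "type": "uint256"}]}],
)
print(erc1155.functions.balanceOf(WALLET, 1234).call())
```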
In closing, Munster writes: “This is all illustrative of a common problem with Ethereum and cryptocurrencies generally, which despite being immutable and unhackable and abstractly perfect can only be taken advantage of via unreliable third-party applications.”
For years we've talked about how the fact that no one really understands privacy leads to very bad attempts at regulating privacy, in ways that do more harm than good. They often don't do anything that actually protects privacy, and instead screw up lots of other important things, from competition to free speech. In fact, in some ways there's a big conflict between open internet systems and privacy. There are ways to get around that, usually by moving data out of centralized silos toward the ends of the network, but that rarely happens in practice. Going back more than thirteen years, we were writing about the inherent conflict between Facebook's (then) open social graph and privacy. Yet at the time, Facebook was cheered on for opening up its social graph. It was creating a more "open" internet, an internet that others could build upon.
But, of course, over the years things have changed. A lot. In 2018, after the Cambridge Analytica scandal, Mark Zuckerberg more or less admitted that the world was telling Facebook to lock everything down again:
I do think early on on the platform we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences.
As we pointed out in response, this was worrisome thinking, because it would likely take us away from a better world in which data is controlled by end users. Instead, many people have come to think that "protecting privacy" means making the big internet companies lock down our data, rather than the much better approach of giving us full control over our own data. Those are two different things that only sometimes look alike.
I say all of that as preamble in suggesting people read an excellent Protocol article by Issie Lapowsky, which — in a very thoughtful and nuanced way — highlights the unfortunate conflict between academic researchers trying to study the big internet companies and the companies’ insistence that they need to keep data private. We’ve touched on this topic before ourselves, in covering the still ongoing fight between Facebook and NYU regarding NYU’s Ad Observer project.
That project involves getting individuals to install a browser extension that shares data back to NYU about what ads the user sees. Facebook insists that it violates its privacy rules, and points to how much trouble it got in (and the massive fines it paid) over the Cambridge Analytica mess. Though, as we explained then, the scenarios are quite different.
Lapowsky's article goes further, noting that Facebook told her the Ad Observer project was collecting data without users' permission, a claim that worried the PhD student working on the project. It turns out the claim was false: the project only collects data from users who install the extension and explicitly agree to the collection.
But the story, and others in the article, highlight an unfortunate situation: the somewhat haphazard demands on the big internet companies to "protect privacy" are now providing those same companies convenient excuses to shut down academic research into their practices. Some of the underlying concerns are legitimate. As the article notes, for example, there were real questions about how much ad-targeting information Facebook should share. That information could be really important to those studying disinformation or civil rights issues. But it could also be used in nefarious ways:
Facebook released an API for its political ad archive and invited the NYU team to be early testers. Using the API, Edelson and McCoy began studying the spread of disinformation and misinformation through political ads and quickly realized that the dataset had one glaring gap: It didn’t include any data on who the ads were targeting, something they viewed as key to understanding advertisers’ malintent. For example, last year, the Trump campaign ran an ad envisioning a dystopian post-Biden presidency, where the world is burning and no one answers 911 calls due to “defunding of the police department.” That ad, Edelson found, had been targeted specifically to married women in the suburbs. “I think that’s relevant context to understanding that ad,” Edelson said.
But Facebook was unwilling to share targeting data publicly. According to Satterfield, that could make it too easy to reverse-engineer a person’s interests and other personal information. If, for instance, a person likes or comments on a given ad, it wouldn’t be too hard to check the targeting data on that ad, if it were public, and deduce that that person meets those targeting criteria. “If you combine those two data sets, you could potentially learn things about the people who engaged with the ad,” Satterfield said.
A legitimate concern… but one that also allows the company to shield data that could be really useful to academics. Of course, it doesn't help that so many people are so distrustful of these big companies that no matter what they do, it will be portrayed as evil, sometimes by the very same people. Just a few weeks ago, we saw people screaming both when the big internet companies appeared willing to cave in and pay Rupert Murdoch under the Australian link tax… and when they refused to. Both options were painted as evil.
So, sharing data will inevitably be presented by some as violating people’s privacy, while not sharing data will be presented as hiding from researchers and trying to avoid transparency. And there’s probably some truth in every angle to these stories.
Of course, that all leaves out a better approach these companies could take: give more power to the end users themselves to control their own data. Let the users decide what data is shared and what is not. Let the users decide where and how that data is stored (even if it's not on the platform itself). But instead, we just have people yelling that these companies must both protect everyone's privacy and give researchers access to see what they're doing with all this data. I don't think the "middle ground" laid out in the article is all that tenable: right now it amounts to creating special exceptions in which academics are "allowed," under strict conditions, to get access to that data.
The problem with that framing is that the big internet companies still end up in control of the data, rather than the end users. The situation with NYU is a perfectly good example. Facebook shouldn't have to share data from people who don't consent. But the Ad Observer only collects data from people who actively consent to handing over their own data, and Facebook shouldn't be in the business of blocking that, even if it's inevitable that some reporter at some future date will try to spin it into a story claiming that Facebook "violated" privacy because these researchers convinced people to turn over their own info.
The argument Mike makes above is basically a plea for what Sir Tim Berners-Lee, inventor of the World Wide Web, is already pursuing through his Solid project and its commercial arm, Inrupt: user data is placed in personal pods, and the user determines what data is given to whom (a toy sketch of the idea follows the questions below).
It’s an idealistic scenario that seems to ignore a few things:
Who hosts the pods? The host can usually see into the data, or at any rate gather metadata (which is often more valuable than the data itself). And who pays for the hosting?
Will people understand, and be willing to take the time, to curate access to their pod? People already have trouble finding the privacy settings on their social networks, and this promises to be more complex.
If a site requires access to data in a pod, won't people blindly click "accept" without understanding that they are giving their data away? Or will they be coerced into giving up data they'd rather not share because there are no alternatives to using the service?
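Those questions aside, it may help to make the pod model concrete. Below is a toy, purely hypothetical Python sketch (none of it is the actual Solid or Inrupt API); its only point is the shape of the idea: the data sits with the user, and every read passes through an access-control list that only the owner edits.

```python
# A toy, purely hypothetical sketch of the pod model -- not the actual
# Solid/Inrupt API. Data lives with the user, and every read must pass
# an access-control list that only the owner can change.
from dataclasses import dataclass, field


@dataclass
class Pod:
    owner: str
    data: dict = field(default_factory=dict)   # resource name -> user data
    acl: dict = field(default_factory=dict)    # resource name -> set of allowed apps

    def grant(self, app: str, resource: str) -> None:
        # Only an action by the owner ever widens access.
        self.acl.setdefault(resource, set()).add(app)

    def read(self, app: str, resource: str):
        if app not in self.acl.get(resource, set()):
            raise PermissionError(f"{app} may not read {resource}")
        return self.data[resource]


pod = Pod(owner="alice", data={"contacts": ["bob", "carol"]})
pod.grant("photo-app.example", "contacts")
print(pod.read("photo-app.example", "contacts"))  # permitted by the ACL
# pod.read("ad-network.example", "contacts")      # would raise PermissionError
```

Note that whoever actually executes this Pod object (the host) sees everything inside it, which is exactly the first question above.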
OpenSSL, the most widely used software library for implementing website and email encryption, has patched a high-severity vulnerability that makes it easy for hackers to completely shut down huge numbers of servers.
[…]
On Thursday, OpenSSL maintainers disclosed and patched a vulnerability that causes servers to crash when they receive a maliciously crafted request from an unauthenticated end user. CVE-2021-3449, as the denial-of-service vulnerability is tracked, is the result of a null pointer dereference bug. Cryptographic engineer Filippo Valsorda said on Twitter that the flaw probably could have been discovered earlier.
“Anyway, sounds like you can crash most OpenSSL servers on the Internet today,” he added.
Hackers can exploit the vulnerability by sending a server a maliciously formed ClientHello message when renegotiating the secure connection that was established during the initial handshake.
“An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client,” maintainers wrote in an advisory. “If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension then a NULL pointer dereference will result, leading to a crash and a denial of service attack.”
The maintainers have rated the severity high. Researchers reported the vulnerability to OpenSSL on March 17. Nokia developers Peter Kästle and Samuel Sapalski provided the fix.
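Both of this week's fixes shipped in OpenSSL 1.1.1k, so one quick sanity check is to see which build your runtime links against. A small Python sketch follows; note that a server binary may be linked against a different OpenSSL than your Python interpreter is:

```python
# Report which OpenSSL this Python is linked against. CVE-2021-3449 (and
# CVE-2021-3450, below) were fixed in OpenSSL 1.1.1k.
import ssl

print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 1.1.1j  16 Feb 2021"

# OPENSSL_VERSION_INFO is (major, minor, fix, patch, status); 'k' is patch 11.
if ssl.OPENSSL_VERSION_INFO[:3] == (1, 1, 1) and ssl.OPENSSL_VERSION_INFO[3] < 11:
    print("linked OpenSSL predates 1.1.1k -- update to pick up the fixes")
```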
Certificate verification bypass
OpenSSL also fixed a separate vulnerability that, in edge cases, prevented apps from detecting and rejecting TLS certificates that aren't digitally signed by a browser-trusted certificate authority. The vulnerability, tracked as CVE-2021-3450, involves the interplay between the X509_V_FLAG_X509_STRICT verification flag and several other parameters.
Thursday’s advisory explained:
If a “purpose” has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named “purpose” values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application.
In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose.
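Python's ssl module exposes the same OpenSSL flag, which makes the affected configuration easy to picture. A minimal sketch, not a recommendation either way; per the advisory, whether a real application was exposed also depends on the "purpose" checks described above:

```python
# The opt-in the advisory describes: strict X.509 chain verification.
# Python exposes OpenSSL's X509_V_FLAG_X509_STRICT as ssl.VERIFY_X509_STRICT.
import ssl

ctx = ssl.create_default_context()          # ordinary verified client context
ctx.verify_flags |= ssl.VERIFY_X509_STRICT  # the flag CVE-2021-3450 hinges on

# On an affected build (1.1.1h through 1.1.1j), this stricter mode could
# overwrite the result of the valid-CA check; 1.1.1k restores the intended
# behavior. Applications that never set the flag were not affected.
```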
An F-35B Joint Strike Fighter shot itself in the skies above Arizona earlier this month, doing at least $2.5 million in damage. The pilot was unharmed and successfully landed the jet. The Pentagon isn’t quite sure how or why the jet shot itself and the incident is still under investigation.
As first reported by Military.com, the F-35 was flying a training mission at night on March 12 at the Yuma Range Complex in Arizona when it shot itself. This particular F-35 carries an externally mounted Gatling gun that fires 25mm armor-piercing, high-explosive rounds. Sometime during the training, the gun discharged and a round exploded, damaging the underside of the jet.
The pilot landed the jet, and a Navy investigation classified the accident as Class A. Class A mishaps are the most severe: the classification is used when someone dies or is permanently disabled, the aircraft is destroyed, or property damage reaches $2.5 million or more. "The mishap did not result in any injury to personnel, and an investigation of the incident is currently taking place," Marine Corps spokesperson Captain Andrew Wood told Military.com.
Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation)
Impact: Processing maliciously crafted web content may lead to universal cross site scripting. Apple is aware of a report that this issue may have been actively exploited.
Description: This issue was addressed by improved management of object lifetimes.
CVE-2021-1879: Clement Lecigne of Google Threat Analysis Group and Billy Leonard of Google Threat Analysis Group