The Linkielist

Linking ideas with the world

Slack Will Begin Deleting Older Content From Free Workspaces

Slack announced a significant change to its platform, saying it will “begin deleting messages and files more than one year old from free workspaces on a rolling basis.”

Slack’s prior policy involved keeping messages and files for the lifetime of a free workspace, although accessing that full history required switching to a paid account. Under the new policy, Slack reserves the right to delete content from free workspaces after one year.

Slack will no longer keep messages and files for the lifetime of your free workspace. Starting August 26, 2024, Customer Data — such as messages and file history — older than one year may be deleted on a rolling basis from workspaces on the free plan, following the terms described in the Main Services Agreement and Trust and Compliance Documentation.

If you choose to remain on a free workspace, you’ll have full access to the past 90 days of message and file history, and the remaining 275 days will become available should you upgrade to a paid plan. If you decide to upgrade, we’ll store messages and files based on your chosen retention period, with an option to keep all history.

Users interested in keeping their full history of content should upgrade to a paid workspace before August 26, 2024. Once deletion occurs, messages and files cannot be recovered.
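The retention rules above can be sketched in a few lines. This is our own illustration of the policy as described (function and constant names are hypothetical, and real retention is enforced server-side by Slack, not by day arithmetic like this):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical model of Slack's new free-tier retention rules:
# content older than one year may be deleted; only the last 90 days
# are visible on the free plan; the rest unlocks on upgrade.
DELETE_AFTER = timedelta(days=365)
FREE_VISIBLE = timedelta(days=90)

def classify_message(sent_at: datetime, now: datetime) -> str:
    """Return how a message is treated under the new policy."""
    age = now - sent_at
    if age > DELETE_AFTER:
        return "deletable"              # may be permanently removed
    if age > FREE_VISIBLE:
        return "hidden-until-upgrade"   # the "remaining 275 days"
    return "visible"

now = datetime(2024, 8, 26, tzinfo=timezone.utc)
print(classify_message(now - timedelta(days=30), now))   # visible
print(classify_message(now - timedelta(days=200), now))  # hidden-until-upgrade
print(classify_message(now - timedelta(days=400), now))  # deletable
```

Note that 90 visible days plus the 275 locked days add up exactly to the one-year deletion window.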

Source: Slack Will Begin Deleting Older Content From Free Workspaces

This is a problem with cloud services – you do not own or manage the data or the rules with which it is kept.

Microsoft admits no guarantee that UK policing data will stay in the UK or stay private at all – are you looking, EU member states?!

According to correspondence released by the Scottish Police Authority (SPA) under freedom of information (FOI) rules, Microsoft is unable to guarantee that data uploaded to a key Police Scotland IT system – the Digital Evidence Sharing Capability (DESC) – will remain in the UK as required by law.

While the correspondence has not been released in full, the disclosure reveals that data hosted in Microsoft’s hyperscale public cloud infrastructure is regularly transferred and processed overseas; that the data processing agreement in place for the DESC did not cover UK-specific data protection requirements; and that while the company has the ability to make technical changes to ensure data protection compliance, it is only making these changes for DESC partners and not other policing bodies because “no one else had asked”.

The correspondence also contains acknowledgements from Microsoft that international data transfers are inherent to its public cloud architecture. As a result, the issues identified with the Scottish Police will equally apply to all UK government users, many of whom face similar regulatory limitations on the offshoring of data.

[…]

Nicky Stewart, a former ICT chief at the UK government’s Cabinet Office, said most people with knowledge of how hyperscale public cloud works have known about these data sovereignty issues for years.

“It’s clearly going to be a concern to any police force that’s using Microsoft, but it’s wider than that,” she said, adding that while Part 3 of the Data Protection Act (DPA) 2018 clearly stipulates that law enforcement data needs to be kept in the UK, other kinds of public sector data must also be kept sovereign under the new G-Cloud 14 framework, which has introduced a UK-only data hosting requirement.

[…]

Microsoft’s commitment not to access customer data without permission is further complicated by the terms of service, which make that promise strictly conditional: the company gives itself the ability to access data without permission if it either has to fulfil a legal obligation, such as responding to government requests for data, or needs to maintain the service.

[…]

He added that given Microsoft’s disclosures to the SPA, “it must now be obvious that M365 and Azure Cloud services do not meet the two key requirements” to be a legal processor or sub-processor of law enforcement data under the DPA 18.

“These are: one, to conduct all processing and support activities 100% from inside the UK; and two, to only make an international transfer if they are specifically instructed to make the particular transfer by the controller,” he said.

“Microsoft have confirmed that they do not and cannot commit to requirement one for their M365 services, or indeed for most of the services they operate and support in Azure. They have also said that they cannot ‘operationalise’ individual requests as required of them under section 59(7) of the act, thus failing to meet requirement two.

“There can be no clearer evidence than Microsoft’s own clarifications that they cannot meet the legal requirements for a processor or sub-processor of law enforcement data.”

Stewart said: “If it’s not possible to understand the simple question, ‘do you know where your data is all the time?’, then you probably shouldn’t be putting your data in that platform.”

[…]

Source: Microsoft admits no guarantee of sovereignty for UK policing data | Computer Weekly

With the EU and also some EU domain name registrars (looking at you, SIDN) working with these crazy cloud providers, it should have been blindingly obvious that putting data in a US cloud provider would open it up to US spying and a complete lack of data ownership. However, idiots will be idiots.

Google Cloud accidentally deletes UniSuper’s online account with 620k customers due to ‘unprecedented misconfiguration’

More than half a million UniSuper fund members went a week with no access to their superannuation accounts after a “one-of-a-kind” Google Cloud “misconfiguration” led to the financial services provider’s private cloud account being deleted, Google and UniSuper have revealed.

Services began being restored for UniSuper customers on Thursday, more than a week after the system went offline. Investment account balances would reflect last week’s figures and UniSuper said those would be updated as quickly as possible.

The UniSuper CEO, Peter Chun, wrote to the fund’s 620,000 members on Wednesday night, explaining that the outage was not the result of a cyber-attack and that no personal data had been exposed. Chun pinpointed Google’s cloud service as the issue.

In an extraordinary joint statement from Chun and the global CEO for Google Cloud, Thomas Kurian, the pair apologised to members for the outage, and said it had been “extremely frustrating and disappointing”.

They said the outage was caused by a misconfiguration that resulted in UniSuper’s cloud account being deleted, something that had never happened to Google Cloud before.

“Google Cloud CEO, Thomas Kurian has confirmed that the disruption arose from an unprecedented sequence of events whereby an inadvertent misconfiguration during provisioning of UniSuper’s Private Cloud services ultimately resulted in the deletion of UniSuper’s Private Cloud subscription,” the pair said.

“This is an isolated, ‘one-of-a-kind occurrence’ that has never before occurred with any of Google Cloud’s clients globally. This should not have happened. Google Cloud has identified the events that led to this disruption and taken measures to ensure this does not happen again.”

UniSuper normally has duplication in place across two geographies, to ensure that if one service goes down or is lost it can be easily restored. But because the fund’s entire cloud subscription was deleted, the deletion occurred across both geographies.

UniSuper was able to eventually restore services because the fund had backups in place with another provider.

“These backups have minimised data loss, and significantly improved the ability of UniSuper and Google Cloud to complete the restoration,” the pair said.
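The distinction that saved UniSuper is worth spelling out: replication inside one cloud subscription is not a backup, because an operation that destroys the subscription destroys every replica with it, while a copy held at an independent provider is untouched. A toy sketch (all class and key names are hypothetical):

```python
# Toy model: deleting a cloud subscription removes every geo-replica,
# while a copy held at an independent provider survives.

class CloudSubscription:
    def __init__(self, regions):
        self.replicas = {r: {} for r in regions}

    def write(self, key, value):
        for store in self.replicas.values():  # replicated to all regions
            store[key] = value

    def delete_subscription(self):
        self.replicas = {}  # takes every region's replica with it

class IndependentBackup:
    def __init__(self):
        self.copies = {}

    def snapshot(self, sub):
        # copies data out-of-band; unaffected by the subscription's fate
        for store in sub.replicas.values():
            self.copies.update(store)
            break

sub = CloudSubscription(["australia-southeast1", "australia-southeast2"])
sub.write("member:42", "balance=100")
backup = IndependentBackup()
backup.snapshot(sub)

sub.delete_subscription()          # the "unprecedented misconfiguration"
print(sub.replicas)                # {} -- both geographies gone
print(backup.copies["member:42"])  # balance=100 -- restorable
```

This is the classic 3-2-1 backup argument: redundancy only counts if at least one copy sits outside the failure domain of the others.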

[…]

Source: Google Cloud accidentally deletes UniSuper’s online account due to ‘unprecedented misconfiguration’ | Superannuation | The Guardian

Sony Shuts Down LittleBigPlanet 3 Servers, destroying Fan Creations – don’t trust the cloud

Sony has indefinitely decommissioned the PlayStation 4 servers for puzzle platformer LittleBigPlanet 3, the company announced in an update to one of its support pages. The permanent shutdown comes just months after the servers were temporarily taken offline due to ongoing issues. Fans now fear potentially hundreds of thousands of player creations not saved locally will be lost for good.

“Due to ongoing technical issues which resulted in the LittleBigPlanet 3 servers for PlayStation 4 being taken offline temporarily in January 2024, the decision has been made to keep the servers offline indefinitely,” Sony wrote in the update, first spotted by Delisted Games. “All online services including access to other players’ creations for LittleBigPlanet 3 are no longer available.”

The 2014 sequel starring Sackboy and other crafted creatures was beloved for the creativity and flexibility it afforded players to create their own platforming levels. The game’s offline features will remain available, as will user-generated content stored locally. Players won’t be able to share them, though, or access any data that was stored on Sony’s servers, which likely made up the majority of user-generated content for the game.

While the servers for the PS3 version of the game were originally shut down in 2021 due to ongoing DDOS attacks, the PS4 servers remained open up until January of 2024 when malicious mods threatened the game’s security. “We are temporarily taking the LittleBigPlanet servers offline whilst we investigate a number of issues that have been reported to us,” the game’s Twitter account announced at the time. “If you have been impacted by these issues, please be rest assured that we are aware of them and are working to resolve them for all affected.”

Some players were worried the closure might become permanent. It now seems they were right.

“Nearly 16 years worth of user generated content, millions of levels, some with millions of plays and hearts,” wrote one long-time player, Weeni-Tortellini, on Reddit in January. “Absolutely iconic levels locked away forever with no way to experience them again. To me, the servers shutting down is a hefty chunk bitten out of LittleBigPlanet’s history. I personally have many levels I made as a kid. Digital relics of what made me as creative as i am today, and The only access to these levels i have is thru the servers. I would be devastated if I could never experience them again.”

The permanent shutdown comes as online services across many other older games are retired as well. Nintendo took online multiplayer for Wii U and 3DS games offline earlier this month, impacting games like Splatoon and Animal Crossing: New Leaf. Ubisoft came under fire last week for not just shutting off servers for always-online racing game The Crew, but revoking PC players’ licenses to the game itself as well.

“This is naturally a very sad day for all of us involved with LittleBigPlanet and I have no doubt that many feel the same,” tweeted community manager Steven Isbell. “I’m still here to listen to you all though and will take time over the coming weeks to reach out to the community and listen to anyone that wants to talk.”

Source: Sony Shuts Down LittleBigPlanet 3 Servers, Nuking Fan Creations

European Commission broke data protection law with Microsoft Office 365 – duh

The European Commission has been reprimanded for infringing data protection regulations when using Microsoft 365.

The rebuke came from the European Data Protection Supervisor (EDPS) and is the culmination of an investigation that kicked off in May 2021, following the Schrems II judgement.

According to the EDPS, the EC infringed several data protection regulations, including rules around transferring personal data outside the EU / European Economic Area (EEA).

According to the organization, “In particular, the Commission has failed to provide appropriate safeguards to ensure that personal data transferred outside the EU/EEA are afforded an essentially equivalent level of protection as guaranteed in the EU/EEA.

“Furthermore, in its contract with Microsoft, the Commission did not sufficiently specify what types of personal data are to be collected and for which explicit and specified purposes when using Microsoft 365.”

While the concerns are more about EU institutions and transparency, they should also serve as notice to any company doing business in the EU / EEA to take a very close look at how it has configured Microsoft 365 regarding the EU Data Protection Regulations.

[…]

Source: European Commission broke data protection law with Microsoft • The Register

Who knew? An American company running an American cloud product on American servers, and the EU was putting its data on it. Who would have thought that might end up in America?!

Millions of research papers at risk of disappearing from the Internet

More than one-quarter of scholarly articles are not being properly archived and preserved, a study of more than seven million digital publications suggests. The findings, published in the Journal of Librarianship and Scholarly Communication on 24 January, indicate that systems to preserve papers online have failed to keep pace with the growth of research output.

“Our entire epistemology of science and research relies on the chain of footnotes,” explains author Martin Eve, a researcher in literature, technology and publishing at Birkbeck, University of London. “If you can’t verify what someone else has said at some other point, you’re just trusting to blind faith for artefacts that you can no longer read yourself.”

[…]

The sample of DOIs included in the study was made up of a random selection of up to 1,000 registered to each member organization. Twenty-eight per cent of these works — more than two million articles — did not appear in a major digital archive, despite having an active DOI. Only 58% of the DOIs referenced works that had been stored in at least one archive. The other 14% were excluded from the study because they were published too recently, were not journal articles or did not have an identifiable source.

Preservation challenge

Eve notes that the study has limitations: namely that it tracked only articles with DOIs, and that it did not search every digital repository for articles (he did not check whether items with a DOI were stored in institutional repositories, for example).

[…]

“Everybody thinks of the immediate gains they might get from having a paper out somewhere, but we really should be thinking about the long-term sustainability of the research ecosystem,” Eve says. “After you’ve been dead for 100 years, are people going to be able to get access to the things you’ve worked on?”

doi: https://doi.org/10.1038/d41586-024-00616-5

Source: Millions of research papers at risk of disappearing from the Internet

Satellites Step Up After Red Sea Internet Cables Get Severed

[…] Earlier this week, four out of 15 communication cables were cut, disrupting network traffic that flows through the Red Sea. The damaged cables affected 25% of traffic between Asia, Europe, and the Middle East, according to Hong Kong telecoms company HGC Global Communications. The cause of the damage is still unknown, and the company is working on a fix, which it referred to as an “exceptionally rare occurrence.” Although HGC did not reveal the cause behind the damaged cables, a U.S. National Security Council spokesperson blamed it on the anchor of a cargo ship that was sunk by the Houthi group in Yemen. The Houthis, however, issued a statement denying its involvement.

Regardless of the cause, satellite companies have stepped up by beaming connectivity from space to reroute some of that impacted traffic. Satellite operators such as Intelsat are providing backup connectivity to fill in the gaps for the severed cables, SpaceNews reported.

Intelsat has a fleet of 52 communication satellites in orbit, providing broadband internet and offering airline passengers inflight connectivity. Other companies, like Eutelsat OneWeb, SES, and, more famously, SpaceX are also in the business of beaming connectivity from Earth orbit.

The recent incident, although rare, does offer a glimpse into what a hybrid connectivity solution would look like, providing internet from both underwater cables, as well as orbital satellites. Subsea customers, or those getting internet from both ends, can restore their connectivity within 15 minutes should there be an issue with a terrestrial provider, Rhys Morgan, regional vice president for Intelsat, told SpaceNews.

[…]

Source: Satellites Step Up After Red Sea Internet Cables Get Severed

Wyze says camera breach let 13,000 customers briefly see into other people’s homes

Last week, co-founder David Crosby said that “so far” the company had identified 14 people who were able to briefly see into a stranger’s property because they were shown an image from someone else’s Wyze camera. Now we’re being told that the number of affected customers has ballooned to 13,000.

The revelation came from an email sent to customers entitled “An Important Security Message from Wyze,” in which the company copped to the breach and apologized, while also attempting to lay some of the blame on its web hosting provider AWS.

“The outage originated from our partner AWS and took down Wyze devices for several hours early Friday morning. If you tried to view live cameras or Events during that time, you likely weren’t able to. We’re very sorry for the frustration and confusion this caused.”

The breach, however, occurred as Wyze was attempting to bring its cameras back online. Customers were reporting seeing mysterious images and video footage in their own Events tab. Wyze disabled access to the tab and launched its own investigation.

As it did before, Wyze is chalking up the incident to “a third-party caching client library” that was recently integrated into its system.

This client library received unprecedented load conditions caused by devices coming back online all at once. As a result of increased demand, it mixed up device ID and user ID mapping and connected some data to incorrect accounts.
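Wyze hasn’t published the actual defect, but the failure it describes — a cache mixing up device-to-user mappings under a reconnect storm — has a familiar shape. A hypothetical sketch of one way it can happen, when cached content is keyed by device ID alone and a separately maintained device-to-user mapping goes stale (all names are our own, not Wyze’s code):

```python
# Hypothetical sketch: a cache keyed by device ID alone, paired with a
# device->user mapping rebuilt during a reconnect storm. If the mapping
# is overwritten with the wrong user, that user is served another
# home's thumbnail. Illustration only -- not Wyze's actual code.

thumbnail_cache = {}   # device_id -> thumbnail bytes
device_owner = {}      # device_id -> user_id (rebuilt as devices reconnect)

def device_reconnects(device_id, user_id, thumbnail):
    device_owner[device_id] = user_id
    thumbnail_cache[device_id] = thumbnail

def events_tab(user_id):
    # BUG: trusts the mutable mapping, so a mis-assigned entry leaks
    # a cached thumbnail to the wrong account.
    return [thumb for dev, thumb in thumbnail_cache.items()
            if device_owner.get(dev) == user_id]

device_reconnects("cam-1", "alice", b"alice-front-door")
# Under "unprecedented load", cam-1 is wrongly re-associated with bob:
device_owner["cam-1"] = "bob"
print(events_tab("bob"))  # [b'alice-front-door'] -- bob sees alice's camera
```

The general lesson is that cache keys for per-user data should include the user identity itself, not depend on a second lookup that can drift out of sync.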

But it was too late to prevent an estimated 13,000 people from getting an unauthorized peek at thumbnails from strangers’ homes. Wyze says that 1,504 people tapped to enlarge the thumbnail, and that a few of them caught a video they were able to view. It also claims that all impacted users have been notified of the security breach, and that over 99 percent of all of its customers weren’t affected.

[…]

Source: Wyze says camera breach let 13,000 customers briefly see into other people’s homes – The Verge

Which is why it’s better to store stuff on your own NAS hardware instead of some vendor’s cloud.

Apple Pay, Apple Card and Wallet were down for some users this morning – again

Apple’s financial services, including Apple Pay, Apple Cash, Apple Card and Wallet, experienced service disruptions for some users between 6:15 AM and 6:49 AM Eastern this morning, according to the company’s System Status page. As AppleInsider notes, it’s unclear how widespread the issues were, but the company has experienced intermittent Apple Pay issues earlier this year.

[…]

Source: Apple Pay, Apple Card and Wallet were down for some users this morning

Google calls Drive data loss “fixed,” locks forum threads saying otherwise

Google is dealing with its second “lost data” fiasco in the past few months. This time, it’s Google Drive, which has been mysteriously losing files for some people. Google acknowledged the issue on November 27, and a week later, it posted what it called a fix.

It doesn’t feel like Google is describing this issue correctly; the company still calls it a “syncing issue” with the Drive desktop app versions 84.0.0.0 through 84.0.4.0. Syncing problems would only mean files don’t make it to or from the cloud, and that doesn’t explain why people are completely losing files. In the most popular issue thread on the Google Drive Community forums, several users describe spreadsheets and documents going missing, which all would have been created and saved in the web interface, not the desktop app, and it’s hard to see how the desktop app could affect that. Many users peg “May 2023” as the time documents stopped saving. Some say they’ve never used the desktop app.

[…]

Google’s recovery instructions outline a few ways to attempt to “recover your files.” One is via a new secret UI in the Google Drive desktop app version 85.0.13.0 or higher. If you hold shift while clicking on the Drive system tray/menu bar icon, you’ll get a special debug UI with an option to “Recover from backups.” Google says, “Once recovery is complete, you’ll see a new folder on your desktop with the unsynced files named Google Drive Recovery.” Google doesn’t explain what this does or how it works.

Option No. 2 is surprising: use of the command line to recover files. The new Drive binary comes with flags for ‘--recover_from_account_backups’ and ‘--recover_from_app_data_path’, which tells us a bit about what is going on. When Google first acknowledged the issue, it warned users not to delete or move Drive’s app data folder. These flags from the recovery process make it sound like Google hopes your missing files will be in the Drive cache somewhere. Google also suggests trying Windows Backup or macOS Time Machine to find your files.

Google locked the issue thread on the Drive Community Forums at 170 replies before it was clear the problem was solved. It’s also marking any additional threads as “duplicates” and locking them.

[…]

Of the few replies before Google locked the thread, most suggested that Google’s fix did not work. One user calls the fix “complete BS,” adding, “The ‘solution’ doesn’t work for most people.” Another says, “Google Drive DELETED my files so they are not available for recovery. This ‘fix’ is not a fix!” There are lots of other reports of the fix not working, and not many that say they got their files back. The idea that Drive would have months-old copies of files in the app data folder is hard to believe.

[…]

Source: Google calls Drive data loss “fixed,” locks forum threads saying otherwise | Ars Technica

Months of Google Drive files disappearing randomly

Google Drive users are reporting files mysteriously disappearing from the service, with some netizens on the goliath’s support forums claiming six or more months of work have unceremoniously vanished.

The issue has been rumbling for a few days, with one user logging into Google Drive and finding things as they were in May 2023.

According to the poster, almost everything saved since then has gone, and attempts at recovery failed.

Others chimed in with similar experiences, and one claimed that six months of business data had gone AWOL.

There is little information regarding what has happened; some users reported that synchronization had simply stopped working, so the cloud storage was out of date. Others could get some of their information back by fiddling with cached files, although the limited advice on offer for the affected was to leave things well alone until engineers come up with a solution.

A message purporting to be from Google support also advised not to make changes to the root/data folder while engineers investigate the issue.

[…]

A reminder that just because files are being stored in the cloud, there is no guarantee that they are safe. European cloud hosting provider OVH suffered a disastrous fire in 2021 that left some customers scrambling for backups and disaster recovery plans.

[…]

Just because the files have been uploaded one day does not necessarily mean they will still be there – or recoverable – the next.

[…]

MatthewSt reports that he has a fix; obviously this is something worked out by a user rather than official advice, so caution is advised.

Source: The mystery of the disappearing Google Drive files • The Register

Rivian update bricks infotainment – corp comms respond quickly and publicly on Reddit

Hi All,

We made an error with the 2023.42 OTA update – a fat finger where the wrong build with the wrong security certificates was sent out. We cancelled the campaign and we will restart it with the proper software that went through the different campaigns of beta testing.

Service will be contacting impacted customers and will go through the resolution options. That may require physical repair in some cases.

This is on us – we messed up. Thanks for your support and your patience as we go through this.

* Update 1 (11/13, 10:45 PM PT): The issue impacts the infotainment system. In most cases, the rest of the vehicle systems are still operational. A vehicle reset or sleep cycle will not solve the issue. We are validating the best options to address the issue for the impacted vehicles. Our customer support team is prioritizing support for our customers related to this issue. Thank you.

*Update 2 (11/14, 11:30 AM PT): Hi all, As I mentioned yesterday, we identified an issue in our recent software update 2023.42.0 that impacted the infotainment system on a number of R1T and R1S vehicles. In most cases, the rest of the vehicle systems and the mobile app will remain functional. If you’re an impacted owner, you should have received an email and a text communication. We understand that this is frustrating and we are really sorry for this inconvenience. The team continues to actively work on the best possible solution to fix the impacted vehicles, and we will keep the community updated. In the meantime, our Service team is prioritizing this issue and you can reach out to them at 1-855-748-4265.

*Update 3 (11/14, 7 PM PT): We just emailed the impacted owners with next steps. The team managed to build a solution, and we will start rolling it out tomorrow.

*Update 4 (11/15 11:30 AM PT): the team has been able to build a solution that fixes the issue remotely. Roll out starting today. Thanks to the community for the support.

Source: 2023.42 OTA Update Issue : Rivian

As far as I am concerned, well done – everyone was kept informed and a fix for a tough problem was rolled out fairly quickly. Mistakes happen everywhere, so it’s more important that they are fixed and that people are informed.

It does, however, highlight the security issues of automatic updates.
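The root cause here — a build signed with the wrong security certificates — is exactly what OTA signature checks exist to catch. As a simplified sketch of the principle (real OTA systems use asymmetric certificates and a chain of trust; this HMAC stand-in, with hypothetical names and keys, only illustrates the refuse-if-signature-mismatch logic, not Rivian’s actual mechanism):

```python
import hashlib
import hmac

# Simplified stand-in for OTA signature checking: refuse to install a
# build whose signature doesn't verify against the key the device
# trusts. All names and keys are hypothetical.

DEVICE_TRUSTED_KEY = b"production-signing-key"

def sign_build(build: bytes, key: bytes) -> bytes:
    return hmac.new(key, build, hashlib.sha256).digest()

def install_update(build: bytes, signature: bytes) -> str:
    expected = sign_build(build, DEVICE_TRUSTED_KEY)
    if not hmac.compare_digest(expected, signature):
        return "rejected: wrong signing key"  # the 2023.42 scenario
    return "installed"

build = b"2023.42-ota-image"
good_sig = sign_build(build, DEVICE_TRUSTED_KEY)
bad_sig = sign_build(build, b"internal-test-key")  # the "fat finger"
print(install_update(build, good_sig))  # installed
print(install_update(build, bad_sig))   # rejected: wrong signing key
```

When the check fires mid-update, the device can be left in a state that only a re-flash (remote or physical) can fix, which matches the "bricked infotainment" outcome described above.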

Microsoft admits ‘power issue’ downed Azure in West Europe

Microsoft techies are trying to recover storage nodes for a “small” number of customers following a “power issue” on October 20 that triggered Azure service disruptions and ruined breakfast for those wanting to use hosted virtual machines or SQL DB.

The degradation began at 0731 UTC on Friday when Microsoft spotted the unspecified power problem, which affected infrastructure in one Availability Zone in the West Europe region. As such, businesses using VMs, Storage, App Service, or Cosmos and SQL DB suffered interruptions.

So what caused this unplanned downtime session? Microsoft says in an incident report on its Azure status history page: “Due to an upstream utility disturbance, we moved to generator power for a section of one datacenter at approximately 0731 UTC. A subset of those generators supporting that section failed to take over as expected during the switch over from utility power, resulting in the impact.”

Engineers managed to restore power again at around 0800 UTC and the impacted infrastructure began to clamber back online again. When the networking and storage plumbing recovered, compute scale units were brought into service, and for the “vast majority” the Azure services were accessible again from 0915 UTC.

Yet not everyone was up and running smoothly, Microsoft admitted.

“A small amount of storage nodes needs to be recovered manually, leading to delays in recovery for some services and customers. We are working to recover these nodes and will continue to communicate to these impacted customers directly via the Service Health blade in the Azure Portal.”

Source: Microsoft admits ‘power issue’ downed Azure in West Europe • The Register

Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Azure Security

An anonymous reader quotes a report from Ars Technica: Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is “grossly irresponsible” and mired in a “culture of toxic obfuscation.” The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were “negligent cybersecurity practices” that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure’s role in the mass breach.

On Wednesday, Yoran took to LinkedIn to castigate Microsoft for failing to fix what the company said on Monday was a “critical” issue that gives hackers unauthorized access to data and apps managed by Azure AD, a Microsoft cloud offering for managing user authentication inside large organizations. Monday’s disclosure said that the firm notified Microsoft of the problem in March and that Microsoft reported 16 weeks later that it had been fixed. Tenable researchers told Microsoft that the fix was incomplete. Microsoft set the date for providing a complete fix to September 28.

“To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank,” Yoran wrote. “They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft.” He continued: “Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers’ networks and services? Of course not. They took more than 90 days to implement a partial fix — and only for new applications loaded in the service.” In response, Microsoft officials wrote: “We appreciate the collaboration with the security community to responsibly disclose product issues. We follow an extensive process involving a thorough investigation, update development for all versions of affected products, and compatibility testing among other operating systems and applications. Ultimately, developing a security update is a delicate balance between timeliness and quality, while ensuring maximized customer protection with minimized customer disruption.” Microsoft went on to say that the initial fix in June “mitigated the issue for the majority of customers” and “no customer action is required.”

In a separate email, Yoran responded: “It now appears that it’s either fixed, or we are blocked from testing. We don’t know the fix, or mitigation, so hard to say if it’s truly fixed, or Microsoft put a control in place like a firewall rule or ACL to block us. When we find vulns in other products, vendors usually inform us of the fix so we can validate it effectively. With Microsoft Azure that doesn’t happen, so it’s a black box, which is also part of the problem. The ‘just trust us’ lacks credibility when you have the current track record.”

Source: Microsoft Comes Under Blistering Criticism For ‘Grossly Irresponsible’ Security – Slashdot

A great example of why a) closed-source software is a really bad idea, b) responsible disclosure is a good idea, and c) cloud is often a bad idea.

Microsoft confirms June Outlook and OneDrive outages were caused by DDoS attacks

Earlier this month, a group known as Anonymous Sudan took credit for a service outage that disrupted access to Outlook, OneDrive and a handful of other Microsoft online services. After initially sharing little information about the incident, the company confirmed late Friday it had been the target of a series of distributed denial-of-service attacks. In a blog post spotted by the Associated Press (via The Verge), Microsoft said the attacks “temporarily impacted” the availability of some services, adding they were primarily designed to generate “publicity” for a threat actor the company has dubbed Storm-1359. Under Microsoft’s threat actor naming convention, Storm is a temporary designator the company employs for groups whose affiliation it hasn’t definitively established yet.

“We have seen no evidence that customer data has been accessed or compromised,” the company said.

[…]

Source: Microsoft confirms June Outlook and OneDrive outages were caused by DDoS attacks | Engadget

Microsoft 365 and Teams hit in global partial outage – again

[…]

The problem kicked off this morning with Redmond saying it was looking into errors within its caching infrastructure. In an advisory, the Windows goliath wrote “some users may be intermittently unable to view or access web apps in Microsoft 365.”

A range of Microsoft 365 online services are affected, such as Excel, the company wrote, adding “the search bar may not appear in any Office Online service.” Others impacted include Teams admin centers, SharePoint Online (users may not be able to view the settings gear, search bar, and waffle), and Planner.

According to DownDetector, complaints of the outage began to spike before 0900 ET (1300 UTC). There’s no sign of any resumption in services for the time being.

The software giant initially indicated the problem was linked to an “unusually high number of timeout exceptions within our caching and our Azure Active Directory (AAD) infrastructure.” It soon updated that its engineers had narrowed down a cause.

“We determined that a section of caching infrastructure is performing below acceptable performance thresholds, causing calls to gather user licensing information to bypass the cache and go directly to Azure Active Directory infrastructure, resulting in high resource utilization, resulting in throttling and impact,” Redmond wrote in an advisory.
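The failure mode Microsoft describes — a degraded cache letting requests fall through to the backing directory service until it saturates and throttles — can be sketched in a few lines. This is a toy model, not Microsoft's implementation; the capacity figure and function names are illustrative assumptions.

```python
# Toy model of the failure mode described above: when the cache layer
# underperforms, lookups bypass it and hit the backend directly, which
# saturates and starts throttling the excess.

BACKEND_CAPACITY = 100  # requests the backend can absorb per tick (assumed figure)

def handle_requests(n_requests: int, cache_hit_rate: float) -> dict:
    """Return how many requests were served from cache, served by the
    backend, or throttled, for one tick of traffic."""
    cache_hits = int(n_requests * cache_hit_rate)
    backend_calls = n_requests - cache_hits          # cache misses bypass to backend
    served = min(backend_calls, BACKEND_CAPACITY)    # backend absorbs what it can
    throttled = backend_calls - served               # the rest is throttled
    return {"cache": cache_hits, "backend": served, "throttled": throttled}

# Healthy cache: almost all traffic is absorbed by the cache layer.
healthy = handle_requests(1000, cache_hit_rate=0.95)   # 50 misses, no throttling

# Degraded cache: most calls go straight to the backend.
degraded = handle_requests(1000, cache_hit_rate=0.10)  # 900 misses, heavy throttling
```

The point of the sketch is that the backend was sized for cache-miss traffic, not total traffic, so the cache slowing down is enough to take the whole path over capacity.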

[…]

Microsoft has battled its share of outages in recent months. A code change caused a four-hour outage of Azure Resource Manager in Europe in March and a month earlier Outlook was knocked out for a while.

In January, Microsoft had to roll back a network change in its WAN after it caused problems for a range of cloud services, including Exchange Online, Teams, Outlook, and OneDrive for Business.

[…]

Source: Microsoft 365 and Teams hit in global partial outage • The Register

Outlook attachments count toward OneDrive capacity so MS may just turn off your email

Some users of Microsoft’s free Outlook hosted service are finding they can no longer send or receive emails because of how the Windows giant now calculates the storage of attachments.

Microsoft account holders are allowed to hold up to 15GB in their cloud-hosted email, which until recently included text and attachments, and 5GB in their OneDrive storage. That policy changed February 1. Since then, attachments now count as part of the 5GB OneDrive allowance – and if that amount is exceeded, it throws a wrench into the email service.

The change doesn’t alter the storage amount available in Outlook.com, but it can eat into the OneDrive allowance.

“This update may reduce how much cloud storage you have available to use with your OneDrive,” Microsoft wrote in a support note posted before the change. “If you reach your cloud storage quota, your ability to send and receive emails in Outlook.com will be disrupted.”

Redmond added that the plan was to gradually roll out the cloud storage changes and new quota bar starting February 1 across users’ app and Windows settings and Microsoft accounts. Two months later, that gradual rollout is beginning to hit more and more users.

One reader told The Register that his Outlook recently stopped working and indicated that he had surpassed the 5GB storage limit, reaching 6.1GB. He was unaware of the policy change, so he was confused when he saw that in his email account he had used only 6.8GB of the 15GB allowed.

It was the change in how attachments are added that tripped him up. Microsoft told him about the new policy.

“So instantly, I have lost 10GB of email capacity and because my attachments were greater than 5GB that instantly disabled my email and triggered bounce-backs (even sending and receiving with no attachments),” the reader told us.

“No one deletes attachments every time an email is received. This is like blackmail. MS is forcing us to buy a subscription by the back door or to have to delete emails with attachments on a regular basis ad infinitum.”
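The arithmetic behind the reader's complaint can be sketched directly from the figures in the article. The function is illustrative, not Microsoft's actual logic; the 15GB and 5GB limits are the stated free-tier caps.

```python
# Quota arithmetic behind the reader's complaint. Under the old policy
# attachments counted against the 15GB mailbox; since February 1 they
# count against the 5GB OneDrive allowance instead.

MAILBOX_QUOTA_GB = 15.0
ONEDRIVE_QUOTA_GB = 5.0

def email_disabled(mailbox_used_gb: float, attachments_gb: float) -> bool:
    """Exceeding the OneDrive quota blocks sending AND receiving,
    regardless of how much mailbox headroom remains."""
    return attachments_gb > ONEDRIVE_QUOTA_GB or mailbox_used_gb > MAILBOX_QUOTA_GB

# The reader: 6.8GB of 15GB mailbox used, but 6.1GB of that is attachments.
reader_blocked = email_disabled(mailbox_used_gb=6.8, attachments_gb=6.1)   # True

# The same mailbox usage with attachments under the OneDrive cap is fine.
light_attachments = email_disabled(mailbox_used_gb=6.8, attachments_gb=4.0)  # False
```

Which is the reader's point: his account was well under the mailbox limit he knew about, and the service was disabled by a quota he didn't know applied.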

He isn’t the only one perplexed by the issue.

[…]

One who apparently was unaware that it was the attachments shifting over to OneDrive causing the email problems deleted a lot of emails, only to find it didn’t change the “storage used” amount.

[…]

https://www.theregister.com/2023/04/06/microsoft_outlook_onedrive_storage/

Twitter, Facebook, Instagram, YouTube Endure Outages

Did someone actually break the internet? It sorta seems like it. Users of Twitter, Facebook, Instagram, and YouTube, some of the web’s biggest platforms, reported experiencing major issues on Wednesday, with many losing access to basic features and functions.

Reports first poured in concerning Twitter, where users reported being met with a message telling them they’d reached their “Tweet limit” for the day. Twitter actually does have a tweet limit (it’s 2,400 tweets per day), which the platform says it uses to alleviate strain on its backend. However, most people don’t tweet that much, and many of the people who reported receiving the message said they hadn’t even tweeted yet that day.
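A per-user daily cap like the one Twitter describes is easy to sketch; the class and names below are illustrative, not Twitter's actual implementation. The outage behaviour — users who hadn't tweeted being told they'd hit the limit — corresponds to a counter like this returning the wrong answer, not to genuine over-posting.

```python
# Minimal sketch of a per-user daily posting cap (2,400 tweets/day, per
# the figure Twitter publishes). Illustrative only.
from collections import defaultdict

DAILY_LIMIT = 2400

class TweetLimiter:
    def __init__(self):
        self.counts = defaultdict(int)  # (user, day) -> tweets posted so far

    def try_post(self, user: str, day: str) -> bool:
        """Allow the post unless the user has already hit today's cap."""
        if self.counts[(user, day)] >= DAILY_LIMIT:
            return False  # caller shows the "Tweet limit" message
        self.counts[(user, day)] += 1
        return True

limiter = TweetLimiter()
all_allowed = all(limiter.try_post("alice", "2022-02-02") for _ in range(DAILY_LIMIT))
blocked = limiter.try_post("alice", "2022-02-02")  # the 2,401st post is refused
```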

[…]

Weirdly enough, an almost identical affliction seemed to descend upon Facebook and Instagram Wednesday, with users reporting that they were unable to post new Insta stories or reach Facebook Messenger. Downdetector, which tracks individual complaints for web platforms, showed a spike in incident reports for both platforms around 4:30 p.m. EST, around the same time that Twitter also began having trouble.

To top it all off, some YouTube users reported being unable to reach the platform’s homepage Wednesday.

[…]

 

Source: Twitter, Facebook, Instagram, YouTube Endure Outages

Outlook, Teams, Calendar down for >5 hours

[…] According to outage tracker DownDetector, reports began coming in of users facing a 500 error and being unable to send, receive or search email through Outlook.com from about 4am UTC, peaking at 8 and 9am as Europeans reached their desks.

Microsoft confirmed the outage on its service health website, saying: “We’re applying targeted mitigations to a subset of affected infrastructure and validating that it has mitigated impact. We’re also making traffic optimization efforts to alleviate user impact and expedite recovery.”

It added that extra “Outlook.com functionality such as Calendar APIs consumed by other services such as Microsoft Teams are also affected.”

At the time of writing, the blackout appears to be ongoing. As for what caused it, the Microsoft 365 Status Twitter account said: “We’ve confirmed that a recent change is contributing to the cause of impact. We’re working on potential solutions to restore availability of the service.”

In plain English, Microsoft tweaked something and the house of cards came tumbling down, so they’ll probably have to revert the change. It offered the reference number EX512238 to track in the admin center and otherwise directed users to watch the service health page for any updates.

[…]

Source: Take the morning off because Outlook has already • The Register

This is why cloud solutions aren’t always the best way to go

High-powered lasers can be used to steer lightning strikes

[…]

European researchers have successfully tested a system that uses terawatt-level laser pulses to steer lightning toward a 26-foot rod. It’s not limited by its physical height, and can cover much wider areas — in this case, 590 feet — while penetrating clouds and fog.

The design ionizes nitrogen and oxygen molecules, releasing electrons and creating a plasma that conducts electricity. As the laser fires at a very quick 1,000 pulses per second, it’s considerably more likely to intercept lightning as it forms. In the test, conducted between June and September 2021, lightning followed the beam for nearly 197 feet before hitting the rod.

[…]

The University of Glasgow’s Matteo Clerici, who didn’t work on the project, noted to The Journal that the laser in the experiment costs about $2.17 billion. The researchers also plan to significantly extend the range, to the point where a 33-foot rod would have an effective coverage of 1,640 feet.

[…]

Source: High-powered lasers can be used to steer lightning strikes | Engadget

Microsoft mistake took down Exchange Online and Teams on 2/12/22

Microsoft’s flagship cloudy productivity services are down across the Asia-Pacific region.

“Our initial investigation indicates that there our service infrastructure is performing at a sub-optimal level, resulting in impact to general service functionality” states an advisory time-stamped 12:41PM on December 2.

The incident means customers of Exchange Online may not be able to access the service, send email and/or files, or use what Microsoft described as “General functionality”.

The impact on Teams means:

  • Users may experience issues scheduling/editing meetings and/or live meetings;
  • People Picker/Search function may not work as expected;
  • Users may be unable to search Microsoft Teams;
  • Users may be unable to load the Assignments tab in Microsoft Teams.

Messaging, chat, channels, and other core Teams services appear to be available.

Microsoft appears not to know what’s wrong.

[…]

Updated at 22:00 UTC, December 2nd: The incident has ended! An update to Microsoft’s incident report time-stamped 2314 on December 2 offers a description of the preliminary root cause:

Processing components were not performing within optimal performance thresholds because of a legacy process that required tokens to be processed on specific components. In isolation this process wasn’t problematic, but combined with the large number of requests, this resulted in resource saturation, causing impact across multiple Microsoft 365 apps
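The root cause Microsoft describes — a legacy rule pinning token processing to a few specific components, harmless in isolation but fatal under load — is a classic hotspot. A toy illustration (not Microsoft's code; all figures are assumptions):

```python
# Toy illustration of the saturation described in the root cause: a legacy
# process requires tokens to be handled by a few specific components, so
# under load those components saturate while the rest of the fleet idles.

TOTAL_WORKERS = 20
LEGACY_TOKEN_WORKERS = 2   # the "specific components" the legacy process requires
WORKER_CAPACITY = 1000     # requests a worker can handle before saturating (assumed)

def max_worker_load(n_requests: int, eligible_workers: int) -> int:
    """Peak load on any single worker when requests are spread evenly
    across only the workers eligible to process them."""
    return -(-n_requests // eligible_workers)  # ceiling division

requests = 10_000
legacy_load = max_worker_load(requests, LEGACY_TOKEN_WORKERS)  # 5,000 per worker
spread_load = max_worker_load(requests, TOTAL_WORKERS)         # 500 per worker

saturated = legacy_load > WORKER_CAPACITY  # True: pinning overloads the pinned pair
```

Which matches the fix: Microsoft "transitioned away from the problematic legacy process" — i.e. stopped pinning the work — rather than adding capacity.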

Microsoft tested transitioning away from the problematic legacy process and restarting affected infrastructure.

Which worked, so the company did the same thing in its live environment.

The incident ran for nine hours and 59 minutes, from 2355 UTC on December 1st to 0954 UTC on December 2.

[…]

Source: Microsoft mistake took down Exchange Online and Teams • The Register

Scientists zap clouds with electricity to make them rain

A new experiment has shown that zapping clouds with electrical charge can alter droplet sizes in fog or, potentially, help a constipated cloud to rain.

Last year Giles Harrison, from the University of Reading, and colleagues from the University of Bath, spent many early mornings chasing fogs in the Somerset Levels, flying uncrewed aircraft into the gloop and releasing charge. Their findings, published in Geophysical Research Letters, showed that when either positive or negative charge was emitted, the fog formed more water droplets.

“Electric charge can slow evaporation, or even – and this is always amazing to me – cause drops to explode because the electric force on them exceeds the surface tension holding them together,” said Harrison.

The findings could be put to good use in dry regions of the world, such as the Middle East and north Africa, as a means of encouraging clouds to release their rain. Cloud droplets are larger than fog droplets and so more likely to collide, and Harrison and his colleagues believe that adding electrical charge to a cloud could help droplets to stick together and become more weighty.

Source: Scientists zap clouds with electricity to make them rain | Environment | The Guardian

Fitbit accounts are being replaced by Google accounts

New Fitbit users will be required to sign up with a Google account from next year, and it appears one will also be needed to access some of the new features in years to come.

Google has been slowly integrating Fitbit into the fold since buying the company back in November 2019. Indeed, the latest products are now known as “Fitbit by Google”. However, as it currently stands, device owners have been able to maintain separate Google and Fitbit accounts.

Google has now revealed it is bringing Google accounts to Fitbit in 2023, enabling a single login for both services. From that point on, all new sign-ups will be through Google, and Fitbit accounts will only be supported until 2025.

After that, a Google account will be the only way to go. To aid the transition, once the introduction of Google accounts begins, it’ll be possible to move existing devices over while maintaining all of the recorded data.

[…]

“We’ll be transparent with our customers about the timeline for ending Fitbit accounts through notices within the Fitbit app, by email, and in help articles.”

Whether that will be enough to assuage the concerns of the Fitbit user base – who didn’t have a say on whether Google bought their personal fitness data – remains to be seen.

Source: Fitbit accounts are being replaced by Google accounts | Trusted Reviews

So much for the wonderful cloud – first of all, why should this data go to the cloud anyway? Second, you thought you were giving it to one provider, but it turns out you’re giving it to another, with no opt-out other than trashing an expensive piece of hardware.

Roombas don’t work if an iRobot server is down

That floor won’t clean itself… well, quite literally, it won’t – especially if the vacuum robot you bought to clean it won’t hop off its dock when the servers are down.

Users started reporting issues with their Roomba app around midday Friday. The status page for iRobot, the maker of Roomba, identified outages with Amazon Web Services. The company said it was working with AWS engineers to get the problem sorted out, though at the time of reporting the issue was still unresolved.

Roomba also tweeted about the issue, saying “some customers may be having issues accessing the iRobot app.”

Server outages happen, and they will of course cause issues with apps that rely on connectivity for most of a device’s more robust features. The problem is when some users cannot access necessary features at all. One user reported they could no longer stop their Roomba from doing its business, as child lock features are only accessible in the app.

In response to Gizmodo’s inquiry, iRobot apologized to the customers for the inconvenience and linked to a video and written instructions about how to manually deactivate child and pet locks.

Other users wrote to Gizmodo that although their Roombas can activate manually by hitting the “Clean” button, their devices are still effectively unusable since they cannot tell the vacuum to only do certain rooms or avoid debris in other parts of the house.

This is just another example of the difficulties that arise when electronic devices require an internet connection to access necessary functionality.

[…]

Source: Roomba Users Report App Outages

Cloudflare explains hour long outage which broke a lot of internets

The incident began at 0627 UTC (2327 Pacific Time) and it took until 0742 UTC (0042 Pacific) before the company managed to bring all its datacenters back online and verify they were working correctly. During this time a variety of sites and services relying on Cloudflare went dark while engineers frantically worked to undo the damage they had wrought short hours previously.

“The outage,” explained Cloudflare, “was caused by a change that was part of a long-running project to increase resilience in our busiest locations.”

Oh, the irony.

What had happened was a change to the company’s prefix advertisement policies, resulting in the withdrawal of a critical subset of prefixes. Cloudflare makes use of BGP (Border Gateway Protocol). Under this protocol, operators define policies that decide which prefixes (blocks of adjacent IP addresses) are advertised to, or accepted from, peer networks.

Changing a policy can result in IP addresses no longer being reachable on the internet. One would therefore hope that extreme caution would be taken before doing such a thing…
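The mechanism is simple enough to model: a router advertises only what its export policy accepts, so anything the new policy fails to match is silently withdrawn. This is a simplified sketch, not Cloudflare's tooling; the prefixes are the RFC 5737 documentation blocks.

```python
# Simplified model of how a BGP export-policy change can silently
# withdraw prefixes: whatever the new policy no longer matches stops
# being advertised, and those addresses vanish from the internet.

def advertised(prefixes: set, export_policy) -> set:
    """A router advertises only the prefixes its export policy accepts."""
    return {p for p in prefixes if export_policy(p)}

ALL_PREFIXES = {"198.51.100.0/24", "203.0.113.0/24", "192.0.2.0/24"}

def old_policy(prefix):
    return True  # advertise everything

def new_policy(prefix):
    return not prefix.startswith("203.")  # buggy filter drops a critical prefix

withdrawn = advertised(ALL_PREFIXES, old_policy) - advertised(ALL_PREFIXES, new_policy)
# withdrawn now contains 203.0.113.0/24 -> that block is unreachable
```

Nothing in the change itself errors out – the damage only shows up as traffic to the withdrawn block failing, which is why staged rollouts and diffing the before/after advertisement set matter.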

Cloudflare’s mistakes actually began at 0356 UTC (2056 Pacific), when the change was made at the first location. There was no problem – the location used an older architecture rather than Cloudflare’s new “more flexible and resilient” version, known internally as MCP (Multi-Colo Pop). MCP differed from what had gone before by adding a layer of routing to create a mesh of connections. The theory went that bits and pieces of the internal network could be disabled for maintenance. Cloudflare has already rolled out MCP to 19 of its datacenters.

Moving forward to 0617 UTC (2317 Pacific), the change was deployed to one of the company’s busiest locations, but not an MCP-enabled one. Things still seemed OK… However, by 0627 UTC (2327 Pacific), the change hit the MCP-enabled locations, rattled through the mesh layer and took those locations offline.

Five minutes later the company declared a major incident. Within half an hour the root cause had been found and engineers began to revert the change. Slightly worryingly, it took until 0742 UTC (0042 Pacific) before everything was complete. “This was delayed as network engineers walked over each other’s changes, reverting the previous reverts, causing the problem to re-appear sporadically.”

One can imagine the panic at Cloudflare towers, although we cannot imagine a controlled process that resulted in a scenario where “network engineers walked over each other’s changes.”

We’ve asked the company to clarify how this happened, and what testing was done before the configuration change was made, and will update should we receive a response.

Mark Boost, CEO of cloud-native outfit Civo (formerly of LCN.com), was scathing regarding the outage: “This morning was a wake-up call for the price we pay for over-reliance on big cloud providers. It is completely unsustainable for an outage with one provider being able to bring vast swathes of the internet offline.

“Users today rely on constant connectivity to access the online services that are part of the fabric of all our lives, making outages hugely damaging…

“We should remember that scale is no guarantee of uptime. Large cloud providers have to manage a vast degree of complexity and moving parts, significantly increasing the risk of an outage.”

Source: Cloudflare explains today’s mega-outage • The Register