Firefox to block Backspace key from working as “Back” button

Mozilla developers plan to remove support for using the Backspace key as a Back button inside Firefox. The change is currently active in the Firefox Nightly version and is expected to go live in Firefox 86, scheduled to be released next month, in late February 2021.

The removal of the Backspace key as a navigational element didn't come out of the blue. It was first proposed back in July 2014, in a bug report opened on Mozilla's bug tracker. At the time, Mozilla engineers argued that many users who press the Backspace key don't always mean to navigate to the previous page (the equivalent of pressing the Back button).

"Pressing backspace does different things depending on where the cursor is. If it's in a text input field, it deletes the character to the left. If it's not in a text input field, it's the same as hitting the back button," said Blair McBride, a senior software engineer for Mozilla at the time.

"Whether to keep this behaviour has been argued For A Very Long Time," McBride said. "It's confusing for many people, but we've assumed it would break muscle memory for many people."

Back in 2014, McBride asked other Mozilla engineers to gather data and see exactly how many people press this key before taking a decision. Subsequent data showed that the Backspace key is, by far, the most pressed keyboard shortcut inside the Firefox user interface, with 40 million monthly active users pressing the key and triggering a "Back" navigation. To put it in perspective, this was well above the 16 million Firefox users pressing the CTRL+F shortcut to search content inside a page and the 15 million Firefox users who pressed the page reload shortcuts (F5 and CTRL+R).

Source: Firefox to block Backspace key from working as “Back” button | ZDNet

Buggy chkdsk in Windows update that caused boot failures and damaged file systems has been fixed

A Windows 10 update rolled out by Microsoft contained a buggy version of chkdsk that damaged the file system on some PCs and made Windows fail to boot.

The updates that included the fault are KB4586853 and KB4592438. Microsoft’s notes on these updates now incorporate a warning: “A small number of devices that have installed this update have reported that when running chkdsk /f, their file system might get damaged and the device might not boot.”

The notes further reveal: “This issue is resolved and should now be prevented automatically on non-managed devices,” meaning PCs that are not enterprise-managed. On managed PCs Microsoft recommended a group policy setting that rolls back the faulty update. If there are devices that have already hit the issue, Microsoft has listed troubleshooting steps which it says should fix the problem.

The chkdsk utility itself is not listed in the files that are patched by these updates, suggesting that the problem is with other system files called by chkdsk.

[…]

Source: Buggy chkdsk in Windows update that caused boot failures and damaged file systems has been fixed • The Register

MATRIC – control your PC from phone using button templates

KEYBOARD EMULATION

Low level keyboard emulation, works in most apps and games

KEYBOARD MACROS

Record multiple keyboard actions into precisely timed macros

STREAM DECK

MATRIC supports OBS Studio from simple scene switching to full blown studio mode mix console

DECK EDITOR

Create your own decks by using intuitive drag&drop editor

PHOTO CAPTURE

Snap a photo on the smartphone and MATRIC can send it to PC clipboard

BARCODE SCANNER

Scan barcode or QR code using the smartphone and MATRIC will type it to your PC

TOUCHPAD

Use your smartphone screen as a multi-touch touchpad for your PC

VIRTUAL JOYSTICK

Use MATRIC as virtual joystick with full support for buttons and axes

AUDIO PLAYER

Play an audio file on PC
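The macro feature above amounts to recording key events and replaying them with their original inter-key delays. A minimal sketch of how such a timed macro might be modelled (a hypothetical structure for illustration, not MATRIC's actual implementation):

```python
import time

class MacroRecorder:
    """Sketch of a precisely timed keyboard macro.

    Events are stored as (delay_since_previous_key, key) pairs so that
    playback reproduces the original rhythm of the recording.
    """

    def __init__(self):
        self.events = []   # list of (delay, key)
        self._last = None  # timestamp of the previous event

    def record(self, key, now=None):
        # `now` defaults to a monotonic clock; it is injectable for testing.
        now = time.monotonic() if now is None else now
        delay = 0.0 if self._last is None else now - self._last
        self._last = now
        self.events.append((delay, key))

    def play(self, send, sleep=time.sleep):
        # `send` is whatever actually emits the keystroke, e.g. an
        # OS-level input injector; here it is just a callback.
        for delay, key in self.events:
            sleep(delay)
            send(key)
```

Playback takes the emitting function as a parameter, so the same recording can drive a real input injector or, as below, simply collect keys for inspection.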

Source: MATRIC

X.Org is now pretty much an ex-org: Maintainer declares the open-source windowing system largely abandoned

Red Hat’s Adam Jackson, project owner for the X.Org graphical and windowing system still widely used on Linux, said the project has been abandoned “to the extent that that means using it to actually control the display, and not just keep X apps running.”

Jackson’s post confirms suspicions raised a week ago by Intel engineer Daniel Vetter, who said in a discussion about enabling a new feature: “The main worry I have is that xserver is abandonware without even regular releases from the main branch. That’s why we had to blacklist X. Without someone caring I think there’s just largely downsides to enabling features.”

This was picked up by Linux watcher Michael Larabel, who noted that “the last major release of the X.Org server was in May 2018… don’t expect the long-awaited X.Org Server 1.21 to actually be released anytime soon.”

The project is not technically abandoned – the last code merge was mere hours ago at the time of writing – and Jackson observed in a comment on his post that “with my red hat on, I’m already on the hook for supporting the xfree86 code until RHEL8 goes EOL anyway, so I’m probably going to be writing and reviewing bugfixes there no matter what I do.”

[…]

Jackson said the future of X server is as “an application compatibility layer”, though he also said that having been maintaining X “for nearly the whole of [his] professional career” he is “completely burnt out on that on its own merits, let alone doing that and also being release manager and reviewer of last resort.”

He also mentioned related projects that he says are worthwhile such as Xwayland (X clients under Wayland), XWin (X Server on Cygwin, a Unix-like environment on Windows), and Xvnc (X applications via a remote VNC viewer).

When a response to Jackson’s post complained about issues with Wayland – such as lack of stability, poor compatibility with Nvidia hardware, lack of extension APIs – the maintainer said that keeping X server going was part of the problem. “I’m of the opinion that keeping xfree86 alive as a viable alternative since Wayland started getting real traction in 2010ish is part of the reason those are still issues, time and effort that could have gone into Wayland has been diverted into xfree86,” he said.

The hope then is that publicly announcing the end of the reliable but ancient X.Org server will stimulate greater investment in Wayland, using Xwayland for the huge legacy of existing X11 applications.

 

Source: X.Org is now pretty much an ex-org: Maintainer declares the open-source windowing system largely abandoned • The Register

Twitter: All tweets, notifications vanish

Updated Twitter is right now suffering a baffling outage in that the website is still up, you can still log in, the apps will run.

But there are, seemingly, no tweets nor notifications. At all. All gone. All that anger and snark, and information and misinformation, wiped off the face of the planet, just like that.

Visiting your timeline or profile shows simply the message, “Something went wrong.” It’s otherwise empty. And earlier, people’s notifications pages went blank, suggesting really, truly no one on Earth cares about your twitterings. “Nothing to see here,” it states.

Reassuringly, you’re not alone in your blank internet universe: Downdetector reports a surge of complaints that Twitter isn’t working properly, with the outage kicking off around 1430 PT (2130 UTC).

As your vulture types this, it appears some people can see their tweets, but cannot tweet. And some of us can’t see anything. The Twitter status page reports the team is “investigating irregularity” with the platform’s APIs.

Screenshot of a failed tweet

What one of our vultures saw as they tried to tweet or see other people’s tweets

This IT breakdown comes within hours of American financial regulators demanding Twitter be subject to harsher rules following the July hacks of prominent users’ accounts – and soon after CEO Jack Dorsey furiously backpedaled after his website censored a problematic article from a US newspaper.

A Supreme Court Justice this week also mused that the likes of Twitter have gained sweeping immunity from the legal consequences of their users’ content and actions, and that imbalance ought to be righted. ®

Updated to add at 2220 UTC

People’s tweets are showing up again in timelines and profiles, though no one can send any new tweets nor view those that were able to be sent, if any, during the past hour or so. Notifications are also still AWOL.

Updated to add at 2300 UTC

And Twitter now appears to be back to normal, or rather, Twitter’s idea of normal.

Source: If you can see this headline, you’re certainly not reading it on Twitter: All tweets, notifications vanish • The Register

Ring glitch results in global ding dong ditch: Doorbells keep going off with no one pushing them.

Amazon-owned smart home appliance maker Ring has won the world record for biggest game of “ding dong ditch” after a software glitch broadcast erroneous doorbell chimes to countless users yesterday.

The global game of Ring and run (as it’s known in the US) coincided with software issues that prevented owners from viewing archived footage or receiving push notifications. Customers in markets including the UK and US were believed to be affected.

The Timely Information Transmission Suffered Unpredictable Ping-time (TITSUP) led some to believe that Ring’s systems were being targeted deliberately by a malicious third party. “Are the Ring doorbells being hacked? Mine are going off non-stop,” tweeted one confused punter.

“You’re [sic] network has been down for hours. Now I am getting phantom ‘rings’ and it’s driving my Great Dane crazy,” complained another.

Your humble hack also experienced the glitch when a random chime from his overpriced doorbell disturbed a post-work nap. More accurately, it startled his dogs, who then leapt onto his chest.

Speaking to El Reg, Ring’s Europe head of communications, Claudia Fellerman, confirmed the problem and said it has since been fixed.

“Our processing infrastructure was running behind which caused some delays in receiving in-app notifications and Chime motion and ding notifications. However, this has been resolved,” she said.

According to Ring’s status page, no user data was lost, and a fix was applied by late evening. The company warned that users may encounter delayed chimes and notifications while the back-end catches up.

Ring also urged punters to check the battery levels on their devices as the outage may have caused a higher-than-usual power drain.

Source: Ring glitch results in global ding dong ditch: Doorbell bling flings out random pings but they’re not the real thing • The Register

Yay cloud!

Microsoft Exchange Online goes down – again

Microsoft's Exchange Online service fell over in the early hours of this morning.

The company's status orifice initially figured that the problem mainly affected users in India as its engineers noted the wobbling at around 0700 BST. Just under an hour later Microsoft had to admit it was another global outage.

It is the latest in what appears to be a battle of who can annoy their users more. Azure suffered a major outage earlier this week. Rival Apple then hit back with its own wobble before Microsoft continued the TITSUP* tit-for-tat this morning.

The mystery issue afflicted apps using Exchange Online protocols, including Outlook on the desktop, mobile devices, and "those dependant on REST functionality," Microsoft said. The company was taking a long hard look at what it might have changed in recent days that might have broken something.

[…]

Microsoft eventually pinned the blame on a “recent configuration update”, rolled it back and, at time of writing, was “monitoring the service” for signs of life.

[…]

Users reported problems sending and receiving mails, accessing folders and attachments, or even being able to log into their services. Some noted difficulty synchronising between Azure Active Directory and Exchange Online while there were also isolated reports of SharePoint and Teams struck by the curse of bork.

[…]

Source: Where are we now? Microsoft 363? 362? We’ve lost count because Exchange Online isn’t playing nicely this morning • The Register

Yay cloud!

Microsoft Outlook, Office 365, Teams, and other services suffer ~6 hour outage

Some Microsoft services, including Outlook, Office 365, and Microsoft Teams, experienced a multi-hour outage on Monday, but the issues have been resolved, according to the company.

“We’ve confirmed that the residual issue has been addressed and the incident has been resolved,” Microsoft tweeted at 12AM ET on Tuesday. “Any users still experiencing impact should be mitigated shortly.”

The company first acknowledged issues at 5:44PM ET via the Microsoft 365 Status Twitter account, and said it had rolled back a change thought to be the cause of the issue at 6:36PM ET. But just 13 minutes later, the company tweeted again to say that it was “not observing an increase in successful connections after rolling back a recent change.” Microsoft tweeted that services were mostly back at 10:20PM ET.

Microsoft’s Azure Active Directory service was also experiencing issues on Monday, but the company said those were “now mitigated” as of 11:21PM ET Monday night. Microsoft said the problems were caused by a configuration change to a backend storage layer, which the company rolled back.

Update, September 29th, 11:20AM ET: Updated to confirm Microsoft has resolved the issues. The headline has also been updated to reflect this fact.

Source: Microsoft Outlook, Office 365, Teams, and other services are back following outage – The Verge

Yay cloud!

[…]

The core service affected was Azure Active Directory, which controls login to everything from Outlook email to Teams to the Azure portal, used for managing other cloud services. The five-hour impact was also felt in productivity-stopping annoyances like some installations of Microsoft Office and Visual Studio, even on the desktop, declaring that they could not check their licensing and therefore would not run.

There are claims that the US emergency 911 service was affected, which is not implausible given that the RapidDeploy Nimbus Dispatch system describes itself as “a Microsoft Azure–based Computer Aided Dispatch platform”. If the problem is authentication, even resilient services with failover to other Azure regions may become inaccessible and therefore useless.
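When the failure is latency in authentication rather than a hard outage, about the only client-side mitigation is to retry sensibly. A generic sketch of exponential backoff with jitter around a token request; the `fetch_token` callable is a hypothetical stand-in for whatever identity client an application actually uses, and nothing here is Azure-specific:

```python
import random
import time

def get_token_with_backoff(fetch_token, max_attempts=5, base_delay=0.5):
    """Retry a flaky token request with exponential backoff and full jitter.

    fetch_token: callable returning a token, raising on failure
                 (a stand-in for a real identity client's request).
    """
    for attempt in range(max_attempts):
        try:
            return fetch_token()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            # Full jitter: sleep a random fraction of a doubling window,
            # so many clients retrying at once do not stampede the service.
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

Backoff only helps with transient slowness, which is precisely the point about a single point of failure: if the directory itself is down, no amount of client-side retrying restores access.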

The company has yet to provide full details, but a status report today said that “a recent configuration change impacted a backend storage layer, which caused latency to authentication requests”.

[…]

Microsoft seems to have more than its fair share of problems. Gartner noted recently that it “continues to have concerns related to the overall architecture and implementation of Azure, despite resilience-focused efforts and improved service availability metrics during the past year”. The analyst’s reservations were based in part on the low ratio of availability zones to regions, and that “a limited set of services support the availability zone model”.

Gartner’s concerns are valid, but this was not the cause of the recent disruption. Bill Witten, identity architect at Okta, was to the point, commenting: “So, does everyone get why the mono-directory is not a good idea?”

Microsoft has built so much on Azure Active Directory that it is a single point of failure. The company either needs to make it so resilient that failure is near-impossible (which is likely to be its intention), or consider gradually reducing the dependence of so many services.

Source:

With so many cloud services dependent on it, Azure Active Directory has become a single point of failure for Microsoft

A New Version of Microsoft Office Without a Subscription Launches in 2021

Subscriptions may be ideal for certain services such as Netflix, with its constant flow of new content, but for a suite of tools like Microsoft Office? Paying every month doesn’t suit everyone, especially if all they want is access to the word processor and spreadsheet. Thankfully, a new perpetual license edition of the suite arrives next year.

Microsoft clearly pushes an Office subscription as the best way to access its always up-to-date suite of tools and services, while those who just want to buy a copy outright and use it for years to come are still using Office 2019, which was released back in 2018. It was unclear whether 2019 would ever be replaced, but as spotted by Windows Central, Microsoft quietly confirmed in a news post by the Exchange team that "Microsoft Office will also see a new perpetual release for both Windows and Mac, in the second half of 2021."

There are no details yet regarding the name, price, or availability of this new version.

Source: A New Version of Microsoft Office Without a Subscription Launches in 2021 | PCMag

It’s Not Just You, a Ton of Google Services Just Went Down between 2100 – 2230

If you’ve been experiencing issues trying to access Google or YouTube, you’re not alone. Around 9 p.m. ET on Thursday evening, tons of users worldwide reported problems with Google and the many services under the tech giant’s umbrella, including Google Drive, Gmail, Stadia, the Play Store, and even Nest.

Some, such as Gmail, were taking significantly more time to load while other services like Google’s Play Store and Calendar seemed to be on an endless boot-up loop and wouldn’t load at all. DownDetector currently shows outages for just about all of Google’s services in areas all over the world. According to the site, the bulk of reports are coming from Australia, the U.S., and east Asia, with users primarily having issues logging in.

We’ve reached out to Google for more info. Honestly, a worldwide Google outage is absolutely on-brand for the year we’re having so far, so I’m hardly surprised.

Update: 9/24/2020; 11:17 p.m. ET: Luckily, the problem appears to have been short-lived. An update from Google's Cloud status dashboard showed that the issue had been resolved "for most traffic" across Google's services shortly after 10:30 p.m. ET.

Source: It’s Not Just You, a Ton of Google Services Just Went Down (Update: Phew, They’re Back Up)

Yay, cloud

Microservices guru says think serverless, not Kubernetes: You don’t want to manage ‘a towering edifice of stuff’

Sam Newman, a consultant and author specialising in microservices, told a virtual crowd at dev conference GOTOpia Europe that serverless, not Kubernetes, is the best abstraction for deploying software.

Newman is an advocate for cloud. “We are so much in love with the idea of owning our own stuff,” he told attendees. “We end up in thrall to these infrastructure systems we build for ourselves.”

He is therefore a sceptic when it comes to private cloud. “AWS showed us the power of virtualization and the benefits of automation via APIs,” he said. Then came OpenStack, which sought to bring those same qualities on-premises. It is one of the biggest open-source projects in the world, he said, but a “false hope… you still fundamentally have to deal with what you are running.”

At the time of writing, the CNCF "landscape" illustration of cloud native listed 1,459 cards with a total of 2,407,911 GitHub stars, a market cap of $19.73 trillion and funding of $65.62 billion, showing how complex Kubernetes and its ecosystem have become.

What is the next big thing? Kubernetes? “Kubernetes is great if you want to manage container workloads,” said Newman. “It’s not the best thing for managing container workloads. It’s great for having a fantastic ecosystem around it.”

As he continued, it turned out he has reservations. "It's like a giant thing with lots of little things inside it, all these pods, like a termite mound. It's a big giant edifice, your Kubernetes cluster, full of other moving parts… a lot of organisations implementing their own Kubernetes clusters have found their ability to deliver software hollowed out by the fact that everybody now has to go on Kubernetes training courses."

Newman illustrates his point with a reference to the CNCF (Cloud Native Computing Foundation) diagram of the “cloud native landscape”, which looks full of complexity.


Kubernetes on private cloud is “a towering edifice of stuff,” he said. Hardware, operating system, virtualization layer, operating system inside VMs, container management, and on top of that “you finally get to your application… you spend your time and money looking after those things. Should you be doing any of that?”

Going to public cloud and using either managed VMs, or a managed Kubernetes service like EKS (Amazon), AKS (Azure) or GKE (Google), or other ways of running containers, takes away much of that burden; but Newman argued that it is serverless, rather than Kubernetes, that “changes how we think about software… you give your code to the platform and it works out how to execute it on your behalf,” he said.

What is serverless?

“The key characteristics of a serverless offering is no server management. I’m not worried about the operating systems or how much memory these things have got; I am abstracted away from all of that. They should autoscale based on use… implicitly I’m presuming that high availability is delivered by the serverless product. If we are using public cloud, we’d also be expecting a pay as you go model.”
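Those characteristics amount to a narrow contract between code and platform. A minimal sketch of that contract, using an AWS Lambda-style Python handler signature (the event shape here is invented for illustration):

```python
def handler(event, context=None):
    """Entry point the platform invokes once per request.

    There is no server, process, or OS for the developer to manage: the
    platform provisions capacity, scales with load, and bills per
    invocation. Everything the code needs arrives in `event`.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The platform owns everything outside the function body, which is exactly the abstraction Newman is arguing for.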

“Many people erroneously conflate serverless and functions,” said Newman, since the term is associated with services like AWS Lambda and Azure Functions. Serverless “has been around longer than we think,” he added, referencing things like AWS Simple Storage Service (S3) in 2006, as well as things like messaging solutions and database managers such as AWS DynamoDB and Azure Cosmos DB.

But he conceded that serverless has restrictions. With functions as a service (FaaS), there are limits to what programming languages developers can use and what version, especially in Google’s Cloud Functions, which has “very few languages supported”.

Functions are inherently stateless, which impacts the programming model – though Microsoft has been working on durable functions. Another issue is that troubleshooting can be harder because the developer is further removed from the low level of what happens at runtime.

“FaaS is the best abstraction we have come up with for how we develop software, how we deploy software, since we had Heroku,” said Newman. “Kubernetes is not developer-friendly.”

FaaS, said Newman, is “going to be the future for most of us. The question is whether or not it’s the present. Some of the current implementations do suck. The usability of stuff like Lambda is way worse than it should be.”

Despite the head start AWS had with Lambda, Newman said that Microsoft is catching up with serverless on Azure. He is more wary of Google, arguing that it is too dependent on Knative and Istio for delivering serverless, neither of which in his view are yet mature. He also thinks that Google’s decision not to develop Knative inside the CNCF is a mistake and will hold it back from adapting to the needs of developers.

How does serverless link with Newman’s speciality, microservices? Newman suggested getting started with a 1-1 mapping, taking existing microservices and running them as functions. “People go too far too fast,” he said. “They think, it makes it really easy for me to run functions, let’s have a thousand of them. That way lies trouble.”

Further breaking down a microservice into separate functions might make sense, he said, but you can “hide that detail from the outside world… you might change your mind. You might decide to merge those functions back together again, or strip them further apart.”

The microservice should be a logical unit, he said, and FaaS an implementation detail.
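That advice can be sketched concretely: keep the microservice's public contract as a single unit, and treat any internal split into functions as invisible plumbing. A hypothetical example (the service name, routes, and event shape are all invented for illustration):

```python
def orders_service(event, context=None):
    """One logical microservice deployed as one function: the 1-1 mapping
    Newman suggests starting with.

    Whether the branches below are later split into separate functions,
    or merged back together, is an implementation detail hidden from
    callers, who only ever see the service's public routes.
    """
    routes = {
        ("GET", "/orders"): lambda e: {"statusCode": 200, "body": []},
        ("POST", "/orders"): lambda e: {"statusCode": 201, "body": e.get("payload")},
    }
    route = routes.get((event.get("method"), event.get("path")))
    if route is None:
        return {"statusCode": 404, "body": "not found"}
    return route(event)
```

Because the outside world addresses the service, not the functions, either refactoring direction leaves callers untouched.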

Despite being an advocate of public cloud, Newman recognises non-technical concerns. “More power is being concentrated in a small number of hands,” he said. “Those are socio-economic concerns that we can have conversations about.”

Among all the Kubernetes hype, has serverless received too little attention? If you believe Newman, this is the case. The twist, perhaps, is that some serverless platforms actually run on Kubernetes, explicitly so in the case of Google's platform.

Source: Microservices guru says think serverless, not Kubernetes: You don’t want to manage ‘a towering edifice of stuff’ • The Register

No, Kubernetes doesn’t make applications portable, say analysts. Good luck avoiding lock-in, too

Do not make application portability your primary driver for adopting Kubernetes, say Gartner analysts Marco Meinardi, Richard Watson and Alan Waite, because while the tool theoretically improves portability in practice it also locks you in while potentially denying you access to the best bits of the cloud.

The three advance that theory in a recent “Technical Professional Advice” document that was last week summarised in a blog post.

The Register has accessed the full document and its central idea is that adopting Kubernetes can’t be done without also adopting a vendor of your preferred Kubernetes management tools.

“By using Kubernetes, you simply swap one form of lock-in for another, specifically for one that can lower switching cost should the need arise,” the trio write. “Using Kubernetes to minimize provider lock-in is an attractive idea, but such abstraction layer simply becomes an alternative point of lock-in. Instead of being locked into the underlying infrastructure environment, you are now locked into the abstraction layer.”

“If you adopt Kubernetes only to enable application portability, then you are trying to solve one problem, by taking on three new problems you didn’t already have.”

And that matters because “Although abstraction layers may be attractive for portability, they do not surface completely identical functionality from the underlying services — they often mask or distort them. In general, the use of abstraction layers on top of public cloud services is hardly justified when organizations prioritize time to value and time to market due to their overhead and service incongruence.”

The trio also worry that shooting for portability can cut users off from the best bits of the cloud.

“Implementing portability with Kubernetes also requires avoiding any dependency that ties the application to the infrastructure provider, such as the use of cloud provider’s native services. Often, these services provide the capabilities that drove us to the cloud in the first place,” they write.

And then there’s the infrastructure used to run Kubernetes, which the three point out will have variable qualities that make easy portability less likely.

“The more specific to a provider a compute instance is, the less likely it is to be portable in any way,” the analysts wrote. “For example, using EKS on [AWS] Fargate is not CNCF-certified and arguably not even standard Kubernetes. The same is true for virtual nodes on Azure as implemented by ACIs.”

The document also points out that adopting Kubernetes will almost certainly mean acquiring third-party storage and networking tools, which means more elements that have to be reproduced to make applications portable and therefore more lock-in.

Source: No, Kubernetes doesn’t make applications portable, say analysts. Good luck avoiding lock-in, too • The Register

Academic Study Says Open Source Has Peaked: But Why?

Open source runs the world. That's true for supercomputers, where Linux powers all of the top 500 machines in the world; for smartphones, where Android has a global market share of around 75%; and for everything in between, as Wired points out:

When you stream the latest Netflix show, you fire up servers on Amazon Web Services, most of which run on Linux. When an F-16 fighter takes off, three Kubernetes clusters run to keep the jet’s software running. When you visit a website, any website, chances are it’s run on Node.js. These foundational technologies — Linux, Kubernetes, Node.js — and many others that silently permeate our lives have one thing in common: open source.

Ubiquity can engender complacency: because open source is indispensable for modern digital life, the assumption is that it will always be there, always supported, always developed. That makes new research into the longer-term trends in open source development welcome. It builds on work carried out by three earlier studies, in 2003, 2007 and 2007, but uses a much larger data set:

This study replicates the measurements of project-specific quantities suggested by the three prior studies (lines of code, lifecycle state), but also reproduce the measurements by new measurands (contributors, commits) on an enlarged and updated data set of 180 million commits contributed to more than 224,000 open source projects in the last 25 years. In this way, we evaluate existing growth models and help to mature the knowledge of open source by addressing both internal and external validity.

The new work uses data from Open Hub, which enables the researchers to collect commit information across different source code hosts like GitHub, Gitlab, BitBucket, and SourceForge. Some impressive figures emerge. For example, at the end of 2018, open source projects contained 17,586,490,655 lines of code, made up of 14,588,351,457 lines of source code and 2,998,139,198 lines of comments. In the last 25 years, 224,342 open source projects received 180,937,525 commits in total. Not bad for what began as a ragtag bunch of coders sharing stuff for fun. But there are also some more troubling results. The researchers found that most open source projects are inactive, and that most inactive projects never receive a contribution again.
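Those line counts are at least internally consistent, a quick check anyone can reproduce:

```python
# Figures reported by the study, as quoted above.
source_lines = 14_588_351_457
comment_lines = 2_998_139_198
total_lines = 17_586_490_655

# Source plus comments accounts for the stated total exactly.
assert source_lines + comment_lines == total_lines

# Comments make up roughly 17% of all lines in the data set.
comment_share = comment_lines / total_lines
```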

Looking at the longer-term trends, an initial, transient exponential growth was found until 2009 for commits and contributors, until 2011 for the number of available projects, and until 2013 for available lines of code. Thereafter, all those metrics reached a plateau, or declined. In one sense, that's hardly a surprise. In the real world, exponential growth has to stop at some point. The real question is whether open source has peaked purely because it has reached its natural limits, or whether there are other problems that could have been avoided.

For example, a widespread concern in the open source community is that companies may have deployed free code in their products with great enthusiasm, but they have worried less about giving back and supporting all the people who write it. Such an approach may work in the short term, but ultimately destroys the software commons they depend on. That’s just as foolish as over-exploiting the environmental commons with no thought for long-term sustainability. As the Wired article mentioned above points out, it’s not just bad for companies and the digital ecosystem, it’s bad for the US too. In the context of the current trade war with China, “the core values of open source — transparency, openness, and collaboration — play to America’s strengths”. The new research might be an indication that the open source community, which has selflessly given so much for decades, is showing signs of altruism fatigue. Now would be a good time for companies to start giving back by supporting open source projects to a much greater degree than they have so far.

Source: Academic Study Says Open Source Has Peaked: But Why? | Techdirt

I spoke of this in 2017

Putting the d’oh! in Adobe: ‘Years of photos’ permanently wiped from iPhones, iPads by bad Lightroom app update

Adobe is offering its condolences to customers after an update to its Lightroom photo manager permanently deleted troves of snaps on people’s iPhones, iPads, and iPod Touches.

First reported by PetaPixel, the data annihilation was triggered after punters this week fetched version 5.4 of the iOS software. Netizens complained that, following the release and installation of that build, their stored photos and paid-for presets vanished. Adobe acknowledged the issue though it didn’t have much to offer punters besides saying sorry.

“Yesterday when I use the Lightroom Mobile, it was okay,” reported customer Mohamad Alif Eqnur.

“I still have my presets and pictures saved in the apps but today, 18th August 2020, after I updated the apps on Apps Store, all of my pictures and presets gone.”

The photo-nuking bug has apparently been fixed, and updating to the latest version of the iOS app will keep you from losing your stuff, if it hasn’t been lost already. Assets saved to the Lightroom cloud are still intact as are those on non-iOS devices.

If you had copied your photos on your Mac, PC, or Android gear, the pics will still be there. Basically, if you backed up your snaps from your iThing, you’re OK. If you left it all on your iPhone or iPad… sorry, friend.

Source: Putting the d’oh! in Adobe: ‘Years of photos’ permanently wiped from iPhones, iPads by bad Lightroom app update • The Register

GitHub starts week with 4 whole hours of downtime

GitHub marked the start of the week with more than four hours of downtime, as GitHub Issues, Actions, Pages, Packages and API requests all reported “degraded performance.”

A problem on the world’s most popular code repository and developer collaboration site was first reported around 05:00 UK time (04:00 UTC) this morning and was resolved at 09:30 UK time (08:30 UTC). Basic Git operations were not affected.

GitHub, on the whole, is a relatively reliable site but the impact of downtime is considerable because of its wide use and critical importance. The site has over 44 million users and over 100 million repositories (nearly 34 million of which are public).

The last major outage before today was on 29 June, and before that on 19 June and on 22 and 23 May. In the context of such a key service, that isn’t a great recent track record. “You are a dependency to our systems and if this keeps happening, many will say goodbye,” said developer Emad Mokhtar on Twitter.

[…]

GitHub reported on what went wrong in May and June. It turns out that database issues are the most common problem. On May 5, “a shared database table’s auto-incrementing ID column exceeded the size that can be represented by the MySQL Integer type,” said GitHub’s SVP of engineering, Keith Ballinger.
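The failure mode Ballinger describes is easy to reproduce in miniature: a signed 32-bit MySQL INT column tops out at 2,147,483,647, and once the auto-increment counter reaches that ceiling, inserts start failing. A rough sketch of the arithmetic (the insert rates below are illustrative, not GitHub's actual numbers):

```python
# Signed 32-bit INT ceiling for a MySQL auto-increment column,
# versus the BIGINT ceiling you get after widening the column.
MYSQL_INT_MAX = 2**31 - 1      # 2,147,483,647
MYSQL_BIGINT_MAX = 2**63 - 1   # 9,223,372,036,854,775,807

def ids_remaining(current_max_id: int, column_max: int = MYSQL_INT_MAX) -> int:
    """Auto-increment values left before the column overflows."""
    return max(column_max - current_max_id, 0)

def days_until_overflow(current_max_id: int, inserts_per_second: float,
                        column_max: int = MYSQL_INT_MAX) -> float:
    """Rough time-to-overflow estimate for a busy shared table."""
    return ids_remaining(current_max_id, column_max) / inserts_per_second / 86_400
```

Even starting from an empty table, at 1,000 inserts per second a 32-bit column overflows in under 25 days. Widening the column to BIGINT is the usual fix, but on a very large table the ALTER itself can take hours, which is why such migrations are best done long before the ceiling is in sight.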

May 22 was another bad day for the company’s MySQL servers. A primary MySQL instance was failed over for planned maintenance, but the newly promoted instance crashed after six seconds. “We manually redirected traffic back to the original primary,” said Ballinger. Recovering the six seconds of writes to the crashed instance, though, caused delays. “A restore of replicas from the new primary was initiated which took approximately four hours with a further hour for cluster reconfiguration to re-enable full read capacity,” he added.

Source: GitHub is just like all of us: The week has just started but it needed 4 whole hours of downtime • The Register

Lenovo certifies all desktop and mobile workstations for Linux – and will even upstream driver updates

Lenovo has decided to certify all of its workstations for Linux.

“Our entire portfolio of ThinkStation and ThinkPad P Series workstations will now be certified via both Red Hat Enterprise Linux and Ubuntu LTS – a long-term, enterprise-stability variant of the popular Ubuntu Linux distribution,” said Rob Herman, GM and executive director of the company’s workstation and client AI group, in a Tuesday statement.

Lenovo is serious about this: the company says its workstations will “offer full end-to-end support – from security patches and updates to better secure and verify hardware drivers, firmware and bios optimizations.” Lenovo will also upstream device drivers into the Linux kernel.

The company’s rationale for the move is that Linux workstations are favourites of a sizable population of power users, especially developers and data scientists. Lenovo wants to relieve their employers of the chore of installing and maintaining Linux on the mildly exotic hardware such users require. But it’s also tipped a hat to Linux enthusiasts with “a pilot program with a preloaded Fedora image on our ThinkPad P53 and P1 Gen 2 systems; providing the latest pure open source platform for this community-based distribution.” Note, however, that the new arrangements are only for Lenovo workstations. ThinkPads, Yogas and other models will still almost certainly run Linux, but don’t get extra love from Lenovo.

Lenovo’s offering isn’t unique: Dell offers supported RHEL and Ubuntu on its XPS13 and Precision mobile workstations, plus the Precision tower workstations. HP Inc also supports Linux on its Z-series mobile and desktop workstations and claims it was first to do so. Lenovo seems to think it might have them outflanked by supporting all possible configurations of its P-series laptops (The Register counts nine machines in that range) and the seven P-series workstations.

Source: Lenovo certifies all desktop and mobile workstations for Linux – and will even upstream driver updates • The Register

Software Development Environments Move to the Cloud

If you’re a newly hired software engineer, setting up your development environment can be tedious. If you’re lucky, your company will have a documented, step-by-step process to follow, but even that doesn’t guarantee you’ll be up and running quickly. And when you’re later tasked with updating your environment, you’ll go through the same time-consuming process again. With different platforms, tools, versions, and dependencies to grapple with, you’ll likely encounter bumps along the way.

Austin-based startup Coder aims to ease this process by bringing development environments to the cloud. “We grew up in a time where [Microsoft] Word documents changed to Google Docs. We were curious why this wasn’t happening for software engineers,” says John A. Entwistle, who founded Coder along with Ammar Bandukwala and Kyle Carberry in 2017. “We thought that if you could move the development environment to the cloud, there would be all sorts of cool workflow benefits.”

With Coder, software engineers access a preconfigured development environment on a browser using any device, instead of launching an integrated development environment installed on their computers. This convenience allows developers to learn a new code base more quickly and start writing code right away.

[…]

Yet cloud-based platforms have their limitations, the most crucial of which is they require reliable Internet service. “We have support for intermittent connections, so if you lose connection for a few seconds, you don’t lose everything. But you do need access to the Internet,” says Entwistle. There’s also the task of setting up and configuring your team’s development environment before getting started on Coder, but once that’s done, you can share your predefined environment with the team.

To ensure security, all source code and related development activities are hosted on a company’s infrastructure—Coder doesn’t host any data. Organizations can deploy Coder on their private servers or on cloud computing platforms such as Amazon Web Services or Google Cloud Platform. This option could be advantageous for banks, defense organizations, and other companies handling sensitive data. In fact, one of Coder’s customers is the U.S. Air Force, and the startup closed a US $30 million Series B funding round last month (bringing its total funding to $43 million), with In-Q-Tel, a venture capital firm with ties to the U.S. Central Intelligence Agency, as one of its backers.

Source: Software Development Environments Move to the Cloud – IEEE Spectrum

Linux not Windows: Why Munich is shifting back from Microsoft to open source – again

In a notable U-turn for the city, newly elected politicians in Munich have decided that its administration needs to use open-source software, instead of proprietary products like Microsoft Office.

“Where it is technologically and financially possible, the city will put emphasis on open standards and free open-source licensed software,” a new coalition agreement negotiated between the recently elected Green party and the Social Democrats says.

The agreement was finalized Sunday and the parties will be in power until 2026. “We will adhere to the principle of ‘public money, public code’. That means that as long as there is no confidential or personal data involved, the source code of the city’s software will also be made public,” the agreement states.

The decision is being hailed as a victory by advocates of free software, who see this as a better option economically, politically, and in terms of administrative transparency.

However, the decision by the new coalition administration in Germany’s third largest and one of its wealthiest cities is just the latest twist in a saga that began over 15 years ago in 2003, spurred by Microsoft’s plans to end support for Windows NT 4.0.

Because the city needed to find a replacement for aging Microsoft Windows workstations, Munich eventually began the move away from proprietary software at the end of 2006.

At the time, the migration was seen as an ambitious, pioneering project for open software in Europe. It involved open-standard formats, vendor-neutral software and the creation of a unique desktop infrastructure based on Linux code named ‘LiMux’ – a combination of Linux and Munich.

By 2013, 80% of desktops in the city’s administration were meant to be running LiMux software. In reality, the council continued to run the two systems – Microsoft and LiMux – side by side for several years to deal with compatibility issues.

As the result of a change in the city’s government, a controversial decision was made in 2017 to leave LiMux and move back to Microsoft by 2020. At the time, critics of the decision blamed the mayor and deputy mayor and cast a suspicious eye on the US software giant’s decision to move its headquarters to Munich.

In interviews, a former Munich mayor, under whose administration the LiMux program began, has been candid about the lengths Microsoft went to in order to retain its contract with the city.

The migration back to Microsoft and to other proprietary software makers like Oracle and SAP, costing an estimated €86.1m ($93.1m), is still in progress today.

“We’re very happy that they’re taking on the points in the ‘Public Money, Public Code’ campaign we started two and a half years ago,” Alex Sander, EU public policy manager at the Berlin-based Free Software Foundation Europe, tells ZDNet. But it’s also important to note that this is just a statement in a coalition agreement outlining future plans, he says.

“Nothing will change from one day to the next, and we wouldn’t expect it to,” Sander continued, noting that the city would also be waiting for ongoing software contracts to expire. “But the next time there is a new contract, we believe it should involve free software.”

Any such step-by-step transition can be expected to take years. But it is also possible that Munich will be able to move faster than most because they are not starting from zero, Sander noted. It can be assumed that some LiMux software is still in use and that some of the staff there would have used it before.

[…]

Source: Linux not Windows: Why Munich is shifting back from Microsoft to open source – again | ZDNet

Command and Conquer Tiberium Dawn and Red Alert Source code Released by EA

Remaster Update and Open Source / Mod Support
by u/EA_Jimtern in r/commandandconquer

Today we are proud to announce that alongside the launch of the Remastered Collection, Electronic Arts will be releasing the TiberianDawn.dll and RedAlert.dll and their corresponding source code under the GPL version 3.0 license. This is a key moment for Electronic Arts, the C&C community, and the gaming industry, as we believe this will be one of the first major RTS franchises to open source their source code under the GPL. It’s worth noting this initiative is the direct result of a collaboration between some of the community council members and our teams at EA. After discussing with the council members, we made the decision to go with the GPL license to ensure compatibility with projects like CnCNet and Open RA. Our goal was to deliver the source code in a way that would be truly beneficial for the community, and we hope this will enable amazing community projects for years to come.

So, what does it mean for Mod Support within the Remastered Collection? Along with the inclusion of a new Map Editor, these open-source DLLs should assist users to design maps, create custom units, replace art, alter gameplay logic, and edit data. The community council has already been playing with the source code and are posting some fun experiments in our Discord channel. But to showcase a tangible example of what you can do with the software, Petroglyph has actually created a new modded unit to play with. So we asked a fun question – “What would the Brotherhood of Nod do if they captured the Mammoth Tank?” Well, one guess is they’d replace the turret with a giant artillery cannon and have it fire tactical nukes! Thus the Nuke Tank was born. This is a unit which is fully playable in the game via a mod (seen in the screenshot above), and we hope to have it ready to play and serve as a learning example when the game launches.

Oil Crash Busted Broker’s Computers and Inflicted Big Losses

Syed Shah usually buys and sells stocks and currencies through his Interactive Brokers account, but he couldn’t resist trying his hand at some oil trading on April 20, the day prices plunged below zero for the first time ever. The day trader, working from his house in a Toronto suburb, figured he couldn’t lose as he spent $2,400 snapping up crude at $3.30 a barrel, and then 50 cents. Then came what looked like the deal of a lifetime: buying 212 futures contracts on West Texas Intermediate for an astonishing penny each.

What he didn’t know was oil’s first trip into negative pricing had broken Interactive Brokers Group Inc. Its software couldn’t cope with that pesky minus sign, even though it was always technically possible — though this was an outlandish idea before the pandemic — for the crude market to go upside down. Crude was actually around negative $3.70 a barrel when Shah’s screen had it at 1 cent. Interactive Brokers never displayed a subzero price to him as oil kept diving to end the day at minus $37.63 a barrel.

At midnight, Shah got the devastating news: he owed Interactive Brokers $9 million. He’d started the day with $77,000 in his account.

“I was in shock,” the 30-year-old said in a phone interview. “I felt like everything was going to be taken from me, all my assets.”

Breach of zero burned some Interactive Brokers customers

To be clear, investors who were long those oil contracts had a brutal day, regardless of what brokerage they had their account in. What set Interactive Brokers apart, though, is that its customers were flying blind, unable to see that prices had turned negative, or in other cases locked into their investments and blocked from trading. Compounding the problem, and a big reason why Shah lost an unbelievable amount in a few hours, is that the negative numbers also blew up the model Interactive Brokers used to calculate the amount of margin — aka collateral — that customers needed to secure their accounts.

Thomas Peterffy, the chairman and founder of Interactive Brokers, says the journey into negative territory exposed bugs in the company’s software. “It’s a $113 million mistake on our part,” the 75-year-old billionaire said in an interview Wednesday. Since then, his firm revised its maximum loss estimate to $109.3 million. It’s been a moving target from the start; on April 21, Interactive Brokers figured it was down $88 million from the incident.

Customers will be made whole, Peterffy said. “We will rebate from our own funds to our customers who were locked in with a long position during the time the price was negative any losses they suffered below zero.”

[…]

Besides locking up because of negative prices, a second issue concerned the amount of money Interactive Brokers required its customers to have on hand in order to trade. Known as margin, it’s a vital risk measure to ensure traders don’t lose more than they can afford. For the 212 oil contracts Shah bought for 1 cent each, the broker only required his account to have $30 of margin per contract. It was as if Interactive Brokers thought the potential loss of buying at one cent was one cent, rather than the almost unlimited downside that negative prices imply, he said.
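To see how a margin model keyed off the contract price alone can go so wrong, consider a deliberately naive sketch. This is illustrative only, not Interactive Brokers' actual model; the contract size of 1,000 barrels is the standard WTI futures specification.

```python
BARRELS_PER_CONTRACT = 1_000  # standard WTI crude futures contract size

def naive_margin(price: float, contracts: int, rate: float = 0.1) -> float:
    """Margin as a fraction of notional value: it shrinks toward zero
    as the price does, which is exactly the blind spot."""
    return price * BARRELS_PER_CONTRACT * contracts * rate

def true_exposure(entry_price: float, floor_price: float, contracts: int) -> float:
    """Actual loss if the market can trade all the way down to floor_price."""
    return (entry_price - floor_price) * BARRELS_PER_CONTRACT * contracts
```

Buying 212 contracts at one cent looks nearly risk-free to the naive model (about $212 of notional-based margin in this sketch), yet if the market can reach the day's settlement of minus $37.63, the exposure is roughly $7.98 million, in the neighborhood of the $9 million Shah ended up owing.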

“It seems like they didn’t know it could happen,” Shah said.

But it was known industrywide that CME Group Inc.’s benchmark oil contracts could go negative. Five days before the mayhem, the owner of the New York Mercantile Exchange, where the trading took place, sent a notice to all its clearing-member firms advising them that they could test their systems using negative prices. “Effective immediately, firms wishing to test such negative futures and/or strike prices in their systems may utilize CME’s ‘New Release’ testing environments” for crude oil, the exchange said.

Interactive Brokers got that notice, Peterffy said. But he says the firm needed more time to upgrade its trading platform.

Source: How to Trade Oil With Negative Prices: Interactive Brokers – Bloomberg

Nervous, Adobe? It took 16 years, but open-source vector graphics editor Inkscape v1.0 now works properly on macOS

Open-source, cross-platform vector drawing package Inkscape has reached its version 1.0 milestone after many years of development.

Inkscape can be seen as an alternative to commercial products such as Adobe Illustrator or Serif Affinity Designer – though unlike Inkscape, neither of those run on Linux. The native format of Inkscape is SVG (Scalable Vector Graphics), the web standard.

[…]

Inkscape 1.0 is most significant for Mac users. Previous releases for macOS required a compatibility component called XQuartz, which enables applications designed for the X windowing system to run on macOS Quartz, part of Apple’s Core Graphics framework. This is no longer required and Inkscape 1.0 is now a native macOS application – though it is not all good news. The announcement noted: “This latest version is labelled as ‘preview’, which means that additional improvements are scheduled for the next versions.”

[…]

Inkscape 1.0 seems polished and professional. Adobe, which sells Illustrator on a subscription basis starting at £19 (if you inhale the rest of the Creative Cloud), will likely not be worried, but apart from the cost saving there are advantages in simpler applications that are relatively lightweight and easy to learn, as well as running well on Linux.

Source: Nervous, Adobe? It took 16 years, but open-source vector graphics editor Inkscape now works properly on macOS • The Register

Google Lens can now copy and paste handwritten notes to your computer

Google has added a very useful feature to Google Lens, its multipurpose object recognition tool. You can now copy and paste handwritten notes from your phone to your computer with Lens, though it only works if your handwriting is neat enough.

In order to use the new feature, you need to have the latest version of Google Chrome as well as the standalone Google Lens app on Android or the Google app on iOS (where Lens can be accessed through a button next to the search bar). You’ll also need to be logged in to the same Google account on both devices.

That done, simply point your camera at any handwritten text, highlight it on-screen, and select copy. You can then go to any document in Google Docs, hit Edit, and then Paste to paste the text. And voila — or, viola, depending on your handwriting.

[Gif: Google. Copying and pasting with Google Lens.]

In our tests, the feature was pretty hit or miss. If you don’t write neatly, you’ll definitely get some typos. But it’s still a cool feature that’s especially useful at a time when a lot of people are now working from home and relying on endless to-do lists to bring some sense of order to their day.

Source: Google Lens can now copy and paste handwritten notes to your computer – The Verge

Mac Image Capture App Eats up your space

If you’ve been wondering why the free space on your Mac keeps getting smaller, and smaller, and smaller—even if you haven’t been using your Mac all that much—there’s a quirky bug with Apple’s Image Capture app that could be to blame.

According to a recent blog post from NeoFinder, you should resist the urge to use the Image Capture app to transfer photos from connected devices to your desktop or laptop. If you do, and you happen to uncheck the “keep originals” button because you want the app to convert your .HEIC images to friendlier .JPEGs, the bug kicks in:

Apple’s Image Capture will then happily convert the HEIF files to JPG format for you when they are copied to your Mac. But what it also does is add 1.5 MB of totally empty data to every single photo file it creates! We found that massive bug by pure chance when working on further improving the metadata editing capabilities in NeoFinder, using the hex editor Hex Fiend.

They continue:

Of course, this is a colossal waste of space, especially considering that Apple is seriously still selling new Macs with a ridiculously tiny 128 GB internal SSD. Such a small disk is quickly filled with totally wasted empty data.

With just 1000 photos, for example, this bug eats 1.5 GB off your precious and very expensive SSD disk space.

We have notified Apple of this new bug that was already present in macOS 10.14.6, and maybe they will fix it this time without adding yet additional new bugs in the process.

So, what are your options? First off, you don’t have to use the Image Capture app. Unless you’re transferring a huge batch of photos over, you could just sync your iPhone or iPad’s photo library to iCloud, and do the same on your Mac, to view anything you’ve shot. If that’s not an option, you could always just AirDrop your photos over to your Mac, too, or simply use Photos instead of Image Capture (if possible).
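The padding NeoFinder describes is easy to check for yourself: a valid JPEG ends with the EOI marker FF D9, so any long run of zero bytes after it is wasted space. A small sketch (the file contents below are synthetic, not real camera output):

```python
def trailing_zero_bytes(data: bytes) -> int:
    """Count the zero bytes padding the end of a file's contents."""
    stripped = data.rstrip(b"\x00")
    return len(data) - len(stripped)

def wasted_space(files: list[bytes]) -> int:
    """Total padding across a batch of converted photos."""
    return sum(trailing_zero_bytes(f) for f in files)

# A synthetic "converted" JPEG: image data ending in the FF D9 EOI marker,
# followed by the kind of empty padding the bug reportedly appends.
padded_photo = b"\xff\xd8 image data \xff\xd9" + b"\x00" * 1_500_000
```

At 1.5 MB of padding per file, summing `wasted_space` over a thousand such photos reproduces the article's arithmetic: about 1.5 GB gone.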

Source: How to Keep the Image Capture App From Eating Up Space on Your Mac

PSA: New Character Bug in Messages Causing iOS Devices to Crash [Updated]

There appears to be a new character-linked bug in Messages, Mail, and other apps that can cause the iPhone, iPad, Mac, and Apple Watch to crash when receiving a specific string of characters.


In this particular case, the character string involves the Italian flag emoji along with characters in the Sindhi language, and it appears the system crash happens when an incoming notification is received with the problem-causing characters.

Based on information shared on Reddit, the character string began circulating on Telegram, but has also been found on Twitter.

These kinds of device-crashing character bugs surface every so often and sometimes become widespread, leading to a significant number of people ending up with a malfunctioning iPhone, iPad, or Mac. In 2018, for example, a character string in the Telugu language circulated around the internet, crashing thousands of devices before Apple addressed the problem in an iOS update.

There is often no way to prevent these characters from causing crashes and freezes when sent by a malicious person, and crashes triggered by notifications often force operating system re-springs and, in some cases, a restore of the device in DFU mode.

MacRumors readers should be aware that such a bug is circulating, and for those who are particularly concerned, as this bug appears to impact notifications, turning off notifications may mitigate the effects. Apple typically fixes these character bugs within a few days to a week.

Update: According to MacRumors reader Adam, who tested the bug on a device running iOS 13.4.5, the issue is fixed in the second beta of that update.

Source: PSA: New Character Bug in Messages Causing iOS Devices to Crash [Updated] – MacRumors

Windows 10 Update: Would You Like Deleted Files And Blue Screens With That?

As users complain of blue screens of death, deleted files and reboot loops, here’s what you need to know about this Windows 10 update.

There’s a lot of truth in the notion that you can’t please all the people all of the time, as Microsoft knows only too well. With Windows 10 now installed on more than one billion devices, there will always be a wide variation in terms of user satisfaction. One area where this variation can be seen perhaps most clearly is that of updates.

[…]

The problems those users are reporting to the Microsoft support forums and on social media have included the installation failing and looping back to restart again, the dreaded Blue Screen of Death (BSOD) following a “successful” update, and computers that simply refuse to boot again afterward. Among the more common complaints after a Windows 10 update were issues related to Bluetooth and Wi-Fi connectivity. But there have also been users complaining that, after a restart, all files on the C drive had been deleted.

[…]

Microsoft asks that any users experiencing problems use the Windows + F keyboard shortcut, or select Feedback Hub from the Start menu, to provide feedback so it can investigate.

More practically speaking, if you are experiencing any Windows Update issues, I would always suggest you head for the Windows Update Troubleshooter. This, more often than not, fixes any error code problems. Be warned, though: I have known it to take more than one run of the troubleshooter before updates are all successfully installed, so do persevere.

Source: Windows 10 Update: Would You Like Deleted Files And Blue Screens With That?