Microsoft PowerToys – customise your Windows experience

Microsoft PowerToys is a set of utilities for power users to tune and streamline their Windows experience for greater productivity.

Always on Top

Always on Top screenshot

Always on Top enables you to pin windows on top of all other windows with a quick key shortcut (⊞ Win+Ctrl+T).

PowerToys Awake

PowerToys Awake screenshot

PowerToys Awake is designed to keep a computer awake without having to manage its power & sleep settings. This behavior can be helpful when running time-consuming tasks, ensuring that the computer does not go to sleep or turn off its screens.

Color Picker

ColorPicker screenshot

ColorPicker is a system-wide color picking utility activated with Win+Shift+C. Pick colors from any currently running application; the picker automatically copies the color to your clipboard in a set format. Color Picker also contains an editor that shows a history of previously picked colors, allows you to fine-tune the selected color, and lets you copy different string representations. This code is based on Martin Chrzan’s Color Picker.

FancyZones

FancyZones screenshot

FancyZones is a window manager that makes it easy to create complex window layouts and quickly position windows into those layouts.

File Explorer add-ons

File Explorer screenshot

File Explorer add-ons enable preview pane rendering in File Explorer to display SVG icons (.svg), Markdown (.md) and PDF file previews. To enable the preview pane, select the “View” tab in File Explorer, then select “Preview Pane”.

Image Resizer

Image Resizer screenshot

Image Resizer is a Windows Shell extension for quickly resizing images. With a simple right click from File Explorer, resize one or many images instantly. This code is based on Brice Lambson’s Image Resizer.

Keyboard Manager

Keyboard Manager screenshot

Keyboard Manager allows you to customize the keyboard to be more productive by remapping keys and creating your own keyboard shortcuts. This PowerToy requires Windows 10 1903 (build 18362) or later.

Mouse utilities

Mouse utilities screenshot

Mouse utilities add functionality to enhance your mouse and cursor. With Find My Mouse, quickly locate your mouse’s position with a spotlight that focuses on your cursor. This feature is based on source code developed by Raymond Chen.

PowerRename

PowerRename screenshot

PowerRename enables you to perform bulk renaming, searching and replacing file names. It includes advanced features, such as using regular expressions, targeting specific file types, previewing expected results, and the ability to undo changes. This code is based on Chris Davis’s SmartRename.

PowerToys Run

PowerToys Run screenshot

PowerToys Run can help you search and launch your app instantly – just press the shortcut Alt+Space and start typing. It is open source and modular for additional plugins. Window Walker is now included as well. This PowerToy requires Windows 10 1903 (build 18362) or later.

Shortcut Guide

Shortcut Guide screenshot

The Windows key shortcut guide appears when a user presses ⊞ Win+Shift+/ (or, as we like to think of it, ⊞ Win+?) and shows the available shortcuts for the current state of the desktop. You can also change the setting so that the guide appears when you press and hold ⊞ Win.

Video Conference Mute

Video Conference Mute screenshot

Video Conference Mute is a quick way to globally “mute” both your microphone and camera using ⊞ Win+Shift+Q while on a conference call, regardless of the application that currently has focus. This requires Windows 10 1903 (build 18362) or later.

Source: Microsoft PowerToys | Microsoft Docs

Android will soon let you archive apps to save space

[…]

Google announced today it’s working on a new feature it estimates will reduce the space some apps take up by approximately 60 percent. Best of all, your personal data won’t be affected. The feature is called app archiving and will arrive later this year. Rather than uninstalling an app completely, it instead temporarily removes some parts of it and generates a new type of Android Package known as an archived APK. That package preserves your data until the moment you restore the app to its former form.

“Once launched, archiving will deliver great benefits to both users and developers. Instead of uninstalling an app, users would be able to ‘archive’ it – free up space temporarily and be able to re-activate the app quickly and easily,” the company said. “Developers can benefit from fewer uninstalls and substantially lower friction to pick back up with their favorite apps.”

[…]

Source: Android will soon let you archive apps to save space | Engadget

The Alternative to Web Scraping. The “lazy” programmer’s guide to… | by Doug Guthrie

One of the better sites for financial data is Yahoo Finance. This makes it a prime target for web scraping by finance enthusiasts. There are nearly daily questions on StackOverflow that reference some sort of data retrieval (oftentimes through web scraping) from Yahoo Finance.

Web Scraping Problem #1

trying to test a code that scrap from yahoo finance

I’m a python beginner but I like to learn the language by testing it and trying it. so there is a yahoo web scraper…

stackoverflow.com

The OP is trying to find the current price for a specific stock, Facebook. Their code is below:
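
(The embedded gist isn’t reproduced in this excerpt; the following is a minimal sketch of that style of solution, assuming requests and BeautifulSoup. The selector is hypothetical, since Yahoo’s markup changes frequently.)

import requests
from bs4 import BeautifulSoup

url = "https://finance.yahoo.com/quote/FB?p=FB"
html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text
soup = BeautifulSoup(html, "html.parser")

# Hypothetical anchor for the price element -- the exact tag/attribute
# the OP used will differ, since Yahoo's markup changes often.
price = soup.find("fin-streamer", {"data-field": "regularMarketPrice"})
print("the current price:", price.text if price else "not found")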

And that code produced the following output:

the current price: 216.08

It’s a pretty simple problem with an equally simple web scraping solution. However, it’s not lazy enough. Let’s look at the next one.

Web Scraping Problem #2

Web Scraping Yahoo Finance Statistics — Code Errors Out on Empty Fields

I found this useful code snippet: Web scraping of Yahoo Finance statistics using BS4 I have simplified the code as per…

stackoverflow.com

The OP is trying to extract data from the statistics tab: the stock’s enterprise value and the number of shares short. His problem actually revolves around retrieving nested dictionary values that may or may not be there, but he seems to have found a better way of retrieving the data.

Take a look at line 3: the OP was able to find the data he’s looking for inside a variable in the javascript:

root.App.main = { .... };

From there, the data is retrieved pretty simply by accessing the appropriate nested keys within the dictionary, data. But, as you may have guessed, there is a simpler, lazier solution.

Lazy Solution #1

Look at the URL on line 3
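
(The original embedded code isn’t reproduced in this excerpt; below is a minimal reconstruction with the URL on line 3. The endpoint matches the quoteSummary path uncovered later in this article, though Yahoo has since tightened access and a request today may also require consent cookies or a “crumb”.)

import requests

url = "https://query1.finance.yahoo.com/v10/finance/quoteSummary/FB?modules=price"
data = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).json()

# Walk the nested keys visible in the output below.
price = data["quoteSummary"]["result"][0]["price"]["regularMarketPrice"]["raw"]
print("the current price:", price)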

Output:

{
    'quoteSummary': {
        'error': None,
        'result': [{
            'price': {
                'averageDailyVolume10Day': {},
                'averageDailyVolume3Month': {},
                'circulatingSupply': {},
                'currency': 'USD',
                'currencySymbol': '$',
                'exchange': 'NMS',
                'exchangeDataDelayedBy': 0,
                'exchangeName': 'NasdaqGS',
                'fromCurrency': None,
                'lastMarket': None,
                'longName': 'Facebook, Inc.',
                'marketCap': {
                    'fmt': '698.42B',
                    'longFmt': '698,423,836,672.00',
                    'raw': 698423836672
                },
                'marketState': 'REGULAR',
                'maxAge': 1,
                'openInterest': {},
                'postMarketChange': {},
                'postMarketPrice': {},
                'preMarketChange': {
                    'fmt': '-0.90',
                    'raw': -0.899994
                },
                'preMarketChangePercent': {
                    'fmt': '-0.37%',
                    'raw': -0.00368096
                },
                'preMarketPrice': {
                    'fmt': '243.60',
                    'raw': 243.6
                },
                'preMarketSource': 'FREE_REALTIME',
                'preMarketTime': 1594387780,
                'priceHint': {
                    'fmt': '2',
                    'longFmt': '2',
                    'raw': 2
                },
                'quoteSourceName': 'Nasdaq Real Time '
                'Price',
                'quoteType': 'EQUITY',
                'regularMarketChange': {
                    'fmt': '0.30',
                    'raw': 0.30160522
                },
                'regularMarketChangePercent': {
                    'fmt': '0.12%',
                    'raw': 0.0012335592
                },
                'regularMarketDayHigh': {
                    'fmt': '245.49',
                    'raw': 245.49
                },
                'regularMarketDayLow': {
                    'fmt': '239.32',
                    'raw': 239.32
                },
                'regularMarketOpen': {
                    'fmt': '243.68',
                    'raw': 243.685
                },
                'regularMarketPreviousClose': {
                    'fmt': '244.50',
                    'raw': 244.5
                },
                'regularMarketPrice': {
                    'fmt': '244.80',
                    'raw': 244.8016
                },
                'regularMarketSource': 'FREE_REALTIME',
                'regularMarketTime': 1594410026,
                'regularMarketVolume': {
                    'fmt': '19.46M',
                    'longFmt': '19,456,621.00',
                    'raw': 19456621
                },
                'shortName': 'Facebook, Inc.',
                'strikePrice': {},
                'symbol': 'FB',
                'toCurrency': None,
                'underlyingSymbol': None,
                'volume24Hr': {},
                'volumeAllCurrencies': {}
            }
        }]
    }
}

the current price: 241.63

Lazy Solution #2

Again, look at the URL on line 3
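
(Again a reconstruction with the URL on line 3; the only changes from Lazy Solution #1 are the symbol and the modules parameter.)

import requests

url = "https://query2.finance.yahoo.com/v10/finance/quoteSummary/AGL.AX?modules=defaultKeyStatistics"
data = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).json()

# Empty fields come back as {} rather than erroring out, so .get()
# with a default handles the OP's original problem.
stats = data["quoteSummary"]["result"][0]["defaultKeyStatistics"]
print({"AGL.AX": {
    "Enterprise Value": stats["enterpriseValue"].get("fmt", "N/A"),
    "Shares Short": stats["sharesShort"].get("fmt", "N/A"),
}})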

Output:

{
    'quoteSummary': {
        'result': [{
            'defaultKeyStatistics': {
                'maxAge': 1,
                'priceHint': {
                    'raw': 2,
                    'fmt': '2',
                    'longFmt': '2'
                },
                'enterpriseValue': {
                    'raw': 13677747200,
                    'fmt': '13.68B',
                    'longFmt': '13,677,747,200'
                },
                'forwardPE': {},
                'profitMargins': {
                    'raw': 0.07095,
                    'fmt': '7.10%'
                },
                'floatShares': {
                    'raw': 637754149,
                    'fmt': '637.75M',
                    'longFmt': '637,754,149'
                },
                'sharesOutstanding': {
                    'raw': 639003008,
                    'fmt': '639M',
                    'longFmt': '639,003,008'
                },
                'sharesShort': {},
                'sharesShortPriorMonth': {},
                'sharesShortPreviousMonthDate': {},
                'dateShortInterest': {},
                'sharesPercentSharesOut': {},
                'heldPercentInsiders': {
                    'raw': 0.0025499999,
                    'fmt': '0.25%'
                },
                'heldPercentInstitutions': {
                    'raw': 0.31033,
                    'fmt': '31.03%'
                },
                'shortRatio': {},
                'shortPercentOfFloat': {},
                'beta': {
                    'raw': 0.365116,
                    'fmt': '0.37'
                },
                'morningStarOverallRating': {},
                'morningStarRiskRating': {},
                'category': None,
                'bookValue': {
                    'raw': 12.551,
                    'fmt': '12.55'
                },
                'priceToBook': {
                    'raw': 1.3457094,
                    'fmt': '1.35'
                },
                'annualReportExpenseRatio': {},
                'ytdReturn': {},
                'beta3Year': {},
                'totalAssets': {},
                'yield': {},
                'fundFamily': None,
                'fundInceptionDate': {},
                'legalType': None,
                'threeYearAverageReturn': {},
                'fiveYearAverageReturn': {},
                'priceToSalesTrailing12Months': {},
                'lastFiscalYearEnd': {
                    'raw': 1561852800,
                    'fmt': '2019-06-30'
                },
                'nextFiscalYearEnd': {
                    'raw': 1625011200,
                    'fmt': '2021-06-30'
                },
                'mostRecentQuarter': {
                    'raw': 1577750400,
                    'fmt': '2019-12-31'
                },
                'earningsQuarterlyGrowth': {
                    'raw': 0.114,
                    'fmt': '11.40%'
                },
                'revenueQuarterlyGrowth': {},
                'netIncomeToCommon': {
                    'raw': 938000000,
                    'fmt': '938M',
                    'longFmt': '938,000,000'
                },
                'trailingEps': {
                    'raw': 1.434,
                    'fmt': '1.43'
                },
                'forwardEps': {},
                'pegRatio': {},
                'lastSplitFactor': None,
                'lastSplitDate': {},
                'enterpriseToRevenue': {
                    'raw': 1.035,
                    'fmt': '1.03'
                },
                'enterpriseToEbitda': {
                    'raw': 6.701,
                    'fmt': '6.70'
                },
                '52WeekChange': {
                    'raw': -0.17621362,
                    'fmt': '-17.62%'
                },
                'SandP52WeekChange': {
                    'raw': 0.045882702,
                    'fmt': '4.59%'
                },
                'lastDividendValue': {},
                'lastCapGain': {},
                'annualHoldingsTurnover': {}
            }
        }],
        'error': None
    }
}

{'AGL.AX': {'Enterprise Value': '13.73B', 'Shares Short': 'N/A'}}

The lazy alternatives simply altered the request from utilizing the front-end URL to a somewhat unofficial API endpoint, which returns JSON data. It’s simpler and results in more data! What about speed though (pretty sure I promised simpler, more data, and a faster alternative)? Let’s check:

web scraping #1 min time is 0.5678426799999997
lazy #1 min time is 0.11238783999999953
web scraping #2 min time is 0.3731000199999997
lazy #2 min time is 0.0864451399999993

The lazy alternatives are 4x to 5x faster than their web scraping counterparts!
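
(For reference, “min time” numbers like these can be produced with timeit; scrape_price and lazy_price below are hypothetical stand-ins for the solutions timed above.)

import timeit

def scrape_price():
    ...  # hypothetical: the BeautifulSoup approach from Problem #1

def lazy_price():
    ...  # hypothetical: the quoteSummary request from Lazy Solution #1

print("web scraping #1 min time is",
      min(timeit.repeat(scrape_price, number=1, repeat=10)))
print("lazy #1 min time is",
      min(timeit.repeat(lazy_price, number=1, repeat=10)))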

You might be thinking though, “That’s great, but where did you find those URLs?”.

The Lazy Process

Think about the two problems we walked through above: the OPs were trying to retrieve the data after it had been loaded into the page. The lazier solutions went right to the source of the data and didn’t bother with the front-end page at all. This is an important distinction and, I think, a good approach whenever you’re trying to extract data from a website.

Step 1: Examine XHR Requests

An XHR (XMLHttpRequest) object is an API available to web browser scripting languages such as JavaScript. It is used to send HTTP or HTTPS requests to a web server and load the server response data back into the script. Basically, it allows the client to retrieve data from a URL without having to do a full page refresh.

I’ll be using Chrome for the following demonstrations, but other browsers will have similar functionality.

  • If you’d like to follow along, navigate to https://finance.yahoo.com/quote/AAPL?p=AAPL
  • Open Chrome’s developer console. To open the developer console in Google Chrome, open the Chrome Menu in the upper-right-hand corner of the browser window and select More Tools > Developer Tools. You can also use the shortcut Option + ⌘ + J (on macOS), or Shift + CTRL + J (on Windows/Linux).
  • Select the “Network” tab

  • Then filter the results by “XHR”

  • Your results will be similar but not the same. You should notice though that there are a few requests that contain “AAPL”. Let’s start by investigating those. Click on one of the links in the left-most column that contain the characters “AAPL”.

  • After selecting one of the links, you’ll see an additional screen that provides details into the request you selected. The first tab, Headers, provides details into the request made by the browser and the response from the server. Immediately, you should notice the Request URL in the Headers tab is very similar to what was provided in the lazy solutions above. Seems like we’re on the right track.
  • If you select the Preview tab, you’ll see the data returned from the server.

  • Perfect! It looks like we just found the URL to get OHLC data for Apple!
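
(As a hedged sketch, not taken from the article: OHLC requests like the one found here typically resolve to Yahoo’s v8 “chart” endpoint, which can be queried directly; the parameters below are illustrative.)

import requests

url = "https://query1.finance.yahoo.com/v8/finance/chart/AAPL"
params = {"range": "1d", "interval": "5m"}
resp = requests.get(url, params=params, headers={"User-Agent": "Mozilla/5.0"})
result = resp.json()["chart"]["result"][0]
print(result["meta"]["symbol"], result["meta"]["regularMarketPrice"])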

Step 2: Search

Now that we’ve found some of the XHR requests that are made via the browser, let’s search the javascript files to see if we can find any more information. The commonalities I’ve found with the URLs relevant to the XHR requests are “query1” and “query2”. In the top-right corner of the developer’s console, select the three vertical dots and then select “Search” in the dropdown.

Search for “query2” in the search bar:

Select the first option. An additional tab will pop-up containing where “query2” was found. You should notice something similar here as well:

It’s the same variable that web scraping solution #2 targeted to extract its data. The console should give you an option to “pretty-print” the variable. You can either select that option or copy and paste the entire line (line 11 above) into something like https://beautifier.io/; or, if you use VS Code, download the Beautify extension, which will do the same thing. Once it’s formatted appropriately, paste the entire code into a text editor or something similar and search for “query2” again. You should find one result inside something called “ServicePlugin”. That section contains the URLs that Yahoo Finance utilizes to populate data in its pages. The following is taken right out of that section:

"tachyon.quoteSummary": {"path": "\u002Fv10\u002Ffinance\u002FquoteSummary\u002F{symbol}","timeout": 6000,"query": ["lang", "region", "corsDomain", "crumb", "modules",     "formatted"],"responseField": "quoteSummary","get": {"formatted": true}},

This is the same URL that is utilized in the lazy solutions provided above.
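
The \u002F sequences are just escaped forward slashes, so the path decodes to /v10/finance/quoteSummary/{symbol}. A quick sketch of pairing the template with the query2 host found above:

# "\u002F" is simply an escaped "/" in both JavaScript and Python strings.
path_template = "\u002Fv10\u002Ffinance\u002FquoteSummary\u002F{symbol}"
url = "https://query2.finance.yahoo.com" + path_template.format(symbol="AAPL")
print(url)  # https://query2.finance.yahoo.com/v10/finance/quoteSummary/AAPL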

TL;DR

  • While web scraping can be necessary because of how a website is structured, it’s worth investigating whether you can find the source of the data. The resulting code is simpler, and more data is extracted faster.
  • The source of a website’s data can often be found by searching through its XHR requests or its javascript files using your browser’s developer console.

More Information

  • What if you can’t find any XHR requests? Check out The Alternative to Web Scraping, Part II: The DRY approach to retrieving web data

The Alternative to Web Scraping, Part II

The DRY approach to retrieving web data

towardsdatascience.com

  • If you’re interested specifically in the Yahoo Finance aspect of this article, I’ve written a python package, yahooquery, that exposes most of those endpoints in a convenient interface. I’ve also written an introductory article that describes how to use the package as well as a comparison to a similar one.

The (Unofficial) Yahoo Finance API

A Python interface to endless amounts of data

towardsdatascience.com

  • Please feel free to reach out if you have any questions or comments

Source: The Alternative to Web Scraping. The “lazy” programmer’s guide to… | by Doug Guthrie | Towards Data Science

Developer Bricks Open-Source Apps Colors and Faker – used in 20k projects – no reason given, world of crazy

The eccentric developer behind two immensely popular open-source NPM coding libraries recently corrupted them both with a series of bizarre updates—a decision that has led to the bricking of droves of projects that relied upon them for support.

Marak Squires is the creator behind the popular JavaScript libraries Faker and Colors—the likes of which are key instruments for developers the world over. To give you an idea of how widely used these are, Colors reportedly sees more than 20 million downloads a week and Faker gets about 2 million. Suffice it to say, they see a lot of use.

However, Squires recently made the bizarre decision to mess all that up when he executed a number of malicious updates that sent the libraries haywire—taking a whole lot of dependent projects with them. In the case of Colors, Squires sent an update that caused its source code to go into an endless loop. This caused apps using it to emit the text “Liberty Liberty Liberty,” followed by a splurge of meaningless, garbled data, effectively crippling their functionality. With Faker, meanwhile, a new update was recently introduced that basically nuked the library’s entire code. Squires subsequently announced he would no longer be maintaining the program “for free.”

The whole episode, which sent developers that rely on both programs into panic mode, appears to have been first observed by researchers with Snyk, an open-source security company, as well as BleepingComputer.

[…]

The most perplexing thing about this whole episode is that it’s not entirely clear why Squires did this. Some online commentators attributed the decision to a blog post he published in 2020, in which he railed against big companies’ use of open-source code from developers like himself. It’s true that corporate America tends to cut fiscal corners by exploiting freely available coding tools (just look at the recent log4j debacle, for example), though, if you’re an open-source coder, you would ostensibly know and expect that.

Indeed, the way in which Squires blitzed his libraries seems to defy simple explanation. For one thing, the commits that messed with the libraries were accompanied by odd text files that, in the case of the Faker update, referenced Aaron Swartz. Swartz is a well-known computer programmer who was found dead in his apartment in 2013 of an apparent suicide. Squires also made a number of other odd public references to Swartz around the time of the malicious commits.

[…]

Source: Developer Bricks Open-Source Apps Colors and Faker, Causes Chaos

Ventoy – add an ISO to a USB drive and boot it (or any other ISO on it) without any configuration

Ventoy is an open source tool to create bootable USB drives for ISO/WIM/IMG/VHD(x)/EFI files.
With Ventoy, you don’t need to format the disk over and over; you just copy the ISO/WIM/IMG/VHD(x)/EFI files to the USB drive and boot them directly.
You can copy many files at a time and Ventoy will give you a boot menu to select them (screenshot).
x86 Legacy BIOS, IA32 UEFI, x86_64 UEFI, ARM64 UEFI and MIPS64EL UEFI are supported in the same way.
Most types of OS are supported (Windows/WinPE/Linux/ChromeOS/Unix/VMware/Xen…).
770+ image files have been tested (list), and 90%+ of the distros on distrowatch.com are supported (details).

Source: Ventoy

Discord is quietly building an app empire of bots – The Verge

Discord has been quietly building its own app platform based on bots over the past few years. More than 30 percent of Discord servers now use bots, and 430,000 of them are used every week across Discord by its 150 million monthly active users. Now that bots are an important part of Discord, the company is embracing them even further with the ability to search and browse for bots on Discord.

A new app discovery feature will start showing up in Discord in spring 2022. Verified apps and bots (which total around 12,000 right now) will be discoverable through this feature. Developers will be able to opt into discoverability, once they’re fully prepared for a new influx of users that can easily find their bots.

Bots are powerful on Discord, offering a range of customizations for servers. Discord server owners install bots on their servers to help moderate them or to offer mini-games and features to their communities. There are popular bots that will spit out memes on a daily basis, bots that even help you create your own bot, and music bots that let Discord users listen to tunes together.

[…]

Source: Discord is quietly building an app empire of bots – The Verge

Apple’s macOS Monterey memory leak blamed on custom cursors

Sleuthing leads to suspected RAM-gobbling culprit

Apple’s macOS Monterey, the iGiant’s latest desktop operating system release, turns out to have an insatiable appetite for memory if you use certain apps.

Shortly after the OS update was released on October 25, Apple customers – at least those who avoided installation woes – began to notice that certain apps gobbled an excessive amount of memory, so much so the programs would crash or quit.

There were reports of this sort for Adobe Creative Cloud apps, Microsoft Office, Cinema 4D, and Pages, to name but a few.

Mozilla’s Firefox was also affected – 79GB of memory is a lot, even for a browser known for memory consumption. Following an October 10 bug report, filed just prior to macOS Monterey’s release, Mozillans determined that Apple’s latest operating system was afflicted by a memory leak that occurs when an app uses a customized cursor.

“On macOS 12 Monterey, using a non-standard cursor size or colors causes a large memory leak in Firefox,” the bug report explains. “Firefox version 94 includes a fix that reduces the memory leak, but the problem can still occur. The problem has been reported to Apple and a fix is expected in a future update to macOS 12.”

[…]

Source: Apple’s macOS Monterey memory leak blamed on custom cursors

Palm OS: Reincarnate – Pumpkin OS

[pmig96] loves PalmOS and has set about the arduous task of reimplementing PalmOS from scratch, dubbing it Pumpkin OS. Pumpkin OS can run on x86 and ARM at native speed, as it is not an emulator; system calls are trapped and intercepted by Pumpkin OS. Because it doesn’t emulate, Palm apps currently need to be recompiled for x86, though support for apps that use ARMlets is hoped for soon. Since there are over 800 different system traps in PalmOS, he hasn’t implemented them all yet.

Generally speaking, his saving grace is that 80% of the apps only use 20% of the API. His starting point was a script that took the headers from the PalmOS SDK and converted them into functions with just a debug message letting him know that it isn’t implemented yet and a default return value. Additionally, [pmig96] is taking away some of the restrictions on the old PalmOS, such as being limited to only one running app at a time.
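
(A toy sketch of that kind of header-to-stub generator, not [pmig96]’s actual script: it assumes simple one-line C prototypes, where real SDK headers would need a proper parser.)

import re

# Turn one-line C prototypes into "not implemented" stubs with a debug
# message and a naive default return value.
PROTO = re.compile(r"^\s*([\w*]+)\s+(\w+)\s*\(([^)]*)\)\s*;")

def make_stub(prototype):
    m = PROTO.match(prototype)
    if not m:
        return None
    ret, name, args = m.groups()
    body = '  debug("%s not implemented");\n' % name
    if ret != "void":
        body += "  return 0;\n"  # naive default; real types need more care
    return "%s %s(%s) {\n%s}" % (ret, name, args, body)

print(make_stub("Err FrmDrawForm(FormType *formP);"))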

As if an x86 desktop version wasn’t enough, [pmig96] recompiled Pumpkin OS for a Raspberry Pi 4 with a ubiquitous 3.5″ 320×480 TFT SPI touch screen. Linux maps the TFT screen to a frame buffer (/dev/fb0 or /dev/fb1). He added a quick optimization to only draw areas that have changed, so that the SPI writes could be kept small to keep up the frame rate.

[pmig96] isn’t the only one trying to breathe some new life into PalmOS, and we hope to see more progress on PumpkinOS in the future.

Source: Palm OS: Reincarnate | Hackaday

Intel open-sources AI-powered tool to spot bugs in code

Intel today open-sourced ControlFlag, a tool that uses machine learning to detect problems in computer code — ideally to reduce the time required to debug apps and software. In tests, the company’s machine programming research team says that ControlFlag has found hundreds of defects in proprietary, “production-quality” software, demonstrating its usefulness.

[…]

ControlFlag, which works with any programming language containing control structures (i.e., blocks of code that specify the flow of control in a program), aims to cut down on debugging work by leveraging unsupervised learning. With unsupervised learning, an algorithm is subjected to “unknown” data for which no previously defined categories or labels exist. The machine learning system — ControlFlag, in this case — must teach itself to classify the data, processing the unlabeled data to learn from its inherent structure.

ControlFlag continually learns from unlabeled source code, “evolving” to make itself better as new data is introduced. While it can’t yet automatically mitigate the programming defects it finds, the tool provides suggestions for potential corrections to developers, according to Gottschlich.

[…]

AI-powered coding tools like ControlFlag, as well as platforms like Tabnine, Ponicode, Snyk, and DeepCode, have the potential to reduce costly interactions between developers, such as Q&A sessions and repetitive code review feedback. IBM and OpenAI are among the many companies investigating the potential of machine learning in the software development space. But studies have shown that AI has a ways to go before it can replace many of the manual tasks that human programmers perform on a regular basis.

Source: Intel open-sources AI-powered tool to spot bugs in code | VentureBeat

Knowbots – the first way to search, relevant again?

Back before the search engines, the internet was relatively small but still growing enough that it needed searching. Computers were slower, so the speed with which we now expect results from Google was impossible. In order to search the internet, Gopher users had Archie, Veronica and Jughead as well as lists of known Gopher servers – which linked to more lists of known Gopher servers. If you are interested in this system, it’s still online and a good place to start looking is http://gopher.floodgap.com/gopher/. Another way to search, however, was through Knowbots. These consist of several components:

  • A server (the “Knowbot Operating System”, or KOS) that runs on a host to enable it to run Knowbot programs. (In our terminology, this makes it a “Knowbot service station”.)
  • A distributed, replicated namespace server (the “worldroot server”) which provides the shared namespace used by Knowbot programs for navigation and environmental inquiries.
  • Tools to manage, create, submit, monitor and debug Knowbot programs.
  • Tools to monitor and control collections of Knowbot service stations.
  • A library of Python modules for use by Knowbot programs as well as by the support tools.

Usually to access a knowbot you telnet to a certain port and issue commands (and wait) or you email them and wait for a response.

The first knowbots to go up were the Knowbot Information Services (KIS), used to search for people.

The Knowbot Information Service (KIS) is another “white pages” service that performs a broad name search, checking MCI Mail, the X.500 White Pages Pilot Project, various Whois servers at various organizations (Whois is yet another directory service), and the UNIX finger command. It can be used either as a client program resident on your local machine, through e-mail, or by Telneting to a public server.

KIS uses subprograms called Knowbots to search for information. Each Knowbot looks for specific information from a site and reports back to the main program with the results.

Two hosts running KIS servers are info.cnri.reston.va.us and regulus.cs.bucknell.edu. You can access either one by electronic mail (send mail to netaddress@nri.reston.va.us, for instance) or using Telnet. (If you Telnet to a KIS server, you need to request port 185: instead of typing telnet regulus.cs.bucknell.edu, you’d actually type telnet regulus.cs.bucknell.edu 185.)

Because searching can take several minutes, I prefer to use the e-mail method; once KIS knows the results of the search, it mails them back to you.

In the body of your mail message to netaddress, put names of your associates, one per line. You may use first and last names or a login if you know them. Sending johnson will search the default list of directory servers for user johnson. Because KIS checks a predefined set of services, you do not need to supply an organization name to check for.

KIS also includes commands for narrowing your search and searching for an organization. For more help, include the word man in your e-mail to KIS or your interactive session.

Source: https://www.savetz.com/yic/YIC04FI_23.html

The University of Illinois had the following knowbot:

INTERNET ADDRESSES:
	nri.reston.va.us 185
	132.151.1.1 185
	sol.bucknell.edu 185
	134.82.1.8 185

DESCRIPTION:
	Knowbot is an useful information service for locating
someone with an Internet address. Knowbot does not
have its own "white pages" recording internet users like a
telephone book. However, Knowbot can access to other
information services that have their own "white pages"
and search for you. Commands to operate knowbot service
are easy but not very user friendly to first time users.

SERVICES:

Knowbot serves as a gateway for internet users in remote hosts by
sending searching commands to find someone in internet, receiving the
searching results and presenting results in a uniform format for the
user. However, very often the Knowbot search is fruitless, because
of the incomplete information of internet users.

Listed below are remote host accessible to Knowbot. They all have
their own users information pools.
	nic
	mcimail
	ripe
	x500
	finger
	nwhois
	mitwp
	quipu-country
	quipu-org
	ibm-whois
	site-contacts

LOGIN SEQUENCE:
	At system prompt, type 	telnet nri.reston.va.us 185
	systemprompt> 		telnet nri.reston.va.us 185

EXIT SEQUENCE:
	To exit Knowbot, type "quit" at the Knowbot prompt.
	 >quit
	
ACCESS COMMANDS:
	To enact command, type the command at Knowbot
	prompt,
	 >[command]
	 e.g. >help

	Access commands of Knowbot include:
	 >help
		to print a summary of Knowbot commands on
		screen

	 >man
		to print an on-line manual of Knowbot on screen

	 >quit
		to exit Knowbot information system

	 >[name]
		to start searching a name of person with internet
		address
		e.g. >Krol

	 >services
		to list all Knowbot accessible hosts

	 >service [hostname]
		to narrow the search service on a specific host
		e.g. > service nic

	 >org [organization]
		to narrow the search service on a specific
		organization
		e.g. >org University of Illinois

	 >country [country name]
		to narrow the search service on a specific country
		e.g. >country US

SAMPLE LOGIN:
	1. telnet to Knowbot at system prompt
		systemprompt> telnet nri.reston.va.us 185
		
	2. specify the organization of the person to be searched
		> org university of Illinois

	 and/or you may specify the host service
		> service nic
	
	3. type in the name to start searching
		> krol

	4. You may get the following result:

	Name:		Ed Krol
	Organization: 	University of Illinois
	Address:	 	Computing and Communications Service
			 	Office,195 DCL, 1304 West Springfield
			 	Avenue
	City:	 		Urbana
	State:	 	IL
	Country:	 	US
	Zip:	 	 	61801-4399
	Phone:	 	(217) 333-7886
	E-Mail:	 	Krol@UXC.CSO.UIUC.EDU
	Source:	 	whois@nic.ddn.mil
	Ident:	 	EK10
	Last Updated:	27-Nov-91

	5. exit Knowbot	
	 > quit

FRIENDLY ADVICE:
	Since there are no complete recordings of all Internet
	users, it is better not to expect to locate every internaut
	through Knowbot. However, the more you know about
	the person you want to locate, the easier the searching
	process, because you can narrow the search by specifying
	organization, country, or host of the person to be
	searched, which will save you a lot of time.

DOCUMENT AUTHORS: 	Hsien Hu
	 			Irma Garza

Source: https://www.ou.edu/research/electron/internet/knowbot.htm

These knowbots were developed before and during 1995 – NASA had plans for the Iliad knowbot (which gave me much better results than google, altavista, askjeeves or the other search engines of the time for specific tasks) back then.

Information Infrastructure Technology Applications (IITA) Program Annual K-12 Workshop, April 11–13 1995 (PDF): https://ntrs.nasa.gov/api/citations/19970006511/downloads/19970006511.pdf

or https://www.linkielist.com/wp-content/uploads/2021/10/NASA-knowbots-iliad-19970006511-1.pdf

Iliad was developed as a resource for blind people, but it was realised that it worked well for teachers too. By sending an email to iliad@prime.jsc.nasa.gov you would receive the following reply:


Your question has been received and is being processed by the ILIAD
knowbot.

Responses will be sent to the email address provided in the heading.

You can now specify

*outputtype: dwl

(document with links) to receive documents with embedded hot links in the
documents.

For example:

Subject: iliad query

*outputtype: dwl
?q: nasa jsc ltp

An example query response would consist of the documents found and a summary. It was surprisingly well curated. Here is an example summary:

Dear ILIAD User:

This is a summary of the documents sent to you by ILIAD in response to
your email question.  The number order of the summarized documents
corresponds to the number on the individual documents you received.

Your question was:


internet bots automated retrieval


Output Type: documents

 1)
"http://navigation.us.realnames.com/resolver.dll?action=resolution&charset
=utf-8&realname=TEKTRAN+%3A+USDA+Technology+Transfer+Automated+Retrieval+S
ystem&providerid=154"
    TEKTRAN : USDA Technology Transfer  Automated  Retrieval
    System   TEKTRAN : USDA Technology Transfer  Automated
    Retrieval System: TEKTRAN : USDA Technology Transfer
    Automated  Retrieval System Click on this
    Internet  Keyword to go directly to the TEKTRAN : USDA
    Technology Transfer  Automated  Retrieval System Web
    site. 1000,http://www.nal.usda.gov/ttic/tektran/tektran.html
    ( Internet Keyword).+\( (\S+).*\)  OCLC
    Internet  Cataloging Project Colloquium Field Report By
    Amanda Xu MIT Libraries When we joined the OCLC Intercat Project, our
    first concern was the feasibility of using MARC formats and AACR2 for
    describing and accessing  Internet  resources of various
    types. 999,http://www.oclc.org/oclc/man/colloq/xu.htm (
    WebCrawler)

 2) "http://www.botspot.com/faqs/article3.htm" BotSpot ® : The Spot
    for all  Bots  & Intelligent Agents   search botspot free
    newsletter  internet.com  internet.commerce PAGE 3 OF
    6 Beyond Browsing... Offline Web Agents by Joel T. Patz is an
excellent
    article comparing the current Offline Web Agents and giving detailed
    explanations and instructions including head-to-head feature
    charts and downloading
    sites. 888,http://www.botspot.com/faqs/article3.htm (
    WebCrawler)

 3) "http://www.insead.fr/CALT/Encyclopedia/ComputerSciences/Agents/"
    Agent Technologies   Agent
    technologies
789,http://www.insead.fr/CALT/Ency...pedia/ComputerSciences/Agents/
    ( WebCrawler)

 4) "http://lonestar.texas.net/disclaimers/aup.html" Acceptable
    Use Policy   Texas.Net Acceptable Use Policy In order for Texas
    Networking to keep your service the best it can be, we have a set of
    guidelines known as our "Acceptable Use Policy." These guidelines
    apply to all customers equally and covers dialup account usage as well
    as mail, news, and other
    services. 480,http://lonestar.texas.net/disclaimers/aup.html
    ( WebCrawler)

 5)
"http://navigation.us.realnames.com/resolver.dll?action=resolution&charset
=utf-8&realname=Automated+Traveller%27s+Internet+Site&providerid=154"
    Automated  Traveller's  Internet  Site
    Automated  Traveller's  Internet  Site: The
    Automated  Traveller-Discounted Airfares
    Worldwide Click on this  Internet  Keyword to go directly to
    the  Automated  Traveller's  Internet  Site Web
    site. 333,http://www.theautomatedtraveller.com/ ( Internet
    Keyword).+\( (\S+).*\)  This site provides you with an
    assortment of search devices along with their brief descriptions.
    Also, you will find recommendations for using specific research tools
    and their combinations that we have found more productive in our own
    research. 284,http://www.brint.com/Sites.htm ( WebCrawler)


The following references were not verified for uniqueness.
You can retrieve any these references by sending ILIAD an email
request in the following format:

        Subject: get url
        url: <the url name>

for example:

        Subject: get url
        url: http://prime.jsc.nasa.gov/iliad/index.html


If you want embedded hot links in the document add "*outputtype: dwl"
before the first url: line

for example:

	Subject: get url

	*outputtype: dwl
	url: http://prime.jsc.nasa.gov/index.html


 1) "http://gsd.mit.edu/~history/search/engine/history.html" A
    History of Search Engines   What's a Robot got to do with the
    Internet ? Other types of robots on the  Internet  push
    the interpretation of the  automated  task definition. The
    chatterbot variety is a perfect
    example. 681,http://gsd.mit.edu/~history/search/engine/history.html
    ( WebCrawler)

 2)
"http://navigation.us.realnames.com/resolver.dll?action=resolution&charset
=utf-8&realname=Automated+Information+Retrieval+Systems+%28AIRS%29&provide
rid=154"
    Automated  Information Retrieval Systems (AIRS)
    Automated  Information Retrieval Systems (AIRS):
    Automated  Information Retrieval Systems (AIRS) Click on
    this  Internet  Keyword to go directly to the  Automated
    Information Retrieval Systems (AIRS) Web
    site. 666,http://www.re-airs.com/ ( Internet Keyword)
    .+\( (\S+).*\)  The  Internet  Communications
    LanguageTM News Events Technology 30-October-1999: Linux World A
    REBOL Incursion It's not a scripting language, not a programming
    language -- and not a new Amiga,
    either. 584,http://www.rebol.com/inthenews.html (
    WebCrawler)

 3)
"http://www.pcai.com/pcai/New_Home_Page/ai_info/intelligent_agents.html"
    PC AI - Intelligent Agents   Requires Netscape 2.0 or later
    compatibility. Intelligent Agents execute tasks on behalf of a
    business process, computer application, or an
    individual.
384,http://www.pcai.com/pcai/New_H...i_info/intelligent_agents.html
    ( WebCrawler)

 4) "http://www.rci.rutgers.edu/~brcoll/search_engines.htm"
    Searching with Style   Motto for the Day: Hypberbole n:
    extravagant exaggeration; see also computer industry. Last Updated:
    November 10, 1996 Very few aspects of the Web are developing as fast
as
    the search engines, except for the sheer volume of
    information. 186,http://www.rci.rutgers.edu/~brcoll/search_engines.htm
    ( WebCrawler)

 5) "http://www.aci.net/kalliste/echelon/ic2000.htm" STOA Report:
    Interception Capabilities 2000   Interception Capabilities
    2000 Report to the Director General for Research of the European
    Parliament (Scientific and Technical Options Assessment programme
    office) on the development of surveillance technology and risk of
    abuse of economic
    information. 89,http://www.aci.net/kalliste/echelon/ic2000.htm
    ( WebCrawler)


Thank you for using ILIAD.  This marks the end of your results.


5 files passed analysis.

Search performed by metacrawler.


End of ILIAD Session ID: SEN38899
---------------------------------------------------------

Iliad could be searched through tenet and msstate and a few other providers:

You can use the well-known e-mail meta-finder ILIAD (Internet Library Information Access Device) knowbot, which can be reached at <iliad@msstate.edu> or <iliad@algol.jsc.nasa.gov>. You will receive instructions by sending a message with “startiliad” in the subject.

The query sent to the ILIAD server is forwarded to several of the largest search engines (e.g. AltaVista, Excite, InfoSeek, Lycos, WebCrawler, …); ILIAD removes duplicate and overly irrelevant documents from the results and sends the retrieved pages (without graphics) back within 15–20 minutes. You can also try ILIAD on the WWW, via the form at <http://www.tenet.edu/library/iliad.html>.

A list of email services can be found here, but it is copied below since these pages are going down pretty quickly.

Get webpages via eMail

Several years ago, when Internet connections were slow and the “www” was newly invented, many people only had email-restricted access to the Internet. That’s the origin of the “Agora” and “www4email” software. Some of these email robots are still available and we can use them to bypass Internet censorship. The best approach would be to subscribe to a free email provider which allows SSL connections (like https://www.fastmail.fm/, https://www.ziplip.com/, https://www.hushmail.com/, https://www.safe-mail.net/, https://www.mail2world.com/, https://www.webmails.com/ etc.) and use that account with the email addresses below. I put the field where you have to input the URL in brackets. It still works great for text, but of course there are big problems with images or even DHTML, JavaScript, Java, Flash etc. Other services besides www are also possible; for a very good tutorial on this see ftp://rtfm.mit.edu/pub/usenet/news.answers/internet-services/access-via-email. There is also a web based service at http://www.web2mail.com/. I again used www.web.freerk.com/c/ as an example because the URL is accessible at all times and the ‘.com’ in the original Google address is often treated as a .com DOS file by some computers and censorship systems. The www4mail software (http://www.www4mail.org/) is newer than the Agora software.
An email with just “help” in the subject line will get you a tutorial on how to use the service properly.

page@grabpage.org
[SUBJECT] url: http://www.web.freerk.com/c/
info: http://www.grabpage.org/

frames@pagegetter.com
[BODY] http://www.web.freerk.com/c/
info: http://www.pagegetter.com/
web@pagegetter.com
[BODY] http://www.web.freerk.com/c/
info: http://www.pagegetter.com/

webgate@vancouver-webpages.com
[BODY] get http://www.web.freerk.com/c/
info: http://vancouver-webpages.com/webgate/

webgate@vancouver-webpages.com
[BODY] mail http://www.web.freerk.com/c/
info: http://vancouver-webpages.com/webgate/

www4mail@wm.ictp.trieste.it
[BODY] http://www.web.freerk.com/c/
info: http://www.ictp.trieste.it/~www4mail/

www4mail@access.bellanet.org
[BODY] http://www.web.freerk.com/c/
info: http://www.bellanet.org/email.html

www4mail@kabissa.org
[BODY] http://www.web.freerk.com/c/
info: http://www.kabissa.org/members/www4mail/

www4mail@ftp.uni-stuttgart.de
[BODY] http://www.web.freerk.com/c/

www4mail@collaborium.org
[BODY] http://www.web.freerk.com/c/
info: http://www.collaborium.org/~www4mail/

binky@junoaccmail.org
[BODY] url http://www.web.freerk.com/c/
info: http://boas.anthro.mnsu.edu/

iliad@prime.jsc.nasa.gov
[SUBJECT] GET URL
[BODY] url:http://www.web.freerk.com/c/
info: http://prime.jsc.nasa.gov/iliad/

Google Search via eMail:
google@capeclear.com
[Subject] search keywords
info: http://www.capeclear.com/google/

More info: http://www.cix.co.uk/~net-services/mrcool/stats.htm
ftp://rtfm.mit.edu/pub/usenet/news.answers/internet-services/access-via-email
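
For what it’s worth, a request to one of these gateways is just an ordinary email, so it can be scripted. A minimal sketch with Python’s smtplib, assuming a local mail server and using the body format from the list above (most of these addresses are long dead):

import smtplib
from email.message import EmailMessage

# Ask a web-to-email gateway to fetch a page; address and body format
# are taken from the list above.
msg = EmailMessage()
msg["From"] = "you@example.com"           # placeholder sender
msg["To"] = "www4mail@wm.ictp.trieste.it"
msg["Subject"] = "request"
msg.set_content("http://www.web.freerk.com/c/")  # the URL goes in the body

with smtplib.SMTP("localhost") as server:  # assumes a local MTA
    server.send_message(msg)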

Information by Fravia on building them can be found at https://www.theoryforce.com/fravia/searchlores/bots.htm – there seems to be a copy up to phase five at http://www.woodmann.com/fravia/botstart.htm

A complete knowbot software suite can be downloaded from https://www.cnri.reston.va.us/home/koe/index.html. This was written by the CNRI [1].

Knowbot programming: System support for mobile agents is another useful overview

A short history (in Czech) can be found here: Vše, co jste chtěli vědět o Internetu… nebojte se zeptat!

Today, with the volume of information on the web being so huge, there may be a market for a resurgence of this kind of software. Google realises that it has fast become impossible to find what you are looking for accurately and has responded with specific search engines (e.g. Scholar, Books, Images, Shopping) for specific tasks. However, for specific fields this is still way too large. A way to handle this would be to have semi-curated search sources added to a knowbot within a very specific field (e.g. energy, psychology, hardware), allowing you to search easily within that expertise. If you can then heuristically detect which field is being searched, you can direct the searcher to that specific knowbot.

This Site Can Tell You If Anyone Else Has Taken Pictures With Your Camera

[…]

This website provides an avenue for investigation, and offers a sliver of hope. It’s a tiny sliver of hope to be sure, but it’s better than no hope at all.

It works like this: You upload a picture taken with the missing camera to stolencamerafinder.com, which then uses the camera’s serial number (saved in the photo’s EXIF data) to crawl the internet in search of other photos taken with that same camera. If it finds a match, you may have a lead on where your camera ended up.
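
As a rough sketch of the mechanism (not the site’s actual code): the serial number lives in the photo’s EXIF block, where 0xA431 is the standard BodySerialNumber tag, though not every camera writes it. Using Pillow, with photo.jpg as a placeholder:

from PIL import Image

exif = Image.open("photo.jpg").getexif()
exif_ifd = exif.get_ifd(0x8769)  # the Exif sub-IFD holds the camera tags
serial = exif_ifd.get(0xA431)    # BodySerialNumber; None if absent
print("Body serial number:", serial)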

From there, you can try to track down and contact the “new owner” via email to request your camera’s return, file a report with the authorities, or devote your life to hunting the thief yourself, John Wick style.

None of these options is likely to result in the return of your Nikon, but it has worked in the past, and maybe it will help you find closure. Maybe just knowing what the hell happened to your camera is the best you can hope for? And the site also provides a database of lost cameras all over the world, so you’ll at least know you’re not alone.

[…]

Source: This Site Can Tell You If Anyone Else Has Taken Pictures With Your Camera

Kumu – network mapping tool

  • Stakeholder mapping

    Explore the complex web of loyalties, interests, influence, and alignment of key players around important issues.

  • Systems mapping

    Understand and engage complex systems more effectively using systems maps and causal loop diagrams.

  • Social network mapping

    Capture the structure of personal networks and reveal key players. Visualize the informal networks within your organization and see how work really gets done.

  • Community asset mapping

    Keep track of the evolving relationships among community members and resources.

  • Concept mapping

    Brainstorm complex ideas and relate individual concepts to the bigger picture. Unfold convoluted series of events using Lombardi diagrams.

Source: Kumu

Debian 11 “bullseye” released

After 2 years, 1 month, and 9 days of development, the Debian project is proud to present its new stable version 11 (code name bullseye), which will be supported for the next 5 years thanks to the combined work of the Debian Security team and the Debian Long Term Support team.

Debian 11 bullseye ships with several desktop applications and environments. Amongst others it now includes the desktop environments:

  • Gnome 3.38,
  • KDE Plasma 5.20,
  • LXDE 11,
  • LXQt 0.16,
  • MATE 1.24,
  • Xfce 4.16.

This release contains over 11,294 new packages for a total count of 59,551 packages, along with a significant reduction of over 9,519 packages which were marked as obsolete and removed. 42,821 packages were updated and 5,434 packages remained unchanged.

bullseye becomes our first release to provide a Linux kernel with support for the exFAT filesystem and defaults to using it for mounting exFAT filesystems. Consequently it is no longer required to use the filesystem-in-userspace implementation provided via the exfat-fuse package. Tools for creating and checking an exFAT filesystem are provided in the exfatprogs package.

Most modern printers are able to use driverless printing and scanning without the need for vendor specific (often non-free) drivers. bullseye brings forward a new package, ipp-usb, which uses the vendor neutral IPP-over-USB protocol supported by many modern printers. This allows a USB device to be treated as a network device. The official SANE driverless backend is provided by sane-escl in libsane1, which uses the eSCL protocol.

[…]

Source: Debian — News — Debian 11 “bullseye” released

How TikTok serves you content you love – simple, actually

A new video investigation by the Wall Street Journal finds the key to TikTok’s success in how the short-video sharing app monitors viewing times.

Why it matters: TikTok is known for the fiendishly effective way that it selects streams of videos tailored to each user’s taste. The algorithm behind this personalization is the company’s prize asset — and, like those that power Google and Facebook, it’s a secret.

How they did it: WSJ created a batch of individualized dummy accounts to throw at TikTok and test how it homed in on each fake persona’s traits.

What they found: TikTok responds most sensitively to a single signal — how long a user lingers over a video. It starts by showing new users very popular items, and sees which catch their eyes.

  • The TikTok algorithm works so well that some people think it’s reading their minds.

Yes, but: The investigation also found that TikTok — like YouTube — can lure users deep into rabbit holes of increasingly extreme content.
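To make the mechanism concrete, here is a toy sketch of the loop the WSJ describes. The scoring rule, topics and numbers are my invention for illustration, not TikTok’s actual system: serve popular items to a cold-start user, then let accumulated watch time steer the ranking.

```python
import random
from collections import defaultdict

class ToyFeed:
    """Serve popular videos first; let watch time steer later picks."""

    def __init__(self, videos):
        # videos: {video_id: {"topic": str, "popularity": float}}
        self.videos = videos
        self.topic_affinity = defaultdict(float)

    def next_video(self) -> str:
        def score(vid):
            v = self.videos[vid]
            # Cold start: popularity dominates. Over time, affinity
            # accumulated from watch times takes over.
            return v["popularity"] + self.topic_affinity[v["topic"]]
        ranked = sorted(self.videos, key=score, reverse=True)
        return random.choice(ranked[:2])  # sample softly, not only the top item

    def record_watch(self, vid, seconds_watched, duration):
        # The single strongest signal: the fraction of the video watched.
        self.topic_affinity[self.videos[vid]["topic"]] += seconds_watched / duration

feed = ToyFeed({
    "v1": {"topic": "dance", "popularity": 0.9},
    "v2": {"topic": "cooking", "popularity": 0.8},
    "v3": {"topic": "cooking", "popularity": 0.1},
})
feed.record_watch("v2", seconds_watched=28, duration=30)  # user lingered
print(feed.next_video())  # cooking items now outrank raw popularity
```

Linger on a few niche videos and the niche quickly outranks overall popularity, which is also exactly how the rabbit-hole effect arises.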

Source: How TikTok sees inside your brain – Axios

Google is starting to tell you how it found Search results

Alphabet’s (GOOGL.O) Google will now show its search engine users more information about why it found the results they are shown, the company said on Thursday.

It said people googling queries will now be able to click into details such as how their result matched certain search terms, in order to better decide if the information is relevant.

Google has been making changes to give users more context about the results its search engine provides. Earlier this year it introduced panels to tell users about the sources of the information they are seeing. It has also started warning users when a topic is rapidly evolving and search results might not be reliable.

Source: Google is starting to tell you how it found Search results | Reuters

Southwest Airlines cancels 500 flights after computer glitch grounds fleet – for 2nd time in 24 hours

Southwest Airlines (LUV.N) said on Tuesday it canceled about 500 flights and delayed hundreds of others after it was forced to temporarily halt operations over a computer issue — the second time in 24 hours it had been forced to stop flights.

The Federal Aviation Administration said it had issued a temporary nationwide groundstop at the request of Southwest Airlines to resolve a computer reservation issue. The groundstop lasted about 45 minutes, and ended at 2:30 p.m. EDT (1830 GMT), it said.

Southwest said its operations were returning to normal. The issue was the result of “intermittent performance issues with our network connectivity.”

Southwest delayed nearly 1,300 flights on Tuesday, or 37% of its flights, according to flight tracker FlightAware.

Southwest Airlines earlier reported a separate issue that required a groundstop Monday evening after its “third-party weather data provider experienced intermittent performance issues … preventing transmission of weather information that is required to safely operate our aircraft.”

[…]

Source: Southwest Airlines cancels 500 flights after computer glitch grounds fleet | Reuters

Crypto Miners Overrun Docker Hub’s Autobuild, so they have to close the free version

This week, Docker announced some changes to Docker Hub Autobuilds — the primary one of interest being that autobuilds would no longer be available to free tier users — and much of the internet let out a collective groan to the tune of “this is why we can’t have nice things!”

So, if you happen to be looking for yet another reason to immediately cringe and discard anyone who comes up to you crowing about the benefits of cryptocurrencies, Docker getting rid of its autobuild feature on Docker Hub can be added to your arsenal.

“As many of you are aware, it has been a difficult period for companies offering free cloud compute,” wrote Shaun Mulligan, principal product manager at Docker in the company’s blog post, citing an article that explores how crypto-mining gangs are running amok on free cloud computing platforms. Mulligan goes on to explain that Docker has “seen a massive growth in the number of bad actors,” noting that it not only costs them money, but also degrades performance for their paying customers.

And so, after seven years of free access to their autobuild feature, wherein even all of you non-paying Docker users could set up continuous integration for your containerized projects, gratis, the end is nigh. Like, really, really nigh, as in next week — June 18.

While Docker offered that they already tried to correct the issue by removing around 10,000 accounts, they say that the miners returned the next week in droves, and so they “made the hard choice to remove Autobuilds.”

[…]

Source: This Week in Programming: Crypto Miners Overrun Docker Hub’s Autobuild – The New Stack

One Fastly customer triggered internet meltdown by changing a setting

A major internet blackout that hit many high-profile websites on Tuesday has been blamed on a software bug.

Fastly, the cloud-computing company responsible for the issues, said the bug had been triggered when one of its customers had changed their settings.

The outage has raised questions about relying on a handful of companies to run the vast infrastructure that underpins the internet.

Fastly apologised and said the problem should have been anticipated.

The outage, which lasted about an hour, hit some popular websites such as Amazon, Reddit, the Guardian and the New York Times.

[…]

But a customer quite legitimately changing their settings had exposed a bug in a software update issued to customers in mid-May, causing “85% of our network to return errors”, it said.

Engineers had worked out the cause of the problem about 40 minutes after websites had gone offline at about 11:00 BST, Fastly said.

“Within 49 minutes, 95% of our network was operating as normal,” it said.

The company has deployed a bug fix across its network and promised a “post mortem of the processes and practices we followed during this incident” and to “figure out why we didn’t detect the bug during our software quality assurance and testing processes”.

Source: One Fastly customer triggered internet meltdown – BBC News

Windows Defender bug fills Windows 10 boot drive with thousands of files

A Windows Defender bug creates thousands of small files that waste gigabytes of storage space on Windows 10 hard drives.

The bug started with Windows Defender antivirus engine 1.1.18100.5 and will cause the C:\ProgramData\Microsoft\Windows Defender\Scans\History\Store folder to be filled up with thousands of files with names that appear to be MD5 hashes.

Windows Defender folder filled with small files

From a system seen by BleepingComputer, the created files range in size from 600 bytes to a little over 1KB.

File properties of one of these files

While the system we looked at only had approximately 1MB of files, other Windows 10 users report that their systems have been filled up with hundreds of thousands of files, which in one case used up 30GB of storage space.

On smaller SSD system drives (C:), this can be a considerable amount of storage space to waste on unnecessary files.
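If you want to check whether a machine is affected, a quick tally of the folder is enough. A minimal sketch (run it from an elevated prompt, since the Defender folders are normally ACL-protected):

```python
from pathlib import Path

store = Path(r"C:\ProgramData\Microsoft\Windows Defender\Scans\History\Store")
files = [f for f in store.iterdir() if f.is_file()]
total_bytes = sum(f.stat().st_size for f in files)
print(f"{len(files):,} files, {total_bytes / 1024**2:.1f} MiB")
```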

According to Deskmodder, who first reported on this issue, the bug has now been fixed in the latest Windows Defender engine, version 1.1.18100.6.

Source: Windows Defender bug fills Windows 10 boot drive with thousands of files

NASA / JPL honours open source devs with a badge on their GitHub profile if their code made it to Mars

[…]

we have worked with JPL to place a new Mars 2020 Helicopter Mission badge on the GitHub profile of every developer who contributed to the specific versions of any open source projects and libraries used by Ingenuity. You can check out the full list of projects like SciPy, Linux, and F Prime (F’) that were used by the JPL team here.

[…]

We are also using this opportunity to introduce a new Achievements section to the GitHub profile. Right now, Achievements include the Mars 2020 Helicopter Mission badge, the Arctic Code Vault badge, and a badge for sponsoring open source work via GitHub Sponsors. Watch this space!

Read the story behind the new badge and how open source contributors helped Ingenuity take flight on The ReadME Project.

Congratulations to the teams at NASA and JPL, and to the thousands of developers who made today’s first Martian flight possible. We’re all still here on Earth, but your code is now on Mars!

Source: Open source goes to Mars 🚀 – The GitHub Blog

As FOSS is hugely powered by recognition, this looks like an awesome step to recognise individual developers as well as projects.

Winamp continues with WACUP

In October 2018, Winamp relaunched a leaked version of the updated code as version 5.8. As a longtime Winamp user, I was excited – I have many MP3s which are not available on streaming services, and I also find that when I search for stuff on Spotify, it gives me the royalty-free Filipino girl band cover version instead of the version I’m looking for.

I’ve been fairly happy with the 5.8 version, but it did drop support for eg adding ID3 tags automatically and a few other things. Not being a huge user of the music library, I don’t know how that went, but I was happy that it kept Milkdrop visualiser support.

Today I came upon the following post on Reddit: “Winamp visualizer ported in webgl, like back in the days. You can import your own songs in it”, and in the comments found a project called WACUP. It turns out that DrO, one of the prolific plug-in writers, who was also contracted to work on Winamp itself, has been using the 5.666 version as the base for a huge slew of updates, and it’s still in development.

So, I’m uninstalling 5.8 and going to have a look at WACUP. I’m looking forward to continuing kicking the Llama’s ass!

Source: WACUP, Winamp Lives On
Source: WACUP, Winamp v5.8 beta & the future of things WACUP & Winamp related

WACUP discord server

Towards real-time photorealistic 3D holography with deep neural networks for every device

The ability to present three-dimensional (3D) scenes with continuous depth sensation has a profound impact on virtual and augmented reality, human–computer interaction, education and training.

[…]

The computationally taxing Fresnel diffraction simulation further places an explicit trade-off between image quality and runtime, making dynamic holography impractical [4]. Here we demonstrate a deep-learning-based CGH pipeline capable of synthesizing a photorealistic colour 3D hologram from a single RGB-depth image in real time. Our convolutional neural network (CNN) is extremely memory efficient (below 620 kilobytes) and runs at 60 hertz for a resolution of 1,920 × 1,080 pixels on a single consumer-grade graphics processing unit. Leveraging low-power on-device artificial intelligence acceleration chips, our CNN also runs interactively on mobile (iPhone 11 Pro at 1.1 hertz) and edge (Google Edge TPU at 2.0 hertz) devices, promising real-time performance in future-generation virtual and augmented-reality mobile headsets.

Source: Towards real-time photorealistic 3D holography with deep neural networks | Nature

What this means is that they can make really nice holograms (3D objects) on your phone at a fraction of the memory cost of other methods (such as lookup-table approaches).
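For a feel of why the footprint can be that small, here is a toy PyTorch sketch of the pipeline’s shape: a fully convolutional network mapping one RGB-D frame to three phase channels. The layer sizes are invented for illustration; the paper’s real architecture, losses and training data are far more involved.

```python
import torch
import torch.nn as nn

class ToyHologramNet(nn.Module):
    """Toy stand-in: RGB-D in (4 channels), colour phase map out (3 channels)."""

    def __init__(self, width: int = 24):
        super().__init__()
        layers, ch = [], 4
        for _ in range(6):  # fully convolutional, so any resolution works
            layers += [nn.Conv2d(ch, width, 3, padding=1), nn.ReLU()]
            ch = width
        layers += [nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # rgbd: (N, 4, H, W) in [0, 1]; output phases in [-pi, pi]
        return torch.pi * self.net(rgbd)

model = ToyHologramNet()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params * 4 / 1024:.0f} KiB of float32 weights")  # ~108 KiB here
phase = model(torch.rand(1, 4, 1080, 1920))  # one full-HD RGB-D frame
```

Because the network is fully convolutional, the same few tens of kilobytes of weights serve any resolution, which is what makes a sub-megabyte, phone-friendly model plausible.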

Same Energy: Visual search engine for pictures

This search engine finds other pictures with the same “energy” as a picture you select on the homepage, upload yourself, or point to by URL.

We believe that image search should be visual, using only a minimum of words. And we believe it should integrate a rich visual understanding, capturing the artistic style and overall mood of an image, not just the objects in it.

We hope Same Energy will help you discover new styles, and perhaps use them as inspiration. Try it with one of these images:

This website is in beta and will be regularly updated in response to your feedback.

[…]

Same Energy’s core search uses deep learning. The most similar published work is CLIP by OpenAI.

The default feeds available on the home page are algorithmically curated: a seed of 5-20 images is selected by hand, then our system builds the feed by scanning millions of images in our index to find good matches for the seed images. You can create feeds in just the same way: save images to create a collection of seed images, then look at the recommended images.
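That seed-and-rank recipe is easy to approximate with CLIP itself, the work they cite. A minimal sketch, assuming OpenAI’s reference package is installed (pip install torch plus the CLIP repo); Same Energy’s actual embedder and index are not public:

```python
import clip
import torch
from PIL import Image

model, preprocess = clip.load("ViT-B/32")

def embed(paths: list[str]) -> torch.Tensor:
    """Embed images and L2-normalise so a dot product is cosine similarity."""
    batch = torch.stack([preprocess(Image.open(p)) for p in paths])
    with torch.no_grad():
        feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

def build_feed(seed_paths: list[str], index_paths: list[str], top_k: int = 50):
    """Average the hand-picked seed embeddings, then rank the index by similarity."""
    seed = embed(seed_paths).mean(dim=0)
    seed = seed / seed.norm()
    index = embed(index_paths)  # at scale you would precompute these
    sims = index @ seed
    return [index_paths[i] for i in sims.argsort(descending=True)[:top_k]]
```

At millions of images you would precompute the index embeddings once and swap the brute-force matrix product for an approximate nearest-neighbour index.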

Source: About | Same Energy

Apple Is Reportedly Cracking Down on App Sideloading on M1 Macs

Earlier this week, 9to5Mac spotted some iOS and macOS beta code that suggested Apple would prevent users from being able to sideload unsupported apps onto the new M1 Macs. Today, 9to5Mac reported that it’s now no longer possible to sideload apps that aren’t available in the Mac App Store even if they’re available on iOS.

You can run iOS and iPadOS apps on your M1 Mac, but only if a developer supports it. Per the report, users had been sideloading apps with tools like iMazing from their iPhones or iPads and could use them on their Apple Silicon computers whether or not they were technically supported. Now, when attempting to sideload an app not available in the Mac App Store on an M1 Mac running the macOS 11.2 beta, users will see an error message that the application “cannot be installed because the developer did not intend for it to run on this platform,” according to a screengrab from 9to5Mac.

[…]

Source: Apple Is Reportedly Cracking Down on App Sideloading on M1 Macs

If it’s fun, you can’t have it. Sieg Heil Apfel!

Firefox to block Backspace key from working as “Back” button

Mozilla developers plan to remove support for using the Backspace key as a Back button inside Firefox. The change is currently active in the Firefox Nightly version and is expected to go live in Firefox 86, scheduled to be released next month, in late February 2021.

The removal of the Backspace key as a navigational element didn’t come out of the blue. It was first proposed back in July 2014, in a bug report opened on Mozilla’s bug tracker. At the time, Mozilla engineers argued that many users who press the Backspace key don’t always mean to navigate to the previous page (the equivalent of pressing the Back button).

“Pressing backspace does different things depending on where the cursor is. If it’s in a text input field, it deletes the character to the left. If it’s not in a text input field, it’s the same as hitting the back button,” said Blair McBride, a senior software engineer for Mozilla at the time.

“Whether to keep this behaviour has been argued For A Very Long Time,” McBride said. “It’s confusing for many people, but we’ve assumed it would break muscle memory for many people.”

Back in 2014, McBride asked other Mozilla engineers to gather data and see exactly how many people press this key before taking a decision. Subsequent data showed that the Backspace key is, by far, the most pressed keyboard shortcut inside the Firefox user interface, with 40 million monthly active users pressing the key and triggering a “Back” navigation.

To put it in perspective, this was well above the 16 million Firefox users pressing the CTRL+F shortcut to search content inside a page and 15 million Firefox users who pressed the page reload shortcuts (F5 and CTRL+R).
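For what it’s worth, the behaviour remains configurable via about:config: setting the browser.backspace_action preference to 0 restores Backspace-as-Back, while 2 (the new default) makes the key do nothing outside text fields.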

Source: Firefox to block Backspace key from working as “Back” button | ZDNet