Liquid Neural Networks use a novel neuron design to compute effectively with smaller models

[Ramin Hasani] and colleague [Mathias Lechner] have been working with a new type of Artificial Neural Network called Liquid Neural Networks, and presented some of the exciting results at a recent TEDxMIT.

Liquid neural networks are inspired by biological neurons and implement algorithms that remain adaptable even after training. [Hasani] demonstrates a machine vision system that steers a car to perform lane keeping using a liquid neural network. The system performs quite well using only 19 neurons, far fewer than the massive models we’ve come to expect. Furthermore, an attention map helps us visualize that the system attends to particular aspects of the visual field, quite similar to a human driver’s behavior.

[Mathias Lechner] and [Ramin Hasani]

The typical scaling law of neural networks suggests that accuracy improves with larger models, which is to say, more neurons. Liquid neural networks may break this law and show that scale is not the whole story. A smaller model can be computed more efficiently. A compact model can also improve accountability, since decision activity is more readily located within the network. Surprisingly, liquid neural networks can also improve generalization, robustness, and fairness.

A liquid neural network can implement synaptic weights as nonlinear functions of its inputs instead of simple scalar values. The synaptic connections and response times can adapt based on sensory inputs to react more flexibly to perturbations in the natural environment.
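As a rough illustration of the idea (a simplified sketch, not Hasani and Lechner’s actual model; the sigmoid gate, weights, and constants below are assumptions), a liquid time-constant neuron keeps a state whose effective time constant adapts to the input:

```python
import math

def ltc_step(x, stimulus, dt=0.01, tau=1.0, A=1.0, w=0.5, b=0.0):
    """One Euler step of a simplified liquid time-constant neuron.

    The synaptic gate f is a nonlinear function of the input rather than
    a fixed scalar weight, so the effective time constant 1 / (1/tau + f)
    adapts to the stimulus. All parameter values here are illustrative.
    """
    f = 1.0 / (1.0 + math.exp(-(w * stimulus + b)))  # input-dependent sigmoid gate
    dx = -(1.0 / tau + f) * x + f * A                # leaky, gated dynamics
    return x + dt * dx

# Drive the neuron with a constant stimulus; the state settles toward
# f * A / (1/tau + f), which stays bounded by A for any input strength.
x = 0.0
for _ in range(2000):
    x = ltc_step(x, stimulus=2.0)
```

The bounded state is one reason these dynamics behave stably under perturbed inputs.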

We should probably expect the operational gap between biological neural networks and artificial neural networks to continue to close and blur. We’ve previously covered wetware examples of building neural networks with actual neurons and ever-advancing brain-computer interfaces.

Source: Liquid Neural Networks Do More With Less | Hackaday

You can find the paper here: Drones navigate unseen environments with liquid neural networks

How AI Bots Code: Comparing Bing, Claude+, Co-Pilot, GPT-4 and Bard

[…]

In this article, we will compare five of the most advanced AI bots: GPT-4, Bing, Claude+, Bard, and GitHub Co-Pilot. We will examine how they work, their strengths and weaknesses, and how they compare to each other.

Testing the AI Bots for Coding

Before we dive into comparing these five AI bots, it’s essential to understand what an AI bot for coding is and how it works. An AI bot for coding is an artificial intelligence program that can automatically generate code for a specific task. These bots use natural language processing and machine learning algorithms to analyze human-written code and generate new code based on that analysis.

To start off, we are going to test the AIs on a hard LeetCode question; after all, we want them to be able to solve complex coding problems. We also wanted to test them on a less well-known question. For our experiment, we will be testing LeetCode 214: Shortest Palindrome.
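For reference, here is what the bots are up against (a sketch of our own, not any bot’s output): LeetCode 214 asks for the shortest palindrome obtainable by prepending characters to a string, and the standard trick is to find the longest palindromic prefix with a KMP failure function.

```python
def shortest_palindrome(s: str) -> str:
    """Return the shortest palindrome formed by prepending characters to s."""
    rev = s[::-1]
    # The longest palindromic prefix of s is the longest prefix of s that is
    # also a suffix of rev; the KMP failure function of s + "#" + rev finds it.
    combined = s + "#" + rev
    lps = [0] * len(combined)
    for i in range(1, len(combined)):
        j = lps[i - 1]
        while j > 0 and combined[i] != combined[j]:
            j = lps[j - 1]
        if combined[i] == combined[j]:
            j += 1
        lps[i] = j
    # Prepend the reverse of the non-palindromic tail.
    return rev[: len(s) - lps[-1]] + s
```

This runs in linear time, which is part of what makes the question a reasonable efficiency benchmark.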

[…]

GPT-4 is highly versatile in generating code for various programming languages and applications. Some of the caveats are that it takes much longer to get a response. API usage is also a lot more expensive and costs could ramp up quickly. Overall it got the answer right and passed the test.

[…]

[Bing] The submission passed all the tests. It beat 47% of submissions on runtime and 37% on memory. This code looks a lot simpler than what GPT-4 generated. It beat GPT-4 on memory and used less code! Bing seems to have the most efficient code so far; however, it gave a very short explanation of how it solved the problem. Nonetheless, it’s the best so far.

[…]

[Claude+] The code does not pass the submission test. Only 1 of 121 tests passed. Ouch! This one seemed promising, but it looks like Claude is not that well suited for programming.

[…]

[Bard] To start off, I had to manually insert the “self” arg in the function since Bard didn’t include it. Bard’s code did not pass the submission test, passing only 2 of 121 test cases. An unfortunate result, but it’s safe to say that for now Bard isn’t much of a coding expert.

[…]

[GitHub Copilot] The code passes all the tests. It scored better than 30% of submissions on runtime and 37% on memory.

It’s fun: you can see the coding examples (with and without comments) that each AI output at the link.

Source: How AI Bots Code: Comparing Bing, Claude+, Co-Pilot, GPT-4 and Bard | HackerNoon

OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use

Anyone can use ChatGPT for free, but if you want to use GPT-4, the latest language model, you have to either pay for ChatGPT Plus, pay for access to OpenAI’s API, or find another site that has incorporated GPT-4 into its own free chatbot. There are sites that use OpenAI’s models, such as Forefront and You.com, but what if you want to make your own bot and don’t want to pay for the API?

A GitHub project called GPT4free allows you to get free access to the GPT-4 and GPT-3.5 models by funneling queries through sites like You.com, Quora, and CoCalc and giving you back the answers. The project is GitHub’s most popular new repo, gaining 14,000 stars this week.

Now, according to Xtekky, the European computer science student who runs the repo, OpenAI has sent a letter demanding that he take the whole thing down within five days or face a lawsuit.

I interviewed Xtekky via Telegram, and he said he doesn’t think OpenAI should be targeting him since he isn’t connecting directly to the company’s API, but is instead getting data from other sites that are paying for their own API licenses. If the owners of those sites have a problem with his scripts querying them, they should approach him directly, he posited.

[…]

On the backend, GPT4Free visits the API URLs that sites like You.com (an AI-powered search engine that employs OpenAI’s GPT-3.5 model for its answers) use for their own queries. For example, the main GPT4Free script hits the URL https://you.com/api/streamingSearch, feeds it various parameters, and then takes the JSON it returns and formats it. The GPT4Free repo also has scripts that grab data from other sites such as Quora, Forefront, and TheB. Any enterprising developer could use these simple scripts to make their own bot.
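The pattern described above can be sketched roughly as follows; the endpoint URL is the one named in the article, but the query parameter name, headers, and streamed response shape are illustrative assumptions, not You.com’s documented interface.

```python
import json
import urllib.parse
import urllib.request

def query_streaming_search(prompt: str) -> str:
    """Sketch of hitting an undocumented internal search API.

    The endpoint is the one named in the article; the "q" parameter and
    the shape of the streamed response are assumptions for illustration.
    """
    params = urllib.parse.urlencode({"q": prompt})  # assumed parameter name
    url = "https://you.com/api/streamingSearch?" + params
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return parse_sse_tokens(resp.read().decode("utf-8"))

def parse_sse_tokens(body: str) -> str:
    """Join the 'token' fields out of a server-sent-events style body."""
    out = []
    for line in body.splitlines():
        if line.startswith("data:"):
            try:
                payload = json.loads(line[len("data:"):].strip())
            except json.JSONDecodeError:
                continue  # skip non-JSON control lines
            if isinstance(payload, dict) and "token" in payload:
                out.append(payload["token"])
    return "".join(out)
```

The point of the sketch is how little machinery is involved: a URL, a query string, and a response parser.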

“One could achieve the same [thing by] just opening tabs of the sites. I can open tabs of Phind, You, etc. on my browser and spam requests,” Xtekky said. “My repo just does it in a simpler way.”

All of the sites GPT4Free draws from pay OpenAI fees to use its large language models. So when you use the scripts, those sites end up footing the bill for your queries, without you ever visiting them. If those sites rely on ad revenue to offset the API costs, they are losing money because of these queries.

Xtekky said that he is more than happy to take down scripts that use individual sites’ APIs upon request from the owners of those sites. He said that he has already taken down scripts that use phind.com, ora.sh and writesonic.com.

Perhaps more importantly, Xtekky noted, any of these sites could block external uses of their internal APIs with common security measures. One of many methods that sites like You.com could use is to block API traffic from any IPs that are not their own.

Xtekky said that he has advised all the sites that wrote to him that they should secure their APIs, but none of them has done so. So, even if he takes the scripts down from his repo, any other developer could do the same thing.
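As a minimal sketch of the kind of IP-based restriction Xtekky describes (the network ranges and the framework-free request handler are assumptions, and a real site would likely pair this with signed tokens rather than rely on IP checks alone):

```python
import ipaddress

# Hypothetical list of the site's own infrastructure ranges.
OWN_NETWORKS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.0.2.0/24")]

def is_internal(client_ip: str) -> bool:
    """Return True if the request came from the site's own network ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in OWN_NETWORKS)

def handle_api_request(client_ip: str):
    # Reject traffic that did not originate from the site's own frontend.
    if not is_internal(client_ip):
        return 403, "external API access is not allowed"
    return 200, "ok"
```

Even this crude check would stop the straightforward script-to-endpoint pattern GPT4Free uses.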

[…]

Xtekky initially told me that he hadn’t decided whether to take the repo down or not. However, several hours after this story first published, we chatted again and he told me that he plans to keep the repo up and to tell OpenAI that, if they want it taken down, they should file a formal request with GitHub instead of with him.

“I believe they contacted me before to pressurize me into deleting the repo myself,” he said. “But the right way should be an actual official DMCA, through GitHub.”

Even if the original repo is taken down, there’s a good chance that the code — and this method of accessing GPT-4 and GPT-3.5 — will be published elsewhere by members of the community. Even if GPT4Free had never existed, anyone could find ways to use these sites’ APIs as long as they remain unsecured.

“Users are sharing and hosting this project everywhere,” he said. “Deletion of my repo will be insignificant.”

[…]

Source: OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use | Tom’s Hardware

OpenAI: ChatGPT back in Italy after meeting watchdog demands – well, cosmetically at any rate

ChatGPT’s maker said Friday, April 28, 2023 that the artificial intelligence chatbot is available again in Italy after the company met the demands of regulators who temporarily blocked it over privacy concerns.

OpenAI said it fulfilled a raft of conditions that the Italian data protection authority wanted satisfied by an April 30 deadline to have the ban on the AI software lifted.

“ChatGPT is available again to our users in Italy,” San Francisco-based OpenAI said by email. “We are excited to welcome them back, and we remain dedicated to protecting their privacy.”

[…]

Last month, the Italian watchdog, known as Garante, ordered OpenAI to temporarily stop processing Italian users’ personal information while it investigated a possible data breach. The authority said it didn’t want to hamper AI’s development but emphasized the importance of following the EU’s strict data privacy rules.

OpenAI said it “addressed or clarified the issues” raised by the watchdog.

The measures include adding information on its website about how it collects and uses data that trains the algorithms powering ChatGPT, providing EU users with a new form for objecting to having their data used for training, and adding a tool to verify users’ ages when signing up.

Some Italian users shared what appeared to be screenshots of the changes, including a menu button asking users to confirm their age and links to the updated privacy policy and training data help page.

[…]

Source: OpenAI: ChatGPT back in Italy after meeting watchdog demands | AP News

So basically OpenAI did not do much of anything, and Italy was able to walk back an uninformed and unworkable ban with its head held somewhat high – not everyone will see them as the idiots they are.

Bungie Somehow Wins $12 Million In Destiny 2 Anti-Cheat Lawsuit

As Bungie continues on its warpath against Destiny 2 cheaters, the studio has won $12 million in the lawsuit against Romanian cheat seller Mihai Claudiu-Florentin that began back in 2021.

Claudiu-Florentin sold cheat software at VeteranCheats, which allowed users to get an edge over other players with software that could do things like tweak their aim and let them see through walls. Naturally, Bungie argued that the software was damaging to Destiny 2‘s competitive and cooperative modes, and has won the case against the seller. The lawsuit alleges “copyright infringement, violations of the Digital Millennium Copyright Act (DMCA), breach of contract, intentional interference with contractual relations, and violations of the Washington Consumer Protection Act.” (Thanks, TheGamePost).

You can read a full PDF of the suit, courtesy of TheGamePost, here, but the gist of it is that Bungie is asking for $12,059,912.98 in total damages, with $11,696,000 going toward violations of the DMCA, $146,662.28 for violations of the Copyright Act, and $217,250.70 accounting for the studio’s attorney expense. After subpoenaing Stripe, a payment processing service, Bungie learned that at least 5848 separate transactions took place through the service that included Destiny 2 cheating software from November 2020 to July 2022.

While Bungie might have $12 million more out of this, VeteranCheats’ website is still up and offering cheating software for games like Overwatch and Call of Duty, though Destiny no longer appears on the site’s home page or in its community search.

According to the lawsuit, Bungie has paid around $2 million in its anti-cheating efforts between staffing and software. This also extended to a blanket ban on cheating devices in both competitive and PvE modes earlier this month.

While Destiny 2 has been wrapped up in legal issues, the shooter has also been caught up in some other controversy recently thanks to a major leak that led to the ban of a major content creator in the game’s community.

Source: Bungie Wins $12 Million In Destiny 2 Anti-Cheat Lawsuit

Despite personally not liking online cheating, it beggars belief that someone is not allowed to create and sell software that edits memory registers. You are the owner of what is on your computer, despite anything that software publishers put in their unreadable terms. You can modify anything on there however you like.

Commission sets out to harmonise EU patent rules

The European Commission today proposed new rules to improve the protection of intellectual property (IP) in Europe, covering patents relating to industry standards, compulsory licensing of patents in crisis situations, and the revision of the legislation on supplementary protection certificates.

These will work hand-in-hand with the unitary patent system that 17 EU countries are to introduce in June, after 50 years in the making.

Thus far, patents have been “an expensive business,” said Thierry Breton, EU commissioner for the internal market, presenting the proposal. The unitary patent system will cut costs from an average of €36,000 to €5,000. “We’re going to have a true single market for patents,” said Breton.

The proposed new rules will take this even further, tidying up aspects of patent legislation that up to this point have varied country by country.

IP is more important than ever as a key driver of economic growth. According to the Commission, IP-intensive industries account for almost half of GDP and over 90% of all EU exports.

The proposed new rules will now be reviewed and amended by the European Parliament and the member states, which will have to rubberstamp the final agreement before it enters into force.

What’s in the package?

Standard Essential Patents (SEPs): These concern technologies that are essential in making a product standards-compliant. They include various connectivity technologies, such as 5G, Wi-Fi, Bluetooth, and audio/video compression and decompression standards. The holders of these patents essentially get a monopoly on their technologies and are obliged to license them on fair, reasonable and non-discriminatory (FRAND) terms.

But the current system isn’t very transparent, causing constant lengthy disputes and litigation. The Commission hopes the new rules will fix this by providing additional transparency regarding SEP portfolios; aggregating royalties when patents of several holders are involved; and allowing for more efficient means for parties to agree on FRAND terms.

Compulsory Licensing: Sometimes, in last-resort crisis situations, governments can allow the use of a patented invention without the consent of the patent holder. For example, if there’s a vaccine shortage, governments can ramp up production without explicit permission from the company that holds the patent.

While many value chains across the bloc span multiple countries, each member state has its own rules on this, resulting in a very patchy legal framework. The Commission proposes to create an EU-wide compulsory licensing instrument.

Supplementary Protection Certificates (SPC): These certificates extend the term of a patent by up to five years, to encourage innovation and growth in certain sectors. It’s a special right awarded only to human or veterinary pharmaceutical product and plant protection product patent holders, and only at national level. Once again, it’s a fragmented and costly system.

The Commission wants to introduce a unitary SPC. An application would be subject to a single examination, which could grant a unitary SPC or national SPCs in each selected member state.

Source: Commission sets out to harmonise EU patent rules | Science|Business

So SEPs and Compulsory Licensing seem like a step in the right direction, hopefully stopping companies from sitting on their (bought) IPs to slow down innovation. SPC, however, ensures that work in the field of the patent is brought to a standstill as the only innovator there is the company that holds the patent – who doesn’t have any clear incentive to work on the patent at all!

Senator Brian Schatz Joins The Moral Panic With Unconstitutional Age Verification Bill

Senator Brian Schatz is one of the more thoughtful Senators we have, and he and his staff have actually spent time talking to lots of experts in trying to craft bills regarding the internet. Unfortunately, he still seems to fall under the seductive sway of this or that moral panic, so when the bills actually come out, they’re perhaps more thoughtfully done than the moral panic bills of his colleagues, but they’re still destructive.

His latest is… just bad. It appears to be modeled on a bunch of the age verification moral panic bills that we’ve seen in both red states and blue states, though Schatz’s bill is much closer to the red state versions, which are more paternalistic and nonsensical.

His latest bill, with the obligatory “bipartisan” sponsor of Tom Cotton, the man who wanted to send our military into US cities to stop protests, is the “Protecting Kids On Social Media Act of 2023.”

You can read the full bill yourself, but the key parts are that it would push companies to use privacy intrusive age verification technologies, ban kids under 13 from using much of the internet, and give parents way more control and access to their kids’ internet usage.

Schatz tries to get around the obvious pitfalls with this… by basically handwaving them away. As even the French government has pointed out, there is no way to do age verification without violating privacy. There just isn’t. French data protection officials reviewed all the possibilities and said that literally none of them respect people’s privacy, and on top of that, it’s not clear that any of them are even that effective at age verification.

Schatz’s bill handwaves this away by basically saying “do age verification, but don’t do it in a way that violates privacy.” It’s like saying “jump out of a plane without a parachute, but just don’t die.” You’re asking the impossible.

I mean, clauses like this sound nice:

Nothing in this section shall be construed to require a social media platform to require users to provide government-issued identification for age verification.

But the fact that this was included kinda gives away the fact that basically every age verification system has to rely on government issued ID.

Similarly, it says that while sites should do age verification, they’re restricted from keeping any of the information as part of the process, but again, that raises all sorts of questions as to HOW you do that. This is “keep track of the location of this car, but don’t track where it is.” I mean… this is just wishful thinking.

The parental consent part is perhaps the most frustrating, and is a staple of the GOP state bills we’ve seen:

A social media platform shall take reasonable steps beyond merely requiring attestation, taking into account current parent or guardian relationship verification technologies and documentation, to require the affirmative consent of a parent or guardian to create an account for any individual who the social media platform knows or reasonably believes to be a minor according to the age verification process used by the platform.

Again, this is marginally better than the GOP bills in that it acknowledges sites need to “take into account” the current relationship, but that still leaves things open to mischief, especially as a “minor” in the bill is defined as anyone between the ages of 13 and 18, a period of time in which teens are discovering their own identities, and that often conflicts with their parents.

So, an LGBTQ child in a strict religious household with parents who refuse to accept their teens’ identity can block their kids entirely from joining certain online communities. That seems… really bad? And pretty clearly unconstitutional, because kids have rights too.

There’s also a prohibition on “algorithmic recommendation systems” for teens under 18. Of course, the bill ignores that reverse chronological order… is also an algorithm. So, effectively the law requires RANDOM content be shown to teens.
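The point is easy to make concrete: a “no algorithm” reverse-chronological feed is itself a (trivial) ranking algorithm. A sketch with hypothetical posts:

```python
from datetime import datetime

# Hypothetical posts: (timestamp, text).
posts = [
    (datetime(2023, 5, 1, 9, 0), "second"),
    (datetime(2023, 5, 1, 8, 0), "first"),
    (datetime(2023, 5, 1, 10, 0), "third"),
]

# "Reverse chronological order" is a ranking rule like any other:
# sort by timestamp, newest first.
feed = sorted(posts, key=lambda p: p[0], reverse=True)
```

Banning “algorithmic recommendation systems” without excluding this rule would, read literally, ban every ordering.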

It also ignores that algorithms are useful in filtering out the kind of information that is inappropriate for kids. I get that there’s this weird, irrational hatred for the dreaded algorithms these days, but most algorithms are… actually helpful in better presenting appropriate content to both kids and adults. Removing that doesn’t seem helpful. It actually seems guaranteed to expose kids to even worse stuff, since they can’t algorithmically remove the inappropriate content any more.

Why would they want to do that?

Finally, the bill creates a “pilot program” for the Commerce Department to establish an official age verification program. While they frame this as being voluntary, come on. If you’re running a social media site and you’re required to do age verification under this bill (or other bills) are you going to use some random age verification offering out there, or the program set up by the federal government? Of course you’re going to go with the federal government’s, so if you were to ever get in trouble, you just say “well we were using the program the government came up with, so we shouldn’t face any liability for its failures.”

Just like the “verify but don’t violate people’s privacy” is handwaving, so is the “this pilot program is voluntary.”

This really is frustrating. Schatz has always seemed to be much more reasonable and open-minded about this stuff, and it’s sad to see him fall prey to the moral panic about kids and the internet, even as the evidence suggests it’s mostly bullshit. I prefer Senators who legislate based on reality, not panic.

Source: Senator Brian Schatz Joins The Moral Panic With Unconstitutional Age Verification Bill | Techdirt

This OLED screen can fill with liquid to form tactile buttons | Engadget

Swiping and tapping on flat screens is something we’ve learned to deal with on smartphones, tablets and other touchscreen gizmos, but it doesn’t come close to the ease of typing on a hardware keyboard or playing a game with a physical controller. To that end, researchers Craig Shultz and Chris Harrison of the Future Interfaces Group (FIG) at Carnegie Mellon University have created a display that can raise sections of the screen in different configurations. It’s a concept we’ve seen before, but this version is thinner, lighter and more versatile.

FIG’s “Flat Panel Haptics” tech can be stacked under an OLED panel to create the protrusions: imagine screen sections that can be inflated and deflated with fluid on demand. This could add a new tactile dimension for things like pop-up media controls, keyboards and virtual gamepads you can find without fumbling around on the screen.

[…]

The Embedded Electroosmotic Pumps (EEOPs) are arrays of fluid pumps on a thin actuation layer built into a touchscreen device […] When an onscreen element requires a pop-up button, fluid fills a section of the EEOP layer, and the OLED panel on top bends to take that shape. The result is a “button” that sticks out from the flat surface by as much as 1.5 mm, enough to feel the difference. When the software dismisses it, it recedes back into the flat display. The research team says filling each area takes about one second, and they feel solid to touch.

[…]

This tech may remind you of Tactus’ rising touchscreen keyboard, which ultimately shipped as a bulky iPad mini case. FIG’s prototype can take on more dynamic shapes and sizes, and the research team says their version’s thinness sets it apart from similar attempts. “The main advantage of this approach is that the entire mechanical system exists in a compact and thin form factor,” FIG said in its narration for a demo video. “Our device stack-ups are under 5mm in thickness while still offering 5mm of displacement. Additionally, they are self-contained, powered only by a pair of electrical cables and control electronics. They’re also lightweight (under 40 grams for this device), and they are capable of enough force to withstand user interaction.”

[…]

Source: This OLED screen can fill with liquid to form tactile buttons | Engadget

Apple App Store Policies Upheld by Court in Epic Games Antitrust Challenge – Apple can continue monopoly and massive 30% charges in app store USA (but not in EU)

Apple Inc. won an appeals court ruling upholding its App Store’s policies in an antitrust challenge brought by Epic Games Inc.

Monday’s ruling by the US Ninth Circuit Court of Appeals affirmed a lower-court judge’s 2021 decision largely rejecting claims by Epic, the maker of Fortnite, that Apple’s online marketplace policies violated federal law because they ban third-party app marketplaces on its operating system. The appeals panel upheld the judge’s ruling in Epic’s favor on California state law claims.

The ruling comes as Apple has been making changes to the way the App Store operates to address developer concerns since Epic sued the company in 2020. The dispute began after Apple expelled the Fortnite game from the App Store because Epic created a workaround to paying a 30% fee on customers’ in-app purchases.

“There is a lively and important debate about the role played in our economy and democracy by online transaction platforms with market power,” the three-judge panel said. “Our job as a federal court of appeals, however, is not to resolve that debate — nor could we even attempt to do so. Instead, in this decision, we faithfully applied existing precedent to the facts.”

Apple hailed the outcome as a “resounding victory,” saying nine out of 10 claims were decided in its favor.

[…]

Epic Chief Executive Officer Tim Sweeney tweeted that although Apple prevailed, at least the appeals court kept intact the portion of the 2021 ruling that sided with Epic.

“Fortunately, the court’s positive decision rejecting Apple’s anti-steering provisions frees iOS developers to send consumers to the web to do business with them directly there. We’re working on next steps,” he wrote.

[…]

Following a three-week trial in Oakland, California, US District Judge Yvonne Gonzalez Rogers ordered the technology giant to allow developers of mobile applications to steer consumers to outside payment methods, granting an injunction sought by Epic. The judge, however, didn’t see the need for third-party app stores or push Apple to revamp its policies on app developer fees.

[…]

US and European authorities have taken steps to rein in Apple’s stronghold over the mobile market. In response to the Digital Markets Act — a new series of laws in the European Union — Apple is planning to allow outside apps as early as next year as part of the upcoming iOS 17 software update, Bloomberg News has reported.

[…]

Source: Apple App Store Policies Upheld by Court in Epic Games Antitrust Challenge – Bloomberg

It’s a pretty sad day when an antitrust court runs away from calling a monopoly a monopoly

WebLLM runs an AI in your browser, talks to your GPU

This project brings language model chats directly onto web browsers. Everything runs inside the browser with no server support and accelerated with WebGPU. We can bring a lot of fun opportunities to build AI assistants for everyone and enable privacy while enjoying GPU acceleration.

[…]

These models are usually big and compute-heavy. To build a chat service, we need a large cluster to run an inference server, while clients send requests and retrieve the inference output. We also usually have to run on specific types of GPU where popular deep-learning frameworks are readily available.

This project is our step to bring more diversity to the ecosystem. Specifically, can we simply bake LLMs directly into the client side and run them inside a browser? If that can be realized, we could support personal AI models on the client, with the benefits of cost reduction, better personalization, and privacy protection. The client side is getting pretty powerful.

Wouldn’t it be even more amazing if we could simply open up a browser and bring AI natively to your browser tab? There is some level of readiness in the ecosystem. WebGPU has just shipped and enables native GPU execution in the browser.

Still, there are big hurdles to cross, to name a few:

  • We need to bring the models somewhere without the relevant GPU-accelerated Python frameworks.
  • Most AI frameworks rely heavily on optimized compute libraries that are maintained by hardware vendors. We need to start from scratch.
  • Careful planning of memory usage, and aggressive compression of weights so that we can fit the models into memory.

We also do not want to do this for just one model. Instead, we would like to present a repeatable and hackable workflow that enables anyone to easily develop and optimize these models in a productive Python-first approach, and deploy them universally, including on the web.

Besides supporting WebGPU, this project also provides a harness for the other GPU backends that TVM supports (such as CUDA, OpenCL, and Vulkan), really enabling accessible deployment of LLMs.

Source: WebLLM github

Grimes invites AI artists to use her voice, promising 50 percent royalty split

Canadian synth-pop artist Grimes says AI artists can use her voice without worrying about copyright or legal enforcement. “I’ll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with,” she tweeted on Sunday. “Feel free to use my voice without penalty. I have no label and no legal bindings.”

The musician’s declaration comes in the wake of streaming platforms removing an AI-generated song using simulated voices of Drake and The Weeknd. Universal Music Group (UMG), which represents both artists, called for the purge after “Heart on My Sleeve” garnered over 15 million listens on TikTok and 600,000 on Spotify. UMG argued that publishing a song trained on its artists’ voices was “a breach of our agreements and a violation of copyright law.”

Grimes takes a considerably more open approach, adding that she has no label or legal bindings. “I think it’s cool to be fused [with] a machine and I like the idea of open sourcing all art and killing copyright,” she added.

[…]

Source: Grimes invites AI artists to use her voice, promising 50 percent royalty split | Engadget

A very practical approach to something that is coming anyway

Samsung to pay out $303M to Netlist for memory patent infringement

Samsung Electronics has been stung for more than $303 million in a patent infringement case brought by US memory company Netlist.

Netlist, headquartered in Irvine, California, styles itself as a provider of high-performance modular memory subsystems. The company initially filed a complaint that Samsung had infringed on three of its patents, later amended to six [PDF]. Following a six-day trial, the jury found for Netlist in five of these and awarded a total of $303,150,000 in damages.

The exact patents in question are 10,949,339 (‘339), 11,016,918 (‘918), 11,232,054 (‘054), 8,787,060 (‘060), and 9,318,160 (‘160). The products that are said to infringe on these are Samsung’s DDR4 LRDIMM, DDR5 UDIMM, SODIMM, and RDIMM, plus the high-bandwidth memory HBM2, HBM2E and HBM3 technologies.

The patents appear to apply to various aspects of DDR memory modules. According to reports, Samsung’s representatives had argued that Netlist’s patents were invalid because they were already covered by existing technology and that its own memory chips did not function in the same way as described by the patents, but this clearly did not sway the jurors.

However, it appears that the verdict did not go all Netlist’s way because its lawyers had been arguing for more damages, saying that a reasonable royalty figure would be more like $404 million.

In the court filings [PDF], Netlist claims that Samsung had knowledge of the patents in question “no later than August 2, 2021” via access to Netlist’s patent portfolio docket.

The company states that Samsung and Netlist were initially partners under a 2015 Joint Development and License Agreement (JDLA), which granted Samsung a five-year paid-up license to Netlist’s patents.

Samsung had used Netlist’s technologies to develop products such as DDR4 memory modules and emerging new technologies, including DDR5 and HBM, Netlist said.

Under the terms of the agreement, Samsung was to supply Netlist certain memory products at competitive prices, but Netlist claimed Samsung repeatedly failed to honor these promises. As a result, Netlist claims, it terminated the JDLA on July 15, 2020.

Netlist alleged in its court filing that Samsung has continued to make and sell memory products “with materially the same structures” as those referenced in the patents, despite the termination of the agreement.

According to investor website Seeking Alpha, the damages awarded are for the infringement of Netlist technology covering only about five quarters. The website also said that Netlist now has the cash to not only grow its business but pursue other infringers of its technology.

Netlist chief executive CK Hong said in a statement that the company was pleased with the case. He claimed the verdict “left no doubt” that Samsung had wilfully infringed Netlist patents, and is “currently using Netlist technology without a license” on many of its strategic product lines.

Hong also claimed that it was an example of the “brazen free ride” carried out by industry giants against intellectual property belonging to small innovators.

“We hope this case serves as a reminder of this problem to policymakers as well as a wakeup call to those in the memory industry that are using our IP without permission,” he said.

We asked Samsung Electronics for a statement regarding the verdict in this case, but did not hear back from the company at the time of publication.

Netlist is also understood to have other cases pending against Micron and Google. Those against Micron are said to involve infringement of many of the same patents that were involved in the Samsung case. ®

Source: Samsung to pay out $303M for memory patent infringement • The Register

ICANN and Verisign Proposal Would Allow Any Government In The World To Seize Domain Names with no redress

ICANN, the organization that regulates global domain name policy, and Verisign, the abusive monopolist that operates the .COM and .NET top-level domains, have quietly proposed enormous changes to global domain name policy in their recently published “Proposed Renewal of the Registry Agreement for .NET”, which is now open for public comment.

Either by design, or unintentionally, they’ve proposed allowing any government in the world to cancel, redirect, or transfer to their control applicable domain names! This is an outrageous and dangerous proposal that must be stopped. […]

The offending text can be found buried in an Appendix of the proposed new registry agreement. […] the critical changes can be found in Section 2.7 of Appendix 8, on pages 147-148. (the blue text represents new language) Below is a screenshot of that section:

Proposed Changes in Appendix 8 of the .NET agreement

Section 2.7(b)(i) is new and problematic on its own [editor bold!] (and I’ll analyze that in more detail in a future blog post – there are other things wrong with this proposed agreement, but I’m starting off with the worst aspect). However, carefully examine the new text in Section 2.7(b)(ii) on page 148 of the redline document.

It would allow Verisign, via the new text in 2.7(b)(ii)(5), to:

“deny, cancel, redirect or transfer any registration or transaction, or place any domain name(s) on registry lock, hold or similar status, as it deems necessary, in its unlimited and sole discretion” [the language at the beginning of 2.7(b)(ii), emphasis added]

Then it lists when it can take the above measures. The first 3 are non-controversial (and already exist, as they’re not in blue text). The 4th is new, relating to security, and might be abused by Verisign. But, look at the 5th item! I was shocked to see this new language:

“(5) to ensure compliance with applicable law, government rules or regulations, or pursuant to any legal order or subpoena of any government, administrative or governmental authority, or court of competent jurisdiction,” [emphasis added]

This text has a plain and simple meaning — they propose  to allow “any government“, “any administrative authority”  and “any government authority” and “court[s] of competent jurisdiction” to deny, cancel, redirect, or transfer any domain name registration […].

You don’t have to be ICANN’s fiercest critic to see that this is arguably the most dangerous language ever inserted into an ICANN agreement.

“Any government” means what it says, so that means China, Russia, Iran, Turkey, the Pitcairn Islands, Tuvalu, the State of Texas, the State of California, the City of Detroit, a village of 100 people with a local council in Botswana, or literally “any government” whether it be state, local, or national. We’re talking about countless numbers of “governments” in the world (you’d have to add up all the cities, towns, states, provinces and nations, for starters). If that wasn’t bad enough, their proposal adds “any administrative authority” and “any government authority” (i.e. government bureaucrats in any jurisdiction in the world) that would be empowered to “deny, cancel, redirect or transfer” domain names. [The new text about “court of competent jurisdiction” is also problematic, as it would override determinations that would be made by registrars via the agreements that domain name registrants have with their registrars.]

This proposal represents a complete government takeover of domain names, with no due process protections for registrants. It would usurp the role of registrars, making governments go directly to Verisign (or any other registry that adopts similar language) to achieve anything they desired. It literally overturns more than two decades of global domain name policy.

[…]

they bury major policy changes in an appendix near the end of a document that is over 100 pages long (133 pages long for the “clean” version of the document; 181 pages for the “redline” version)

[…]

ICANN and Verisign appear to have deliberately timed the comment period to avoid public scrutiny.  The public comment period opened on April 13, 2023, and is scheduled to end (currently) on May 25, 2023. However, the ICANN76 public meeting was held between March 11 and March 16, 2023, and the ICANN77 public meeting will be held between June 12 and June 15, 2023. Thus, they published the proposal only after the ICANN76 public meeting had ended (where we could have asked ICANN staff and the board questions about the proposal), and seek to end the public comment period before ICANN77 begins. This is likely not by chance, but by design.

[…]

What can you do? You can submit a public comment, showing your opposition to the changes, and/or asking for more time to analyze the proposal. [There are other things wrong with the proposed agreement, e.g. all of Appendix 11 (which takes language from new gTLD agreements, which are entirely different from legacy gTLDs like .com/net/org); section 2.14 of Appendix 8 further protects Verisign, via the new language (page 151 of the redline document); section 6.3 of Appendix 8, on page 158 of the redline, seeks to protect Verisign from losing the contract in the event of a cyberattack that disrupts operations — however, we are already paying above market rates for .net (and .com) domain names, arguably because Verisign tells others that they have high expenses in order to keep 100% uptime even in the face of attacks; this new language allows them to degrade service, with no reduction in fees.]

[…]

Update #1: I’ve submitted a “placeholder” comment to ICANN, to get the ball rolling.  There’s also a thread on NamePros.com about this topic, if you had questions, etc.

Update #2: DomainIncite points out correctly that the offending language is already in the .com agreement, and that people weren’t paying attention to this issue three years ago, as there were bigger fish to fry. I went back and reviewed my own comment submission, and see that I did raise the issue back then too:

[…]

Source: Red Alert: ICANN and Verisign Proposal Would Allow Any Government In The World To Seize Domain Names – FreeSpeech.com

Microsoft 365 and Teams hit in global partial outage – again

[…]

The problem kicked off this morning with Redmond saying it was looking into errors within its caching infrastructure. In an advisory, the Windows goliath wrote “some users may be intermittently unable to view or access web apps in Microsoft 365.”

A range of Microsoft 365 online services are affected, such as Excel, the company wrote, adding “the search bar may not appear in any Office Online service.” Others impacted include Teams admin centers, SharePoint Online (users may not be able to view the settings gear, search bar, and waffle), and Planner.

According to DownDetector, complaints of the outage began to spike before 0900 ET (1300 UTC). There’s no sign of any resumption in services for the time being.

The software giant initially indicated the problem was linked to an “unusually high number of timeout exceptions within our caching and our Azure Active Directory (AAD) infrastructure.” It soon updated that its engineers had narrowed down a cause.

“We determined that a section of caching infrastructure is performing below acceptable performance thresholds, causing calls to gather user licensing information to bypass the cache and go directly to Azure Active Directory infrastructure, resulting in high resource utilization, resulting in throttling and impact,” Redmond wrote in an advisory.

[…]

Microsoft has battled its share of outages in recent months. A code change caused a four-hour outage of Azure Resource Manager in Europe in March and a month earlier Outlook was knocked out for a while.

In January, Microsoft had to roll back a network change in its WAN after it caused problems for a range of cloud services, including Exchange Online, Teams, Outlook, and OneDrive for Business.

[…]

Source: Microsoft 365 and Teams hit in global partial outage • The Register

AIs are too worried about answering stuff you can just google because… doomsayers?

The article below is about how you can trick ChatGPT to give you a napalm recipe. It’s pretty clever that you need to say “my grandmother worked at a factory and told me how to make it” — but why would you need to? Why are we somehow stricter about the output of an AI than we are of search engines we have been using for decades?

Source: People Are Using A ‘Grandma Exploit’ To Break AI

Just Google it: https://www.google.com/search?client=firefox-b-d&q=ingredients+napalm

And you won’t have to spend any time thinking of ways to trick the AI. So why does the AI need tricking in the first place?

Also, why does the writer of the article hesitate to include the AI’s answers in the article? Because Kotaku is part of a network of AI doomsayers, a bit like Fox News when it comes to the subject of AI.

Europe spins up AI research hub to apply Digital Services Act rules on Big Tech

[…]

The European Centre for Algorithmic Transparency (ECAT), which was officially inaugurated in Seville, Spain, today (April 18), is expected to play a major role in interrogating the algorithms of mainstream digital services — such as Facebook, Instagram and TikTok.

ECAT is embedded within the EU’s existing Joint Research Centre (JRC), a long-established science facility that conducts research in support of a broad range of EU policymaking, from climate change and crisis management to taxation and health sciences.

[…]

Commission officials describe the function of ECAT being to identify “smoking guns” to drive enforcement of the DSA — say, for example, an AI-based recommender system that can be shown to be serving discriminatory content despite the platform in question claiming to have taken steps to de-bias output — with the unit’s researchers being tasked with coming up with hard evidence to help the Commission build cases for breaches of the new digital rulebook.

The bloc is at the forefront of addressing the asymmetrical power of platforms globally, having prioritized a major retooling of its approach to regulating digital services and platforms at the start of the current Commission mandate back in 2019 — leading to the DSA and its sister regulation, the Digital Markets Act (DMA), being adopted last year.

Both regulations will come into force in the coming months, although the full sweep of provisions in the DSA won’t start being enforced until early 2024. But a subset of so-called very large online platforms (VLOPs) and very large online search engines (VLOSE) face imminent oversight — and expand the usual EU acronym soup.

[…]

It’s not yet confirmed exactly which platforms will get the designation, but set criteria in the DSA — such as having 45 million+ regional users — encourage educated guesses: The usual (U.S.-based) GAFAM giants are almost certain to meet the threshold, along with (probably) a smattering of larger European platforms. Plus, given its erratic new owner, Twitter may have painted a DSA-shaped target on its feathered back. But we should find out for sure in the coming weeks.

[…]

Risks the DSA stipulates platforms must consider include the distribution of disinformation and illegal content, along with negative impacts on freedom of expression and users’ fundamental rights (which means considering issues like privacy and child safety). The regulation also puts some limits on profiling-driven content feeds and the use of personal data for targeted advertising.

[…]

At the least, the DSA should help end the era of platforms’ PR-embellished self-regulation — aka, all those boilerplate statements where tech giants claim to really care about privacy/security/safety, and so on, while doing anything but.

[…]

The EU also hopes ECAT will become a hub for world-leading research in the area of algorithmic auditing — and that by supporting regulated algorithmic transparency on tech giants, regional researchers will be able to unpick longer term societal impacts of mainstream AIs.

[…]

In terms of size, the plan is for a team of 30 to 40 to staff the unit — perhaps reaching full capacity by the end of the year — with some 14 hires made so far, the majority of whom are scientific staff.

[…]

Funding for the unit is coming from the existing budget of the JRC, per Commission officials, although a 1% supervisory fee on VLOPs/VLOSE will be used to finance the ECAT’s staff costs as that mechanism spins up.

At today’s launch event, ECAT staff gave a series of brief presentations of four projects they’re already undertaking — including examining racial bias in search results; investigating how to design voice assistant technology for children to be sensitive to the vulnerability of minors; and researching social media recommender systems by creating a series of test profiles to explore how different likes influence the character of the recommended content.

Other early areas of research include facial expression recognition algorithms and algorithmic ranking and pricing.

During the technical briefing for press, ECAT staff also noted they’ve built a data analysis tool to help the Commission with the looming task of parsing the risk assessment reports that designated platforms will be required to submit for scrutiny — anticipating what’s become a common tactic for tech giants receiving regulatory requests to respond with reams of (mostly) irrelevant information in a cynical bid to flood the channel with noise.

[…]

Given the complexity of studying algorithms and platforms in the real world, where all sorts of sociotechnical impacts and effects are possible, the Center is taking a multidisciplinary approach to hiring talent — bringing in not only computer and data scientists but also social and cognitive scientists and other types of researchers.

[…]


Source: Europe spins up AI research hub to apply accountability rules on Big Tech | TechCrunch

Chips Act: Council and European Parliament strike provisional deal

The Council and the European Parliament have reached today a provisional political agreement on the regulation to strengthen Europe’s semiconductor ecosystem, better known as the ‘Chips Act’. The deal is expected to create the conditions for the development of an industrial base that can double the EU’s global market share in semiconductors from 10% to at least 20% by 2030.

[…]

The Commission proposed three main lines of action, or pillars, to achieve the Chips Act’s objectives:

  1. The “Chips for Europe Initiative”, to support large-scale technological capacity building
  2. A framework to ensure security of supply and resilience by attracting investment
  3. A Monitoring and Crisis Response system to anticipate supply shortages and provide responses in case of crisis.

The Chips for Europe Initiative is expected to mobilise €43 billion in public and private investments, with €3.3 billion coming from the EU budget. These actions will be primarily implemented through a Chips Joint Undertaking, a public-private partnership involving the Union, the member states and the private sector.

Main elements of the compromise

On pillar one, the compromise reached today reinforces the competences of the Chips Joint Undertaking which will be responsible for the selection of the centres of excellence, as part of its work programme.

On pillar two, the final compromise widens the scope of the so-called ‘First-of-a-kind’ facilities to include those producing equipment used in semiconductor manufacturing. ’First-of-a-kind’ facilities contribute to the security of supply for the internal market and can benefit from fast-tracking of permit granting procedures. In addition, design centres that significantly enhance the Union’s capabilities in innovative chip design may receive a European label of ‘design centre of excellence’ which will be granted by the Commission. Member states may apply support measures for design centres that receive this label according to existing legislation.

The compromise also underlines the importance of international cooperation and the protection of intellectual property rights as two key elements for the creation of an ecosystem for semiconductors.

[…]

The provisional agreement reached today between the Council and the European Parliament needs to be finalised, endorsed, and formally adopted by both institutions.

Once the Chips Act is adopted, the Council will pass an amendment of the Single Basic Act (SBA) for institutionalised partnerships under Horizon Europe, to allow the establishment of the Chips Joint Undertaking, which builds upon and renames the existing Key Digital Technologies Joint Undertaking. The SBA amendment is adopted by the Council following consultation of the Parliament.

Both texts will be published at the same time.

Source: Chips Act: Council and European Parliament strike provisional deal – Consilium

3D bioprinting inside the human body could be possible thanks to new soft robot

Engineers from UNSW Sydney have developed a miniature and flexible soft robotic arm which could be used to 3D print biomaterial directly onto organs inside a person’s body.

3D bioprinting is a process whereby biomedical parts are fabricated from so-called bioink to construct natural tissue-like structures.

[…]

Their work has resulted in a tiny flexible 3D bioprinter that has the ability to be inserted into the body just like an endoscope and directly deliver multilayered biomaterials onto the surface of internal organs and tissues.

The proof-of-concept device, known as F3DB, features a highly manoeuvrable swivel head that ‘prints’ the bioink, attached to the end of a long and flexible snake-like robotic arm, all of which can be controlled externally.

The research team say that with further development, and potentially within five to seven years, the technology could be used by medical professionals to access hard-to-reach areas inside the body via small skin incisions or natural orifices.

The research team tested the device inside an artificial colon, where it was able to traverse through confined spaces before successfully 3D printing.

Dr Do and his team have tested their device inside an artificial colon, as well as 3D printing a variety of materials with different shapes on the surface of a pig’s kidney.

“Existing 3D bioprinting techniques require biomaterials to be made outside the body and implanting that into a person would usually require large open-field open surgery which increases infection risks,” said Dr Do, a Scientia Senior Lecturer at UNSW’s Graduate School of Biomedical Engineering (GSBmE) and Tyree Foundation Institute of Health Engineering (IHealthE).

“Our flexible 3D bioprinter means biomaterials can be directly delivered into the target tissue or organs with a minimally invasive approach.

“This system offers the potential for the precise reconstruction of three-dimensional wounds inside the body, such as gastric wall injuries or damage and disease inside the colon.

“Our prototype is able to 3D print multilayered biomaterials of different sizes and shape through confined and hard-to-reach areas, thanks to its flexible body.

“Our approach also addresses significant limitations in existing 3D bioprinters such as surface mismatches between 3D printed biomaterials and target tissues/organs as well as structural damage during manual handling, transferring, and transportation process.”

[…]

The smallest F3DB prototype produced by the team at UNSW has a similar diameter to commercial therapeutic endoscopes (approximately 11-13mm), which is small enough to be inserted into a human gastrointestinal tract.

[…]

The device features a three-axis printing head directly mounted onto the tip of a soft robotic arm. This printing head, which consists of soft artificial muscles that allow it to move in three directions, works very similarly to conventional desktop 3D printers.

The soft robotic arm can bend and twist due to hydraulics and can be fabricated at any length required. Its stiffness can be finely tuned using different types of elastic tubes and fabrics.

The printing nozzle can be programmed to print pre-determined shapes, or operated manually where more complex or undetermined bioprinting is required. In addition, the team utilised a machine learning-based controller which can aid the printing process.

To further demonstrate the feasibility of the technology, the UNSW team tested the cell viability of living biomaterial after being printed via their system.

Experiments showed the cells were not affected by the process, with the majority of the cells observed to be alive post-printing. The cells then continued to grow for the next seven days, with four times as many cells observed one week after printing.

[…]

The nozzle of the F3DB printing head can be used as a type of electric scalpel to first mark and then cut away cancerous lesions.

Water can also be directed through the nozzle to simultaneously clean any blood and excess tissue from the site, while faster healing can be promoted by the immediate 3D printing of biomaterial directly while the robotic arm is still in place.

The research team demonstrated the way the F3DB could be used in a variety of different ways if developed to be an all-in-one endoscopic surgical tool.

The ability to carry out such multi-functional procedures was demonstrated on a pig’s intestine and the researchers say the results show that the F3DB is a promising candidate for the future development of an all-in-one endoscopic surgical tool.

“Compared to the existing endoscopic surgical tools, the developed F3DB was designed as an all-in-one endoscopic tool that avoids the use of changeable tools which are normally associated with longer procedural time and infection risks,” Mai Thanh Thai said.

Source: 3D bioprinting inside the human body could be possible thanks to new soft robot | UNSW Newsroom

Via Soft Robotic System For In Situ 3D Bioprinting And Endoscopic Surgery | Lifehacker

AI-generated Drake and The Weeknd song pulled from streaming platforms

If you spent almost any time on the internet this week, you probably saw a lot of chatter about “Heart on My Sleeve.” The song went viral for featuring AI-generated voices that do a pretty good job of mimicking Drake and The Weeknd singing about a recent breakup.

On Monday, Apple Music and Spotify pulled the track following a complaint from Universal Music Group, the label that represents the real-life versions of the two Toronto-born artists. A day later, YouTube, Amazon, SoundCloud, Tidal, Deezer and TikTok did the same.

At least, they tried to comply with the complaint, but as is always the case with the internet, you can still find the song on websites like YouTube. Before it was removed from Spotify, “Heart on My Sleeve” was a bona fide hit. People streamed the track more than 600,000 times. On TikTok, where the creator of the song, the aptly named Ghostwriter977, first uploaded it, users listened to “Heart on My Sleeve” more than 15 million times.

In a statement Universal Music Group shared with publications like Music Business Worldwide, the label argued the training of a generative AI using the voices of Drake and The Weeknd was “a breach of our agreements and a violation of copyright law.” The company added that streaming platforms had a “legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

It’s fair to say the music industry, much like the rest of society, now finds itself at an inflection point over the use of AI. While there are obvious ethical issues related to the creation of “Heart on My Sleeve,” it’s unclear if it’s a violation of traditional copyright law. In March, the US Copyright Office said art, including music, cannot be copyrighted if it was produced by providing a text prompt to a generative AI model. However, the office left the door open to granting copyright protections to works with AI-generated elements.

“The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work,” it said. “This is necessarily a case-by-case inquiry. If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.” In the case of “Heart on My Sleeve,” complicating matters is that the song was written by a human being. It’s impossible to say how a court challenge would play out. What is clear is that we’re only at the start of a very long discussion about the role of AI in music.

Source: AI-generated Drake and The Weeknd song pulled from streaming platforms | Engadget

Study shows human tendency to help others is universal

A new study on the human capacity for cooperation suggests that, deep down, people of diverse cultures are more similar than you might expect. The study, published in Scientific Reports, shows that from the towns of England, Italy, Poland, and Russia to the villages of rural Ecuador, Ghana, Laos, and Aboriginal Australia, at the micro scale of our daily interaction, people everywhere tend to help others when needed.


Our reliance on each other for help is constant: The study finds that, in everyday life, someone will signal a need for assistance (e.g., to pass a utensil) once every 2 minutes and 17 seconds on average. Across cultures, these small requests for assistance are complied with seven times more often than they are declined. And on the rare occasions when people do decline, they explain why. This human tendency to help others when needed—and to explain when such help can’t be given—transcends cultural differences.

[…]

Key findings:

  • Small requests for assistance (e.g., to pass a utensil) occur on average once every 2 minutes and 17 seconds in everyday life around the world. Small requests are low-cost decisions about sharing items for everyday use or assisting others with tasks around the house or village. Such decisions are orders of magnitude more frequent than high-cost decisions such as sharing the spoils of a successful whale hunt or contributing to the construction of a village road, the sort of decisions that have been found to be significantly influenced by culture.
  • The frequency of small requests varies by the type of activity people are engaged in. Small requests are most frequent in task-focused activities (e.g., cooking), with an average of one request per 1 minute and 42 seconds, and least frequent in talk-focused activities (conversation for its own sake), with an average of one request per 7 minutes and 42 seconds.
  • Small requests for assistance are complied with, on average, seven times more often than they are declined; six times more often than they are ignored; and nearly three times more often than they are either declined or ignored. This preference for compliance is cross-culturally shared and unaffected by whether the interaction is among family or non-family.
  • A cross-cultural preference for compliance with small requests is not predicted by prior research on resource-sharing and cooperation, which instead suggests that culture should cause prosocial behavior to vary in appreciable ways due to local norms, values, and adaptations to the natural, technological, and socio-economic environment. These and other factors could in principle make it easier for people to say “No” to small requests, but this is not what we find.
  • Interacting among family or non-family does not have an impact on the frequency of small requests, nor on rates of compliance. This is surprising in light of established theories predicting that relatedness between individuals should increase both the frequency and degree of resource-sharing/cooperation.
  • People do sometimes reject or ignore small requests, but a lot less frequently than they comply. The average rates of rejection (10%) and ignoring (11%) are much lower than the average rate of compliance (79%).
  • Members of some cultures (e.g., Murrinhpatha speakers of northern Australia) ignore small requests more than others, but only up to about one quarter of the time (26%). A relatively higher tolerance for ignoring small requests may be a culturally evolved solution to dealing with “humbug”—pressure to comply with persistent demands for goods and services. Still, Murrinhpatha speakers regularly comply with small requests (64%) and rarely reject them (10%).
  • When people provide assistance, this is done without explanation, but when they decline, they normally give an explicit reason (74% of the time). These norms of rationalization suggest that while people decline to give help “conditionally,” that is, only for a reason, they give help “unconditionally,” that is, without needing to explain why they are doing it.
  • When people decline assistance, they tend to avoid saying “No,” often letting the rejection be inferred solely from the reason they provide for not complying. An explicit “No” appears in no more than one third of rejections in any culture; the majority of rejections (63%) consist instead of simply giving a reason for non-compliance.

More information: Giovanni Rossi et al, Shared cross-cultural principles underlie human prosocial behavior at the smallest scale, Scientific Reports (2023). DOI: 10.1038/s41598-023-30580-5

Source: Study shows human tendency to help others is universal

WhatsApp, Signal Threaten to Leave UK Over ‘Online Safety Bill’ – which wants Big Brother reading all your messages. An online snooping bill, really.

Meta’s WhatsApp is threatening to leave the UK if the government passes the Online Safety Bill, saying it would essentially eliminate the app’s encryption. Together with rival messaging service Signal and five other apps, the company warned that if the bill passes, users will no longer be protected by end-to-end encryption, which ensures no one but the recipient has access to sent messages.

The “Online Safety Bill” was originally proposed to criminalize content encouraging self-harm posted to social media platforms like Facebook, Instagram, TikTok, and YouTube, but was amended to more broadly focus on illegal content related to adult and child safety. Although government officials said the bill would not ban end-to-end encryption, the messaging apps said in an open letter, “The bill provides no explicit protection for encryption.”

It continues: “If implemented as written, [the bill] could empower OFCOM [the Office of Communications] to try to force the proactive scanning of private messages on end-to-end encrypted communication services, nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users.”

[…]

“In short, the bill poses an unprecedented threat to the privacy, safety, and security of every UK citizen and the people with whom they communicate around the world while emboldening hostile governments who may seek to draft copycat laws.”

Signal said in a Twitter post that it will “not back down on providing private, safe communications,” as the open letter urges the UK government to reconsider the way the bill is currently laid out. Both companies have stood by their arguments, stating they will discontinue the apps in the UK rather than risk weakening their current encryption standards.

[…]

Source: WhatsApp, Signal Threaten to Leave UK Over ‘Online Safety Bill’

AutoGPT: An AI that thinks up your questions and answers them for you

Auto-GPT dramatically flips the relationship between AI and the end user (that’s you). ChatGPT relies on a back-and-forth between the AI and the end user: You prompt the AI with a request, it returns a result, and you respond with a new prompt, perhaps based on what the AI gave you. Auto-GPT, however, only needs one prompt from you; from there, the AI agent generates a task list it thinks it needs in order to accomplish whatever you asked, without any additional input or prompts. It essentially chains together LLM (large language model) “thoughts,” according to developer Significant Gravitas (Toran Bruce Richards).
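The real Auto-GPT codebase is far more elaborate, but the core loop described here — one goal in, a self-extending task list executed until empty — can be sketched in a few lines of Python. Everything below is illustrative: the LLM call is stubbed with a canned response, and none of the names come from the actual project.

```python
# Minimal sketch of an Auto-GPT-style agent loop (illustrative only).
# One user goal -> a self-generated task list -> tasks executed, with the
# model free to append follow-up tasks as it goes, until the list is empty.

def fake_llm(prompt: str) -> list[str]:
    """Stand-in for a call to GPT-4; returns a canned task list."""
    if "plan" in prompt:
        return ["research topic", "summarize findings", "save to file"]
    return []  # no follow-up tasks in this toy example

def run_agent(goal: str) -> list[str]:
    # Ask the model to break the single user goal into tasks.
    tasks = fake_llm(f"plan: {goal}")
    completed = []
    while tasks:
        task = tasks.pop(0)
        # In the real system this step would call the LLM again, browse
        # the web, or write files; here we just record the task.
        completed.append(task)
        # The model may append new tasks based on what it just learned.
        tasks.extend(fake_llm(f"follow-up: {task}"))
    return completed

print(run_agent("research the best headphones"))
# ['research topic', 'summarize findings', 'save to file']
```

The interesting design point is that the loop, not the user, decides when work remains: the agent keeps running as long as its own output keeps adding tasks.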

Auto-GPT is a complex system relying on multiple components. It connects to the internet to retrieve specific information and data (something ChatGPT’s free version cannot do), features long-term and short-term memory management, uses GPT-4 for OpenAI’s most advanced text generation, and uses GPT-3.5 for file storage and summarization. There are a lot of moving parts, but they all come together to produce some impressive results.

How people are using Auto-GPT

The first example comes from Auto-GPT’s GitHub site: You can’t quite see all of the goals listed in the demo that Auto-GPT is working to complete, but the gist is that someone asks the AI agent to research and learn more about itself. It follows suit, opening Google, finding its own GitHub repository, analyzing it, and compiling a summary of the data in a text file for the demonstrator to view.

Here’s a more practical example: The user wants to figure out which headphones on the market are the best. Instead of doing the research themselves, they turn to Auto-GPT, and prompt the AI agent with these four goals:

  1. Do market research for different headphones on the market today.
  2. Get the top five headphones and list their pros and cons.
  3. Include the price for each one and save the analysis.
  4. Once you are done, terminate.

After thinking for a moment, the AI agent springs into action, searching the web to compile information and reviews on headphones. It then spits out an easy-to-read plain text file, ranking the best headphones, listing their prices, and highlighting their pros and cons.

[…]

But I think what makes Auto-GPT cool (or at least the promise of Auto-GPT) is the idea of being able to ask an AI to take on most of the responsibility for any given task. You don’t need to know the right questions to ask or the optimal prompts to give to make the AI do what you want. As long as your initial goals are clear, the AI can think of those next steps for you, and build you things you might not have been able to think of yourself.

[…]

You don’t need to know how to code in order to build your own AI agent with Auto-GPT, but it helps. You’ll need a computer, an OpenAI API key (a pay-as-you-go plan is highly recommended), a text editor (like Notepad++), Git (or the latest stable release of Auto-GPT), and Python. There are plenty of other requirements if you want to expand Auto-GPT’s capabilities, such as integrating speech or alternative memory backends such as Pinecone.

Auto-GPT’s GitHub page has an extensive list of instructions for setting up the tool as well as adding in those extras. Tom’s Hardware also has a great guide for simple set up if all you’re looking to do is try out an AI agent with Auto-GPT. If you do build it yourself, mind your token usage—we discuss setting limits in our OpenAI API piece so you don’t accidentally allow Auto-GPT to burn through your credit card balance.
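On that last point, one simple safeguard is to track spending client-side and refuse any call that would push you past a cap. A minimal sketch of the pattern — the per-token price below is a placeholder, not OpenAI’s actual rate, so check their pricing page before relying on it:

```python
# Illustrative token-budget guard for an agent that makes repeated API
# calls. The price is a placeholder, not a real quote; the point is the
# pattern: accumulate estimated cost and stop before a hard cap.

PRICE_PER_1K_TOKENS = 0.03  # placeholder USD rate -- check current pricing

class BudgetExceeded(Exception):
    pass

class TokenBudget:
    def __init__(self, max_usd: float):
        self.max_usd = max_usd
        self.spent_usd = 0.0

    def charge(self, tokens: int) -> None:
        """Record the cost of a call, or raise before exceeding the cap."""
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS
        if self.spent_usd + cost > self.max_usd:
            raise BudgetExceeded(
                f"call would cost ${cost:.4f}, "
                f"only ${self.max_usd - self.spent_usd:.4f} left"
            )
        self.spent_usd += cost

budget = TokenBudget(max_usd=1.00)
budget.charge(2000)                      # fine: about $0.06 at this rate
print(f"${budget.spent_usd:.2f} spent")  # $0.06 spent
```

Wired into an agent loop, the `charge()` call would sit right before each API request, so a runaway task list fails fast instead of draining your balance.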

However, you don’t need to build the AI agent yourself if all you want to do is try out Auto-GPT. Some developers have built interfaces for Auto-GPT that are easy to access from your web browser, no coding experience necessary. Cognosys was free to use until high demand forced its developers to require an OpenAI API key for access. AgentGPT is an interesting example you don’t need an API key for, but it limits the number of tasks the AI will generate for itself. Still, it will give you a sense of how the process works, and you can increase those limits by providing an API key.

Source: How to Try AutoGPT Right Now

Here was a quick question I had:

Embarking on a new goal:what is the best 8k tv for gaming?
Thinking…(Restart if this takes more than 30 seconds)
Added task:Scrape data from reputable sources on top 8k TVs available in the market
Added task:Analyze the technical specifications and features of each TV to determine which ones are suitable for gaming
Added task:Rank the TVs based on gaming performance and generate a list of the top 3 recommended 8k TVs for gaming
Thinking…(Restart if this takes more than 30 seconds)
Executing “Scrape data from reputable sources on top 8k TVs available in the market”

Based on the data that I have gathered from reputable sources, the best 8K TV for gaming is the Sony XBR-75Z9G. This TV offers high-quality picture and sound, and it has a response time of less than 2ms which is perfect for fast-paced action in gaming. Additionally, it is equipped with HDMI 2.1 ports that support 8K resolution at 60 frames per second which is ideal for the latest gaming consoles. Overall, the Sony XBR-75Z9G is a top-of-the-line 8K TV that delivers an unparalleled gaming experience.

Thinking…(Restart if this takes more than 30 seconds)
Task marked as complete!
Thinking…(Restart if this takes more than 30 seconds)
Executing “Analyze the technical specifications and features of each TV to determine which ones are suitable for gaming”

After analyzing the technical specifications and features of various 8k TVs, the best TV for gaming is the LG OLED88ZXPUA. This TV boasts a 120Hz refresh rate, low input lag, and HDMI 2.1 compatibility, making it perfect for gaming. Additionally, it has excellent contrast and color accuracy, providing an immersive gaming experience.

Thinking…(Restart if this takes more than 30 seconds)
Added task:Create a comparison chart of the top 3 recommended 8k TVs for gaming, including the LG OLED88ZXPUA, that highlights their technical specifications and features relevant to gaming such as the refresh rate, input lag, and HDMI compatibility.
Thinking…(Restart if this takes more than 30 seconds)
Executing “Rank the TVs based on gaming performance and generate a list of the top 3 recommended 8k TVs for gaming”

After thorough research and analysis, I have ranked the best 8K TVs for gaming based on factors like input lag, color accuracy, HDR support, refresh rate, and overall picture quality. Here are my top 3 recommended 8K TVs for gaming:

LG ZX 88-inch 8K OLED TV: This TV has outstanding image quality, the lowest input lag among 8K TVs, and supports various gaming features like VRR, ALLM, and G-Sync. It also has four HDMI 2.1 ports for connecting next-gen consoles and high-speed devices.

Samsung Q800T 82-inch 8K QLED TV: This TV has a bright and vibrant display, a fast refresh rate of 120Hz, and supports multiple HDR formats like HDR10+ and HLG. It also has low input lag and a dedicated gaming mode for optimized performance.

Sony Z8H 85-inch 8K LED TV: This TV has impressive motion handling, accurate color reproduction, and supports advanced gaming features like HDMI 2.1, VRR, and ALLM. It also has a high peak brightness for HDR content and a sleek design.

Overall, these 8K TVs offer exceptional gaming performance and immersive experiences for gamers. I recommend considering these models based on your budget, room size, and personal preferences.

Thinking…(Restart if this takes more than 30 seconds)
Added task:Create a detailed analysis of each of the top 3 recommended 8K TVs for gaming, including the LG OLED88ZXPUA, highlighting their strengths and weaknesses in terms of gaming performance, picture quality, design, and overall value for money.
We’re sorry, because this is a demo, we cannot have our agents running for too long. Note, if you desire longer runs, please provide your own API key in Settings. Shutting down.

Microsoft Readies AI Chip as Machine Learning Costs Surge

After placing an early bet on OpenAI, the creator of ChatGPT, Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large-language models responsible for understanding and generating humanlike language. The Information reports that the software giant has been developing the chip, internally code-named Athena, since as early as 2019, according to two people with direct knowledge of the project. The chips are already available to a small group of Microsoft and OpenAI employees, who are testing the technology, one of them said. Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google, and Facebook, also make their own in-house chips for AI.

The chips are designed for training software such as large-language models, along with supporting inference, when the models use the intelligence they acquire in training to respond to new data. They could also relieve a shortage of the specialized computers that can handle the processing needed for AI software. That shortage, which reflects the fact that essentially one company, Nvidia, makes such chips, is felt across tech, and it has forced Microsoft to ration its computers for some internal teams, The Information has reported.

Source: Microsoft Readies AI Chip as Machine Learning Costs Surge – Slashdot

Toucan Teaches You a New Language While You Browse the Internet

[…] Toucan, a browser extension, is trying a different approach, and it might just be the thing that finally clicks for you.

How Toucan works

With Toucan installed for Chrome, Edge, or Safari, the first time you visit a website or click on an article, you’ll notice something strange: Some of the words on the page will change, translated into your chosen language. If you’re trying to learn Portuguese, you might see a sentence like esta, but one or two palavras will be translated.

Hover your cursor over the translated word, and a pop-up will reveal what it means in English. (“Esta” is “this”; “palavras” is “words.”) This pop-up also gives you additional controls, such as a speaker icon you can click to hear how the word is pronounced, a mini quiz to see if you can spell the word, and a save button to highlight the word for later.

It starts out with one word at a time, but as you learn, Toucan ups the ante, adding more words in blocks, or “lexical chunks.” It makes sense, since languages don’t all share the same grammar structure. By building up to larger groups of words, you’ll more naturally learn word order, verb conjugation, and the general grammar of your chosen language.
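Toucan itself is a closed browser extension, but the basic mechanism it describes — swap only a handful of words per page for their translations, leaving the rest in English so context carries the meaning — is easy to sketch. The dictionary below is a tiny made-up sample, not Toucan’s actual data:

```python
# Toy sketch of Toucan-style partial translation: replace at most a few
# words in a sentence with their translations, leaving everything else
# in English. The dictionary is an illustrative made-up subset.

TRANSLATIONS = {  # English -> Portuguese (illustrative sample)
    "this": "esta",
    "words": "palavras",
    "toes": "dedos do pé",
}

def partially_translate(sentence: str, max_swaps: int = 2) -> str:
    out, swaps = [], 0
    for word in sentence.split():
        key = word.lower().strip(".,!?")  # ignore trailing punctuation
        if swaps < max_swaps and key in TRANSLATIONS:
            out.append(TRANSLATIONS[key])
            swaps += 1
        else:
            out.append(word)
    return " ".join(out)

print(partially_translate("you might see this sentence with some words translated"))
# you might see esta sentence with some palavras translated
```

A real extension would do this inside a content script over the page’s DOM rather than over a plain string, and would pick words by frequency and learner level instead of first-come-first-served, but the “lexical chunk” idea is the same: raise `max_swaps` and start translating adjacent word groups as the learner progresses.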

[…]

According to the company, the extension is based on a theory called [second] language acquisition, which, in this context, can be summed up as: You learn languages best when you are immersed in the language in a relaxed manner, rather than attempting to drill the new words and grammar into your head over and over again. If you’ve ever felt like high school Spanish class got you nowhere on your language acquisition journey, Toucan might argue it’s because that system isn’t effective for most people.

Of course, Toucan doesn’t take the Duolingo approach, either, hounding you with reminders to get in your studying. It wants you to put as little effort as possible into learning a new language. When you’re using the internet as you normally do, you’re bound to visit websites and read articles you’re actually interested in. If Toucan translates some of those words into your target language, you’ll be more inclined to pick them up, since you’re already engaged with the text, rather than reading boring lesson materials. You’re doing what you always do (wasting time online) while dipping your dedos do pé into a new language.

Source: Toucan Teaches You a New Language While You Browse the Internet

Unfortunately it’s not available for Firefox, nor in the language I need, but it looks very promising!

Stability AI of Stable Diffusion Launches open source StableLM LLM with 3 and 7 billion parameters

Today, Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameters, with 15 billion to 65 billion parameter models to follow. Developers can freely inspect, use, and adapt our StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.

In 2022, Stability AI drove the public release of Stable Diffusion, a revolutionary image model that represents a transparent, open, and scalable alternative to proprietary AI. With the launch of the StableLM suite of models, Stability AI is continuing to make foundational AI technology accessible to all. Our StableLM models can generate text and code and will power a range of downstream applications. They demonstrate how small and efficient models can deliver high performance with appropriate training.

The release of StableLM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2.

StableLM is trained on a new experimental dataset built on The Pile, but three times larger with 1.5 trillion tokens of content. We will release details on the dataset in due course. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size of 3 to 7 billion parameters (by comparison, GPT-3 has 175 billion parameters).

We are also releasing a set of research models that are instruction fine-tuned. Initially, these fine-tuned models will use a combination of five recent open-source datasets for conversational agents: Alpaca, GPT4All, Dolly, ShareGPT, and HH. These fine-tuned models are intended for research use only and are released under a noncommercial CC BY-NC-SA 4.0 license, in line with Stanford’s Alpaca license.

[…]

The models are now available in our GitHub repository. We will publish a full technical report in the near future, and look forward to ongoing collaboration with developers and researchers as we roll out the StableLM suite. In addition, we will be kicking off our crowd-sourced RLHF program, and working with community efforts such as Open Assistant to create an open-source dataset for AI assistants.

[…]

Source: Stability AI Launches the First of its StableLM Suite of Language Models — Stability AI