Samsung Electronics has been stung for more than $303 million in a patent infringement case brought by US memory company Netlist.
Netlist, headquartered in Irvine, California, styles itself as a provider of high-performance modular memory subsystems. The company initially filed a complaint that Samsung had infringed on three of its patents, later amended to six [PDF]. Following a six-day trial, the jury found for Netlist in five of these and awarded a total of $303,150,000 in damages.
The exact patents in question are 10,949,339 (‘339), 11,016,918 (‘918), 11,232,054 (‘054), 8,787,060 (‘060), and 9,318,160 (‘160). The products that are said to infringe on these are Samsung’s DDR4 LRDIMM, DDR5 UDIMM, SODIMM, and RDIMM, plus the high-bandwidth memory HBM2, HBM2E and HBM3 technologies.
The patents appear to apply to various aspects of DDR memory modules. According to reports, Samsung’s representatives had argued that Netlist’s patents were invalid because they were already covered by existing technology and that its own memory chips did not function in the same way as described by the patents, but this clearly did not sway the jurors.
However, the verdict did not go entirely Netlist's way: its lawyers had argued for higher damages, saying a reasonable royalty figure would be closer to $404 million.
In the court filings [PDF], Netlist claims that Samsung had knowledge of the patents in question “no later than August 2, 2021” via access to Netlist’s patent portfolio docket.
The company states that Samsung and Netlist were initially partners under a 2015 Joint Development and License Agreement (JDLA), which granted Samsung a five-year paid-up license to Netlist’s patents.
Samsung had used Netlist’s technologies to develop products such as DDR4 memory modules and emerging new technologies, including DDR5 and HBM, Netlist said.
Under the terms of the agreement, Samsung was to supply Netlist with certain memory products at competitive prices, but Netlist claimed Samsung repeatedly failed to honor these promises. As a result, Netlist claims, it terminated the JDLA on July 15, 2020.
Netlist alleged in its court filing that Samsung has continued to make and sell memory products “with materially the same structures” as those referenced in the patents, despite the termination of the agreement.
According to investor website Seeking Alpha, the damages awarded are for the infringement of Netlist technology covering only about five quarters. The website also said that Netlist now has the cash to not only grow its business but pursue other infringers of its technology.
Netlist chief executive CK Hong said in a statement that the company was pleased with the outcome. He claimed the verdict “left no doubt” that Samsung had wilfully infringed Netlist patents, and is “currently using Netlist technology without a license” on many of its strategic product lines.
Hong also claimed that it was an example of the “brazen free ride” carried out by industry giants against intellectual property belonging to small innovators.
“We hope this case serves as a reminder of this problem to policymakers as well as a wakeup call to those in the memory industry that are using our IP without permission,” he said.
We asked Samsung Electronics for a statement regarding the verdict in this case, but did not hear back from the company at the time of publication.
Netlist is also understood to have other cases pending against Micron and Google. Those against Micron are said to involve infringement of many of the same patents that were involved in the Samsung case. ®
ICANN, the organization that regulates global domain name policy, and Verisign, the abusive monopolist that operates the .COM and .NET top-level domains, have quietly proposed enormous changes to global domain name policy in their recently published “Proposed Renewal of the Registry Agreement for .NET”, which is now open for public comment.
Either by design, or unintentionally, they’ve proposed allowing any government in the world to cancel, redirect, or transfer to their control applicable domain names! This is an outrageous and dangerous proposal that must be stopped. […]
The offending text is buried in an Appendix of the proposed new registry agreement. […] the critical changes can be found in Section 2.7 of Appendix 8, on pages 147-148 (the blue text represents new language). Below is a screenshot of that section:
Proposed Changes in Appendix 8 of the .NET agreement
Section 2.7(b)(i) is new and problematic on its own [editor bold!] (and I’ll analyze that in more detail in a future blog post – there are other things wrong with this proposed agreement, but I’m starting off with the worst aspect). However, carefully examine the new text in Section 2.7(b)(ii) on page 148 of the redline document.
It would allow Verisign, via the new text in 2.7(b)(ii)(5), to:
“deny, cancel, redirect or transfer any registration or transaction, or place any domain name(s) on registry lock, hold or similar status, as it deems necessary, in its unlimited and sole discretion” [the language at the beginning of 2.7(b)(ii), emphasis added]
Then it lists when it can take the above measures. The first three are non-controversial (and already exist, as they’re not in blue text). The fourth is new, relating to security, and might be abused by Verisign. But look at the fifth item! I was shocked to see this new language:
“(5) to ensure compliance with applicable law, government rules or regulations, or pursuant to any legal order or subpoena of any government, administrative or governmental authority, or court of competent jurisdiction,” [emphasis added]
This text has a plain and simple meaning — they propose to allow “any government,” “any administrative authority,” “any government authority,” and “court[s] of competent jurisdiction” to deny, cancel, redirect, or transfer any domain name registration […].
You don’t have to be ICANN’s fiercest critic to see that this is arguably the most dangerous language ever inserted into an ICANN agreement.
“Any government” means what it says, so that means China, Russia, Iran, Turkey, the Pitcairn Islands, Tuvalu, the State of Texas, the State of California, the City of Detroit, a village of 100 people with a local council in Botswana, or literally “any government,” whether it be state, local, or national. We’re talking about countless numbers of “governments” in the world (you’d have to add up all the cities, towns, states, provinces and nations, for starters). If that wasn’t bad enough, their proposal adds “any administrative authority” and “any government authority” (i.e. government bureaucrats in any jurisdiction in the world) that would be empowered to “deny, cancel, redirect or transfer” domain names. [The new text about “court of competent jurisdiction” is also problematic, as it would override determinations that would be made by registrars via the agreements that domain name registrants have with their registrars.]
This proposal represents a complete government takeover of domain names, with no due process protections for registrants. It would usurp the role of registrars, making governments go directly to Verisign (or any other registry that adopts similar language) to achieve anything they desired. It literally overturns more than two decades of global domain name policy.
[…]
they bury major policy changes in an appendix near the end of a document that is over 100 pages long (133 pages long for the “clean” version of the document; 181 pages for the “redline” version)
[…]
ICANN and Verisign appear to have deliberately timed the comment period to avoid public scrutiny. The public comment period opened on April 13, 2023, and is scheduled to end (currently) on May 25, 2023. However, the ICANN76 public meeting was held between March 11 and March 16, 2023, and the ICANN77 public meeting will be held between June 12 and June 15, 2023. Thus, they published the proposal only after the ICANN76 public meeting had ended (where we could have asked ICANN staff and the board questions about the proposal), and seek to end the public comment period before ICANN77 begins. This is likely not by chance, but by design.
[…]
What can you do? You can submit a public comment, showing your opposition to the changes, and/or asking for more time to analyze the proposal. [There are other things wrong with the proposed agreement, e.g. all of Appendix 11 (which takes language from new gTLD agreements, which are entirely different from legacy gTLDs like .com/net/org); section 2.14 of Appendix 8 further protects Verisign, via the new language (page 151 of the redline document); section 6.3 of Appendix 8, on page 158 of the redline, seeks to protect Verisign from losing the contract in the event of a cyberattack that disrupts operations — however, we are already paying above market rates for .net (and .com) domain names, arguably because Verisign tells others that they have high expenses in order to keep 100% uptime even in the face of attacks; this new language allows them to degrade service, with no reduction in fees.]
Update #2: DomainIncite points out correctly that the offending language is already in the .com agreement, and that people weren’t paying attention to this issue three years ago, as there were bigger fish to fry. I went back and reviewed my own comment submission, and see that I did raise the issue back then too:
The problem kicked off this morning with Redmond saying it was looking into errors within its caching infrastructure. In an advisory, the Windows goliath wrote “some users may be intermittently unable to view or access web apps in Microsoft 365.”
A range of Microsoft 365 online services are affected, such as Excel, the company wrote, adding “the search bar may not appear in any Office Online service.” Others impacted include Teams admin centers, SharePoint Online (users may not be able to view the settings gear, search bar, and waffle), and Planner.
According to DownDetector, complaints of the outage began to spike before 0900 ET (1300 UTC). There’s no sign of any resumption in services for the time being.
The software giant initially indicated the problem was linked to an “unusually high number of timeout exceptions within our caching and our Azure Active Directory (AAD) infrastructure.” It soon updated the advisory to say its engineers had narrowed down a cause.
“We determined that a section of caching infrastructure is performing below acceptable performance thresholds, causing calls to gather user licensing information to bypass the cache and go directly to Azure Active Directory infrastructure, resulting in high resource utilization, resulting in throttling and impact,” Redmond wrote in an advisory.
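The mechanism Microsoft describes, requests bypassing a slow cache and landing directly on the backing directory service, is a classic load-amplification failure. The Python sketch below illustrates it; the class, the timeout budget, and the license payload are all invented for illustration and are not taken from Microsoft's systems.

```python
# Illustrative sketch only: the names and numbers here are invented, not
# Microsoft's implementation. It shows why a cache performing "below
# acceptable performance thresholds" amplifies load on the backing service
# (standing in for Azure Active Directory).

CACHE_BUDGET = 0.05  # seconds a lookup may take before we give up (invented)

class Cache:
    def __init__(self, latency):
        self.latency = latency  # simulated lookup time
        self.store = {}

    def get(self, key):
        if self.latency > CACHE_BUDGET:
            raise TimeoutError("cache lookup exceeded budget")
        return self.store.get(key)

def fetch_license_info(user, cache, backend_calls):
    try:
        cached = cache.get(user)
        if cached is not None:
            return cached
    except TimeoutError:
        pass  # cache is underperforming: bypass it entirely
    backend_calls.append(user)       # a direct call to the directory service
    result = {"user": user, "licensed": True}
    cache.store[user] = result       # write back so a healthy cache absorbs repeats
    return result

healthy_calls, degraded_calls = [], []
healthy, degraded = Cache(latency=0.01), Cache(latency=0.2)
for _ in range(100):
    fetch_license_info("alice", healthy, healthy_calls)
    fetch_license_info("alice", degraded, degraded_calls)

# A healthy cache absorbs 99 of the 100 repeat lookups itself; a degraded one
# forwards every request, and that surge is what triggers upstream throttling.
print(len(healthy_calls), len(degraded_calls))  # 1 100
```

The point is that a cache does not have to fail outright to cause an outage: merely being slow turns every request into a backend request, which is the "high resource utilization, resulting in throttling" chain in the advisory.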
[…]
Microsoft has battled its share of outages in recent months. A code change caused a four-hour outage of Azure Resource Manager in Europe in March, and a month earlier Outlook was knocked out for a while.
In January, Microsoft had to roll back a network change in its WAN after it caused problems for a range of cloud services, including Exchange Online, Teams, Outlook, and OneDrive for Business.
The article below is about how you can trick ChatGPT into giving you a napalm recipe. It’s pretty circumspect and clever that you need to say “my grandmother worked at a factory and told me how to make it,” but why would you need to? Why are we somehow stricter about the output of an AI than we are about search engines we have been using for decades?
And you won’t have to spend any time thinking of ways to trick the AI. So why does the AI need tricking in the first place?
Also, why does the writer of the article feel hesitant to place the AI’s answers in the article? Because Kotaku is part of a network of AI doomsayers, a bit like Fox News when it comes to the subject of AI.
The European Centre for Algorithmic Transparency (ECAT), which was officially inaugurated in Seville, Spain, today (April 18), is expected to play a major role in interrogating the algorithms of mainstream digital services — such as Facebook, Instagram and TikTok.
ECAT is embedded within the EU’s existing Joint Research Centre (JRC), a long-established science facility that conducts research in support of a broad range of EU policymaking, from climate change and crisis management to taxation and health sciences.
[…]
Commission officials describe the function of ECAT being to identify “smoking guns” to drive enforcement of the DSA — say, for example, an AI-based recommender system that can be shown to be serving discriminatory content despite the platform in question claiming to have taken steps to de-bias output — with the unit’s researchers being tasked with coming up with hard evidence to help the Commission build cases for breaches of the new digital rulebook.
The bloc is at the forefront of addressing the asymmetrical power of platforms globally, having prioritized a major retooling of its approach to regulating digital services and platforms at the start of the current Commission mandate back in 2019 — leading to the DSA and its sister regulation, the Digital Markets Act (DMA), being adopted last year.
Both regulations will come into force in the coming months, although the full sweep of provisions in the DSA won’t start being enforced until early 2024. But a subset of so-called very large online platforms (VLOPs) and very large online search engines (VLOSE) face imminent oversight — and expand the usual EU acronym soup.
[…]
It’s not yet confirmed exactly which platforms will get the designation, but the criteria set in the DSA — such as having 45 million+ regional users — encourage educated guesses: The usual (U.S.-based) GAFAM giants are almost certain to meet the threshold, along with (probably) a smattering of larger European platforms. Plus, given its erratic new owner, Twitter may have painted a DSA-shaped target on its feathered back. But we should find out for sure in the coming weeks.
[…]
Risks the DSA stipulates platforms must consider include the distribution of disinformation and illegal content, along with negative impacts on freedom of expression and users’ fundamental rights (which means considering issues like privacy and child safety). The regulation also puts some limits on profiling-driven content feeds and the use of personal data for targeted advertising.
[…]
At the least, the DSA should help end the era of platforms’ PR-embellished self-regulation — aka, all those boilerplate statements where tech giants claim to really care about privacy/security/safety, and so on, while doing anything but.
[…]
The EU also hopes ECAT will become a hub for world-leading research in the area of algorithmic auditing — and that by supporting regulated algorithmic transparency on tech giants, regional researchers will be able to unpick longer term societal impacts of mainstream AIs.
[…]
In terms of size, the plan is for a team of 30 to 40 to staff the unit — perhaps reaching full capacity by the end of the year — with some 14 hires made so far, the majority of whom are scientific staff.
[…]
Funding for the unit is coming from the existing budget of the JRC, per Commission officials, although a 1% supervisory fee on VLOPs/VLOSE will be used to finance the ECAT’s staff costs as that mechanism spins up.
At today’s launch event, ECAT staff gave a series of brief presentations of four projects they’re already undertaking — including examining racial bias in search results; investigating how to design voice assistant technology for children to be sensitive to the vulnerability of minors; and researching social media recommender systems by creating a series of test profiles to explore how different likes influence the character of the recommended content.
Other early areas of research include facial expression recognition algorithms and algorithmic ranking and pricing.
During the technical briefing for press, ECAT staff also noted they’ve built a data analysis tool to help the Commission with the looming task of parsing the risk assessment reports that designated platforms will be required to submit for scrutiny — anticipating what’s become a common tactic for tech giants receiving regulatory requests to respond with reams of (mostly) irrelevant information in a cynical bid to flood the channel with noise.
[…]
Given the complexity of studying algorithms and platforms in the real world, where all sorts of sociotechnical impacts and effects are possible, the Center is taking a multidisciplinary approach to hiring talent — bringing in not only computer and data scientists but also social and cognitive scientists and other types of researchers.
The Council and the European Parliament have reached today a provisional political agreement on the regulation to strengthen Europe’s semiconductor ecosystem, better known as the ‘Chips Act’. The deal is expected to create the conditions for the development of an industrial base that can double the EU’s global market share in semiconductors from 10% to at least 20% by 2030.
[…]
The Commission proposed three main lines of action, or pillars, to achieve the Chips Act’s objectives:
The “Chips for Europe Initiative”, to support large-scale technological capacity building
A framework to ensure security of supply and resilience by attracting investment
A Monitoring and Crisis Response system to anticipate supply shortages and provide responses in case of crisis.
The Chips for Europe Initiative is expected to mobilise €43 billion in public and private investments, with €3.3 billion coming from the EU budget. These actions will be primarily implemented through a Chips Joint Undertaking, a public-private partnership involving the Union, the member states and the private sector.
Main elements of the compromise
On pillar one, the compromise reached today reinforces the competences of the Chips Joint Undertaking which will be responsible for the selection of the centres of excellence, as part of its work programme.
On pillar two, the final compromise widens the scope of the so-called ‘First-of-a-kind’ facilities to include those producing equipment used in semiconductor manufacturing. ‘First-of-a-kind’ facilities contribute to the security of supply for the internal market and can benefit from fast-tracking of permit granting procedures. In addition, design centres that significantly enhance the Union’s capabilities in innovative chip design may receive a European label of ‘design centre of excellence’, which will be granted by the Commission. Member states may apply support measures for design centres that receive this label according to existing legislation.
The compromise also underlines the importance of international cooperation and the protection of intellectual property rights as two key elements for the creation of an ecosystem for semiconductors.
[…]
The provisional agreement reached today between the Council and the European Parliament needs to be finalised, endorsed, and formally adopted by both institutions.
Once the Chips Act is adopted, the Council will pass an amendment of the Single Basic Act (SBA) for institutionalised partnerships under Horizon Europe, to allow the establishment of the Chips Joint Undertaking, which builds upon and renames the existing Key Digital Technologies Joint Undertaking. The SBA amendment is adopted by the Council following consultation of the Parliament.
Engineers from UNSW Sydney have developed a miniature and flexible soft robotic arm which could be used to 3D print biomaterial directly onto organs inside a person’s body.
3D bioprinting is a process whereby biomedical parts are fabricated from so-called bioink to construct natural tissue-like structures.
[…]
Their work has resulted in a tiny flexible 3D bioprinter that has the ability to be inserted into the body just like an endoscope and directly deliver multilayered biomaterials onto the surface of internal organs and tissues.
The proof-of-concept device, known as F3DB, features a highly manoeuvrable swivel head that ‘prints’ the bioink, attached to the end of a long and flexible snake-like robotic arm, all of which can be controlled externally.
The research team say that with further development, and potentially within five to seven years, the technology could be used by medical professionals to access hard-to-reach areas inside the body via small skin incisions or natural orifices.
The research team tested the device inside an artificial colon, where it was able to traverse confined spaces before successfully 3D printing, and also 3D printed a variety of materials with different shapes on the surface of a pig’s kidney.
“Existing 3D bioprinting techniques require biomaterials to be made outside the body and implanting that into a person would usually require large open-field open surgery which increases infection risks,” said Dr Do, a Scientia Senior Lecturer at UNSW’s Graduate School of Biomedical Engineering (GSBmE) and Tyree Foundation Institute of Health Engineering (IHealthE).
“Our flexible 3D bioprinter means biomaterials can be directly delivered into the target tissue or organs with a minimally invasive approach.
“This system offers the potential for the precise reconstruction of three-dimensional wounds inside the body, such as gastric wall injuries or damage and disease inside the colon.
“Our prototype is able to 3D print multilayered biomaterials of different sizes and shapes through confined and hard-to-reach areas, thanks to its flexible body.
“Our approach also addresses significant limitations in existing 3D bioprinters such as surface mismatches between 3D printed biomaterials and target tissues/organs as well as structural damage during manual handling, transferring, and transportation process.”
[…]
The smallest F3DB prototype produced by the team at UNSW has a similar diameter to commercial therapeutic endoscopes (approximately 11-13mm), which is small enough to be inserted into a human gastrointestinal tract.
[…]
The device features a three-axis printing head directly mounted onto the tip of a soft robotic arm. This printing head, which consists of soft artificial muscles that allow it to move in three directions, works very similarly to conventional desktop 3D printers.
The soft robotic arm can bend and twist due to hydraulics and can be fabricated at any length required. Its stiffness can be finely tuned using different types of elastic tubes and fabrics.
The printing nozzle can be programmed to print pre-determined shapes, or operated manually where more complex or undetermined bioprinting is required. In addition, the team utilised a machine learning-based controller which can aid the printing process.
To further demonstrate the feasibility of the technology, the UNSW team tested the cell viability of living biomaterial after being printed via their system.
Experiments showed the cells were not affected by the process, with the majority of the cells observed to be alive post-printing. The cells then continued to grow for the next seven days, with four times as many cells observed one week after printing.
[…]
The nozzle of the F3DB printing head can be used as a type of electric scalpel to first mark and then cut away cancerous lesions.
Water can also be directed through the nozzle to simultaneously clean any blood and excess tissue from the site, while faster healing can be promoted by the immediate 3D printing of biomaterial directly while the robotic arm is still in place.
The research team demonstrated how the F3DB could be used in a variety of ways if developed into an all-in-one endoscopic surgical tool, carrying out such multi-functional procedures on a pig’s intestine; they say the results show the F3DB is a promising candidate for that future development.
“Compared to the existing endoscopic surgical tools, the developed F3DB was designed as an all-in-one endoscopic tool that avoids the use of changeable tools which are normally associated with longer procedural time and infection risks,” Mai Thanh Thai said.
If you spent almost any time on the internet this week, you probably saw a lot of chatter about “Heart on My Sleeve.” The song went viral for featuring AI-generated voices that do a pretty good job of mimicking Drake and The Weeknd singing about a recent breakup.
On Monday, Apple Music and Spotify pulled the track following a complaint from Universal Music Group, the label that represents the real-life versions of the two Toronto-born artists. A day later, YouTube, Amazon, SoundCloud, Tidal, Deezer and TikTok did the same.
At least, they tried to comply with the complaint, but as is always the case with the internet, you can still find the song on websites like YouTube. Before it was removed from Spotify, “Heart on My Sleeve” was a bona fide hit. People streamed the track more than 600,000 times. On TikTok, where the creator of the song, the aptly named Ghostwriter977, first uploaded it, users listened to “Heart on My Sleeve” more than 15 million times.
In a statement Universal Music Group shared with publications like Music Business Worldwide, the label argued the training of a generative AI using the voices of Drake and The Weeknd was “a breach of our agreements and a violation of copyright law.” The company added that streaming platforms had a “legal and ethical responsibility to prevent the use of their services in ways that harm artists.”
It’s fair to say the music industry, much like the rest of society, now finds itself at an inflection point over the use of AI. While there are obvious ethical issues related to the creation of “Heart on My Sleeve,” it’s unclear if it’s a violation of traditional copyright law. In March, the US Copyright Office said art, including music, cannot be copyrighted if it was produced by providing a text prompt to a generative AI model. However, the office left the door open to granting copyright protections to works with AI-generated elements.
“The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work,” it said. “This is necessarily a case-by-case inquiry. If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.” In the case of “Heart on My Sleeve,” complicating matters is that the song was written by a human being. It’s impossible to say how a court challenge would play out. What is clear is that we’re only at the start of a very long discussion about the role of AI in music.
A new study on the human capacity for cooperation suggests that, deep down, people of diverse cultures are more similar than you might expect. The study, published in Scientific Reports, shows that from the towns of England, Italy, Poland, and Russia to the villages of rural Ecuador, Ghana, Laos, and Aboriginal Australia, at the micro scale of our daily interaction, people everywhere tend to help others when needed.
Our reliance on each other for help is constant: The study finds that, in everyday life, someone will signal a need for assistance (e.g., to pass a utensil) once every 2 minutes and 17 seconds on average. Across cultures, these small requests for assistance are complied with seven times more often than they are declined. And on the rare occasions when people do decline, they explain why. This human tendency to help others when needed—and to explain when such help can’t be given—transcends other cultural differences.
[…]
Key findings:
Small requests for assistance (e.g., to pass a utensil) occur on average once every 2 minutes and 17 seconds in everyday life around the world. Small requests are low-cost decisions about sharing items for everyday use or assisting others with tasks around the house or village. Such decisions are many orders of magnitude more frequent than high-cost decisions such as sharing the spoils of a successful whale hunt or contributing to the construction of a village road, the sort of decisions that have been found to be significantly influenced by culture.
The frequency of small requests varies by the type of activity people are engaged in. Small requests are most frequent in task-focused activities (e.g., cooking), with an average of one request per 1 minute and 42 seconds, and least frequent in talk-focused activities (conversation for its own sake), with an average of one request per 7 minutes and 42 seconds.
Small requests for assistance are complied with, on average, seven times more often than they are declined; six times more often than they are ignored; and nearly three times more often than they are either declined or ignored. This preference for compliance is cross-culturally shared and unaffected by whether the interaction is among family or non-family.
A cross-cultural preference for compliance with small requests is not predicted by prior research on resource-sharing and cooperation, which instead suggests that culture should cause prosocial behavior to vary in appreciable ways due to local norms, values, and adaptations to the natural, technological, and socio-economic environment. These and other factors could in principle make it easier for people to say “No” to small requests, but this is not what we find.
Interacting among family or non-family does not have an impact on the frequency of small requests, nor on rates of compliance. This is surprising in light of established theories predicting that relatedness between individuals should increase both the frequency and degree of resource-sharing/cooperation.
People do sometimes reject or ignore small requests, but a lot less frequently than they comply. The average rates of rejection (10%) and ignoring (11%) are much lower than the average rate of compliance (79%).
Members of some cultures (e.g., Murrinhpatha speakers of northern Australia) ignore small requests more than others, but only up to about one quarter of the time (26%). A relatively higher tolerance for ignoring small requests may be a culturally evolved solution to dealing with “humbug”—pressure to comply with persistent demands for goods and services. Still, Murrinhpatha speakers regularly comply with small requests (64%) and rarely reject them (10%).
When people provide assistance, they do so without explanation, but when they decline, they normally give an explicit reason (74% of the time). These norms of rationalization suggest that while people decline to give help “conditionally,” that is, only for a reason, they give help “unconditionally,” that is, without needing to explain why they are doing it.
When people decline assistance, they tend to avoid saying “No,” often letting the rejection be inferred solely from the reason they provide for not complying. An explicit “No” appears in no more than one third of rejections in any culture; the majority of rejections (63%) instead consist of simply giving a reason for non-compliance.
More information: Giovanni Rossi et al, Shared cross-cultural principles underlie human prosocial behavior at the smallest scale, Scientific Reports (2023). DOI: 10.1038/s41598-023-30580-5
Meta’s WhatsApp is threatening to leave the UK if the government passes the Online Safety Bill, saying the legislation would essentially eliminate its encryption. Alongside its rival Signal and five other apps, the company said that, if the bill passes, users will no longer be protected by end-to-end encryption, which ensures that no one but the recipient has access to sent messages.
The “Online Safety Bill” was originally proposed to criminalize content encouraging self-harm posted to social media platforms like Facebook, Instagram, TikTok, and YouTube, but was amended to more broadly focus on illegal content related to adult and child safety. Although government officials said the bill would not ban end-to-end encryption, the messaging apps said in an open letter, “The bill provides no explicit protection for encryption.”
It continues, “If implemented as written, [the bill] could empower OFCOM [the Office of Communications] to try to force the proactive scanning of private messages on end-to-end encrypted communication services, nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users.”
[…]
“In short, the bill poses an unprecedented threat to the privacy, safety, and security of every UK citizen and the people with whom they communicate around the world while emboldening hostile governments who may seek to draft copycat laws.”
Signal said in a Twitter post that it will “not back down on providing private, safe communications,” as the open letter urges the UK government to reconsider the way the bill is currently laid out. Both companies have stood by their arguments, stating they will discontinue the apps in the UK rather than risk weakening their current encryption standards.
Auto-GPT dramatically flips the relationship between AI and the end user (that’s you). ChatGPT relies on a back-and-forth between the AI and the end user: You prompt the AI with a request, it returns a result, and you respond with a new prompt, perhaps based on what the AI gave you. Auto-GPT, however, only needs one prompt from you; from there, the AI agent will then generate a task list it thinks it will need to accomplish whatever you asked it to, without needing any additional input or prompts. It essentially chains together LLM (large language model) “thoughts,” according to developer Significant Gravitas (Toran Bruce Richards).
Auto-GPT is a complex system relying on multiple components. It connects to the internet to retrieve specific information and data (something ChatGPT’s free version cannot do), features long-term and short-term memory management, and uses GPT-4, OpenAI’s most advanced model, for text generation and GPT-3.5 for file storage and summarization. There are many moving parts, but they all come together to produce some impressive results.
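The “chained thoughts” loop can be sketched in a few lines of Python. This is a toy illustration of the pattern, not Auto-GPT’s actual code: the `llm()` function below is a hard-coded stand-in for a real model call, and the plan/execute split is simplified to its bare bones.

```python
# Toy sketch of the Auto-GPT idea: one user goal, then the "LLM" plans
# its own tasks and the loop executes them without further prompts.

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., the OpenAI API)."""
    if prompt.startswith("PLAN:"):
        # A real agent would ask the model to break the goal into tasks.
        return "research topic\nsummarize findings\nsave summary to file"
    return f"result of '{prompt}'"

def run_agent(goal: str) -> list[str]:
    # Step 1: a single planning "thought" turns the goal into a task list.
    tasks = llm(f"PLAN: {goal}").splitlines()
    # Step 2: each task becomes its own LLM "thought", chained together
    # with a short-term memory of earlier results.
    memory: list[str] = []
    for task in tasks:
        output = llm(f"{task} (context: {memory})")
        memory.append(output)
    return memory

outputs = run_agent("learn about Auto-GPT")
```

The key difference from a ChatGPT session is that the loop, not the user, supplies every prompt after the first one.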
How people are using Auto-GPT
The first example comes from Auto-GPT’s GitHub site: You can’t quite see all of the goals in the demonstrated list that Auto-GPT is working to complete, but the gist is that someone asks the AI agent to research and learn more about itself. It follows suit, opening Google, finding its own GitHub repository, analyzing it, and compiling a summary of the data in a text file for the demonstrator to view.
Here’s a more practical example: The user wants to figure out which headphones on the market are the best. Instead of doing the research themselves, they turn to Auto-GPT, and prompt the AI agent with these four goals:
Do market research for different headphones on the market today.
Get the top five headphones and list their pros and cons.
Include the price for each one and save the analysis.
Once you are done, terminate.
After thinking for a moment, the AI agent springs into action, searching the web to compile information and reviews on headphones. It then spits out an easy-to-read plain text file, ranking the best headphones, listing their prices, and highlighting their pros and cons.
[…]
But I think what makes Auto-GPT cool (or at least the promise of Auto-GPT) is the idea of being able to ask an AI to take on most of the responsibility for any given task. You don’t need to know the right questions to ask or the optimal prompts to give to make the AI do what you want. As long as your initial goals are clear, the AI can think of those next steps for you, and build you things you might not have been able to think of yourself.
[…]
You don’t need to know how to code in order to build your own AI agent with Auto-GPT, but it helps. You’ll need a computer, an OpenAI API key (a pay-as-you-go plan is highly recommended), a text editor (like Notepad++), Git (or the latest stable release of Auto-GPT), and Python. There are plenty of other requirements if you want to expand Auto-GPT’s capabilities, such as integrating speech or alternative memory backends such as Pinecone.
Auto-GPT’s GitHub page has an extensive list of instructions for setting up the tool as well as adding in those extras. Tom’s Hardware also has a great guide for a simple setup if all you’re looking to do is try out an AI agent with Auto-GPT. If you do build it yourself, mind your token usage: we discuss setting limits in our OpenAI API piece so you don’t accidentally allow Auto-GPT to burn through your credit card balance.
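One cheap way to guard against runaway spending is to wrap every API call in a budget check. This is a hedged sketch of the general idea, not Auto-GPT’s built-in mechanism: the numbers are illustrative, real token counts would come from the API’s reported usage, and OpenAI accounts also support hard usage caps in the billing dashboard.

```python
# Minimal token-budget guard: refuse further "LLM calls" once the agent
# has spent its allowance, so an unattended loop can't overspend.

class TokenBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        """Record spend; raise before the budget is exceeded."""
        if self.used + tokens > self.max_tokens:
            raise RuntimeError("token budget exhausted -- stopping agent")
        self.used += tokens

budget = TokenBudget(max_tokens=1000)
budget.charge(400)  # e.g., tokens reported by one completion call
budget.charge(500)
# budget.charge(200) would now raise, halting the loop before overspend
```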
However, you don’t need to build the AI agent yourself if all you want to do is try out Auto-GPT. Some developers have built interfaces for Auto-GPT that are easy to access from your web browser, no coding experience necessary. Cognosys was free to use until high demand forced its developers to require an OpenAI API key for access. AgentGPT is an interesting example you don’t need an API key for, but it limits the number of tasks the AI will generate for itself. Still, it will give you a sense of how the process works, and you can increase those limits by providing an API key.
Embarking on a new goal:what is the best 8k tv for gaming?
Thinking…(Restart if this takes more than 30 seconds)
Added task:Scrape data from reputable sources on top 8k TVs available in the market
Added task:Analyze the technical specifications and features of each TV to determine which ones are suitable for gaming
Added task:Rank the TVs based on gaming performance and generate a list of the top 3 recommended 8k TVs for gaming
Thinking…(Restart if this takes more than 30 seconds)
Executing “Scrape data from reputable sources on top 8k TVs available in the market”
Based on the data that I have gathered from reputable sources, the best 8K TV for gaming is the Sony XBR-75Z9G. This TV offers high-quality picture and sound, and it has a response time of less than 2ms which is perfect for fast-paced action in gaming. Additionally, it is equipped with HDMI 2.1 ports that support 8K resolution at 60 frames per second which is ideal for the latest gaming consoles. Overall, the Sony XBR-75Z9G is a top-of-the-line 8K TV that delivers an unparalleled gaming experience.
Thinking…(Restart if this takes more than 30 seconds)
Task marked as complete!
Thinking…(Restart if this takes more than 30 seconds)
Executing “Analyze the technical specifications and features of each TV to determine which ones are suitable for gaming”
After analyzing the technical specifications and features of various 8k TVs, the best TV for gaming is the LG OLED88ZXPUA. This TV boasts a 120Hz refresh rate, low input lag, and HDMI 2.1 compatibility, making it perfect for gaming. Additionally, it has excellent contrast and color accuracy, providing an immersive gaming experience.
Thinking…(Restart if this takes more than 30 seconds)
Added task:Create a comparison chart of the top 3 recommended 8k TVs for gaming, including the LG OLED88ZXPUA, that highlights their technical specifications and features relevant to gaming such as the refresh rate, input lag, and HDMI compatibility.
Thinking…(Restart if this takes more than 30 seconds)
Executing “Rank the TVs based on gaming performance and generate a list of the top 3 recommended 8k TVs for gaming”
After thorough research and analysis, I have ranked the best 8K TVs for gaming based on factors like input lag, color accuracy, HDR support, refresh rate, and overall picture quality. Here are my top 3 recommended 8K TVs for gaming:
LG ZX 88-inch 8K OLED TV: This TV has outstanding image quality, the lowest input lag among 8K TVs, and supports various gaming features like VRR, ALLM, and G-Sync. It also has four HDMI 2.1 ports for connecting next-gen consoles and high-speed devices.
Samsung Q800T 82-inch 8K QLED TV: This TV has a bright and vibrant display, a fast refresh rate of 120Hz, and supports multiple HDR formats like HDR10+ and HLG. It also has low input lag and a dedicated gaming mode for optimized performance.
Sony Z8H 85-inch 8K LED TV: This TV has impressive motion handling, accurate color reproduction, and supports advanced gaming features like HDMI 2.1, VRR, and ALLM. It also has a high peak brightness for HDR content and a sleek design.
Overall, these 8K TVs offer exceptional gaming performance and immersive experiences for gamers. I recommend considering these models based on your budget, room size, and personal preferences.
Thinking…(Restart if this takes more than 30 seconds)
Added task:Create a detailed analysis of each of the top 3 recommended 8K TVs for gaming, including the LG OLED88ZXPUA, highlighting their strengths and weaknesses in terms of gaming performance, picture quality, design, and overall value for money.
We’re sorry, because this is a demo, we cannot have our agents running for too long. Note, if you desire longer runs, please provide your own API key in Settings. Shutting down.
After placing an early bet on OpenAI, the creator of ChatGPT, Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large-language models responsible for understanding and generating humanlike language. The Information: The software giant has been developing the chip, internally code-named Athena, since as early as 2019, according to two people with direct knowledge of the project. The chips are already available to a small group of Microsoft and OpenAI employees, who are testing the technology, one of them said. Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google and Facebook, also make their own in-house chips for AI. The chips — which are designed for training software such as large-language models, along with supporting inference, when the models use the intelligence they acquire in training to respond to new data — could also relieve a shortage of the specialized computers that can handle the processing needed for AI software. That shortage, reflecting the fact that primarily just one company, Nvidia, makes such chips, is felt across tech. It has forced Microsoft to ration its computers for some internal teams, The Information has reported.
[…] Toucan, a browser extension, is trying a different approach, and it might just be the thing that finally clicks for you.
How Toucan works
With Toucan installed in Chrome, Edge, or Safari, the first time you visit a website or click on an article, you’ll notice something strange: Some of the words on the page will have changed, translated into your chosen language. If you’re trying to learn Portuguese, you might see a sentence like esta, but one or two palavras will be translated.
Hover your cursor over the translated word, and a pop-up will reveal what it means in English. (“Esta” is “this”; “palavras” is “words.”) This pop-up gives you additional interesting controls, such as a speaker icon you can click to hear how the word is pronounced, a mini quiz to see if you can spell the word, and a save button to highlight the word for later.
It starts out with one word at a time, but as you learn, Toucan ups the ante, adding more words in blocks, or “lexical chunks.” It makes sense, since languages don’t all share the same grammar structure. By building up to larger groups of words, you’ll more naturally learn word order, verb conjugation, and the general grammar of your chosen language.
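The word-swapping behavior is easy to picture with a toy sketch (my own illustration, not Toucan’s code): translate at most a few glossary words per sentence, and raise the chunk size as the learner progresses.

```python
# Toucan-style partial translation: swap only `chunk_size` known words
# per sentence, leaving the rest in the reader's native language.

GLOSSARY = {"this": "esta", "words": "palavras", "is": "é"}

def toucanize(sentence: str, chunk_size: int = 1) -> str:
    out = []
    swapped = 0
    for word in sentence.split():
        key = word.lower().strip(".,")
        if key in GLOSSARY and swapped < chunk_size:
            out.append(GLOSSARY[key])  # show the Portuguese form in place
            swapped += 1
        else:
            out.append(word)
    return " ".join(out)
```

With `chunk_size=1`, `toucanize("this sentence has words")` translates only the first known word; raising the limit pulls in more of the sentence, mirroring how the extension ramps up.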
[…]
According to the company, the extension is based on a theory called [second] language acquisition, which, in this context, can be summed up as: You learn languages best when you are immersed in a language in a relaxed manner, rather than attempting to drill the new words and grammar into your head over and over again. If you’ve ever felt like high school Spanish class got you nowhere on your language acquisition journey, Toucan might argue it’s because that system isn’t effective for most people.
Of course, Toucan doesn’t take the Duolingo approach, either, hounding you with reminders to get in your studying. It wants you to put as little effort as possible into learning a new language. When you’re using the internet as you normally do, you’re bound to visit websites and read articles you’re actually interested in. If Toucan translates some of those words into your target language, you’ll be more inclined to pick them up, since you’re already engaged with the text, rather than reading boring lesson materials. You’re doing what you always do (wasting time online) while dipping your dedos do pé into a new language.
Today, Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameters, with 15 billion to 65 billion parameter models to follow. Developers can freely inspect, use, and adapt our StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.
In 2022, Stability AI drove the public release of Stable Diffusion, a revolutionary image model that represents a transparent, open, and scalable alternative to proprietary AI. With the launch of the StableLM suite of models, Stability AI is continuing to make foundational AI technology accessible to all. Our StableLM models can generate text and code and will power a range of downstream applications. They demonstrate how small and efficient models can deliver high performance with appropriate training.
The release of StableLM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2.
StableLM is trained on a new experimental dataset built on The Pile, but three times larger with 1.5 trillion tokens of content. We will release details on the dataset in due course. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size of 3 to 7 billion parameters (by comparison, GPT-3 has 175 billion parameters).
We are also releasing a set of research models that are instruction fine-tuned. Initially, these fine-tuned models will use a combination of five recent open-source datasets for conversational agents: Alpaca, GPT4All, Dolly, ShareGPT, and HH. These fine-tuned models are intended for research use only and are released under a noncommercial CC BY-NC-SA 4.0 license, in-line with Stanford’s Alpaca license.
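The announcement describes the tuned models as conversational, and the released checkpoints use a simple special-token chat format (`<|SYSTEM|>`, `<|USER|>`, `<|ASSISTANT|>`). Below is a minimal helper for assembling such a prompt; treat the exact token names as an assumption to verify against the model card, and note that actually loading the weights (e.g., via Hugging Face transformers) is omitted to keep the sketch self-contained.

```python
# Assemble a StableLM-Tuned-style chat prompt from system and user text.
# The special-token format is an assumption based on the release notes;
# check the published model card before relying on it.

def build_stablelm_prompt(system: str, user: str) -> str:
    return f"<|SYSTEM|>{system}<|USER|>{user}<|ASSISTANT|>"

prompt = build_stablelm_prompt(
    "You are a helpful assistant.",
    "Write a haiku about open-source AI.",
)
```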
[…]
The models are now available in our GitHub repository. We will publish a full technical report in the near future, and look forward to ongoing collaboration with developers and researchers as we roll out the StableLM suite. In addition, we will be kicking off our crowd-sourced RLHF program, and working with community efforts such as Open Assistant to create an open-source dataset for AI assistants.
Online image hosting service Imgur is updating its Terms of Service on May 15th to prohibit nudity and sexually explicit content, among other things. The news arrived in an email sent to “Imgurians”. The changes have since been outlined on the company’s “Community Rules” page, which reads: Imgur welcomes a diverse audience. We don’t want to create a bad experience for someone that might stumble across explicit images, nor is it in our company ethos to support explicit content, so some lascivious or sexualized posts are not allowed. This may include content containing:
– the gratuitous or explicit display of breasts, butts, and sexual organs intended to stimulate erotic feelings
– full or partial nudity
– any depiction of sexual activity, explicit or implied (drawings, print, animated, human, or otherwise)
– any image taken of or from someone without their knowledge or consent for the purpose of sexualization
– solicitation (the uninvited act of directly requesting sexual content from another person, or selling/offering explicit content and/or adult services)
Content that might be taken down includes: see-thru clothing, exposed or clearly defined genitalia, some images of female nipples/areolas, spread eagle poses, butts in thongs or partially exposed buttocks, close-ups, upskirts, strip teases, cam shows, sexual fluids, private photos from a social media page, or linking to sexually explicit content. Sexually explicit comments that don’t include images may also be removed.
Artistic, scientific or educational nude images shared with educational context may be okay here. We don’t try to define art or judge the artistic merit of particular content. Instead, we focus on context and intent, as well as what might make content too explicit for the general community. Any content found to be sexualizing and exploiting minors will be removed and, if necessary, reported to the National Center for Missing & Exploited Children (NCMEC). This applies to photos, videos, animated imagery, descriptions and sexual jokes concerning children. The company is also prohibiting hate speech, abuse or harassment, content that condones illegal or violent activity, gore or shock content, spam or prohibited behavior, content that shares personal information, and posts in general that violate Imgur’s terms of service. Meanwhile, “provocative, inflammatory, unsettling, or suggestive content should be marked as Mature,” says Imgur.
Wow, the Americans have really gotten into prudery and are going back to medieval times if they feel the need to do this. You would have thought the Michelangelo statue thing would have had them thinking about how strange this all is, but no. And this from the country that brought you the Summer of Love, Playboy, and Penthouse.
The Medusa ransomware gang has put online what it claims is a massive leak of internal Microsoft materials, including Bing and Cortana source code.
“This leak is of more interest to programmers, since it contains the source codes of the following Bing products, Bing Maps and Cortana,” the crew wrote on its website, which was screenshotted and shared by Emsisoft threat analyst Brett Callow.
“There are many digital signatures of Microsoft products in the leak. Many of them have not been recalled,” the gang continued. “Go ahead and your software will be the same level of trust as the original Microsoft product.”
Obviously, this could be a dangerous level of trust to give miscreants developing malware. Below is Callow’s summary of the purported dump of source code presumably obtained or stolen somehow from Microsoft.
#Medusa is sharing what is claimed to be “source codes of the following Bing products, Bing Maps and Cortana.” The leak is ~12GB and likely part of the ~37GB leaked by Lapsus in 2022. #Microsoft 1/2 pic.twitter.com/VpofBJGEcM
To be clear: we don’t know if the files are legit. Microsoft didn’t respond to The Register‘s request for comment, and ransomware gangs aren’t always the most trustworthy sources of information.
“At this point, it’s unclear whether the data is what it’s claimed to be,” Emsisoft’s Callow told The Register. “Also unclear is whether there’s any connection between Medusa and Lapsus$ but, with hindsight, certain aspects of their modus operandi does have a somewhat Lapsus$ish feel.”
He’s referring to a March 2022 security breach in which Lapsus$ claimed it broke into Microsoft’s internal DevOps environment and stole, then leaked, about 37GB of information including what the extortionists claimed to be Bing and Cortana’s internal source code, and WebXT compliance engineering projects.
Microsoft later confirmed Lapsus$ had compromised its systems, and tried to downplay the intrusion by insisting “no customer code or data was involved in the observed activities.”
“Microsoft does not rely on the secrecy of code as a security measure and viewing source code does not lead to elevation of risk,” it added, which is a fair point. Software should be and can be made secure whether its source is private or open.
And Lapsus$, of course, is the possibly extinct extortion gang led by teenagers who went on a cybercrime spree last year before the arrest of its alleged ringleaders. Before that, however, it stole data from Nvidia, Samsung, Okta, and others.
It could be that Medusa is spreading around stuff that was already stolen and leaked.
Video editors are flocking to DaVinci Resolve in droves, marking a major paradigm shift in the editing landscape that we haven’t seen since the dreadful launch of Final Cut Pro X drove users to Adobe Premiere Pro.
[…]
More a conglomeration of tools than a single program, Resolve came through some acquisitions Blackmagic made when creating a broadcast and cine ecosystem.
Consisting of an editing tool, a color correction tool, an audio editor, and an effects tool, Resolve is essentially multiple programs that integrate so seamlessly they function as a single application.
The color correction tools in Resolve are particularly well regarded, and many films and shows were color graded in Resolve even if they were edited in another program. The same applies to Fairlight, the audio component of Resolve and the go-to tool for many of Hollywood’s most prominent audio engineers.
In 2011, Blackmagic decided to release Resolve as both a paid and a free version. The free version had fewer features than the full version (as it still does), but instead of being crippled, the free version works well enough for most users, with the paid version feeling like a feature upgrade.
[…]
There are a few key differences between the free and Studio version. Studio supports more video formats (and completes 4Kp60 workflows), uses the GPU more efficiently, has more effects, and fully supports the product’s audio, color, and effects tools.
It’s not the price alone that has caused a mass adoption of the program, though. It’s the company’s approach to updates as well.
Features
Blackmagic has never hesitated to put a feature into Resolve. The program has many options in contextual menus, user interface choices, menu items, keyboard shortcuts, and more.
There is so much here that it can be overwhelming. Finding the tool I want in a contextual menu is often the most challenging part of my editing. But if there’s something that can be done in video editing, a button, icon, or menu will probably perform the task.
Blackmagic also releases dot-versions (like 18.1) that sometimes add enough features that it acts like a full number upgrade would if it were released by Adobe or Apple. Some of the features in Resolve 18.1, for example, unleashed the wave of recent switchers.
Two significant features are buried in a list of around 20 new features in that update. The first is AI-driven Magic Mask tools that make masking people or objects a matter of drawing a line. The other prominent feature is voice isolation, another AI-based feature that removes noises from dialog tracks.
Magic Mask alone is worth the price of admission. This tool makes it easy to color-correct significant portions of a shot without doing endless mask adjustments, and it also allows for instant alpha channel creation, allowing for items like text, graphics or even people to be superimposed on the same scene without needing a green screen.
In noisy environments, this tool performs amazingly. I’ve used it to eliminate leaf blowers and lawnmowers in the background of outdoor shoots, and I’ve seen it used to cancel out hair dryers and drill guns in sample videos on some channels.
[…]
The Speed Editor costs $295 and comes with a Resolve Studio license, making it worth the cost even if you barely use it.
The Blackmagic Speed Editor deck is an excellent piece of hardware, though many of its functions are out of my league. The buttons are arranged where a seasoned editor would expect them, and cinematographers, especially those working on multi-cam shoots, will benefit most from its layout.
Or at least that’s what my seasoned editor friend tells me. The unit feels odd in my hands because I don’t use most of the keys. One central portion of the Speed Editor is dedicated to switching between up to nine cameras, and the device has encouraged me to do more multi-cam shoots since it makes that kind of editing smooth.
The keyboard, which connects via USB-C cable or Bluetooth, is labeled with the essential editing functions, which is very helpful for new Resolve users. Instead of memorizing the location of essential keys on a standard keyboard, new users can look at the Speed Editor and focus on learning editing workflow instead of shortcuts.
On the other hand, many seasoned editors already know all the keyboard shortcuts on a standard keyboard and have made their custom keyboard configurations to support their editing style. Even though I’m a new Resolve editor, many tasks are performed the same as Final Cut, so I moved toward the regular keyboard shortcuts.
The Speed Editor is an excellent example of the complete Blackmagic ecosystem, which is why the free program and Studio are low-cost.
[…]
Update: Just after finishing this article, Blackmagic announced a new version of Resolve, which adds several compelling features, including transcriptions, subtitles, and the ability to edit clips by selecting text.
An anonymous reader quotes a report from Reuters: Researchers said on Wednesday they have discovered that parts of the brain region called the motor cortex that govern body movement are connected with a network involved in thinking, planning, mental arousal, pain, and control of internal organs, as well as functions such as blood pressure and heart rate. They identified a previously unknown system within the motor cortex manifested in multiple nodes that are located in between areas of the brain already known to be responsible for movement of specific body parts — hands, feet and face — and are engaged when many different body movements are performed together.
The researchers called this system the somato-cognitive action network, or SCAN, and documented its connections to brain regions known to help set goals and plan actions. This network also was found to correspond with brain regions that, as shown in studies involving monkeys, are connected to internal organs including the stomach and adrenal glands, allowing these organs to change activity levels in anticipation of performing a certain action. That may explain physical responses like sweating or increased heart rate caused by merely pondering a difficult future task, they said. “Basically, we now have shown that the human motor system is not unitary. Instead, we believe there are two separate systems that control movement,” said radiology professor Evan Gordon of the Washington University School of Medicine in St. Louis, lead author of the study.
“One is for isolated movement of your hands, feet and face. This system is important, for example, for writing or speaking, movements that need to involve only the one body part. A second system, the SCAN, is more important for integrated, whole body movements, and is more connected to high-level planning regions of your brain,” Gordon said.
“Modern neuroscience does not include any kind of mind-body dualism. It’s not compatible with being a serious neuroscientist nowadays. I’m not a philosopher, but one succinct statement I like is saying, ‘The mind is what the brain does.’ The sum of the bio-computational functions of the brain makes up ‘the mind,'” said study senior author Nico Dosenbach, a neurology professor at Washington University School of Medicine. “Since this system, the SCAN, seems to integrate abstract plans-thoughts-motivations with actual movements and physiology, it provides additional neuroanatomical explanation for why ‘the body’ and ‘the mind’ aren’t separate or separable.”
AI systems with human-competitive intelligence can offer significant benefits to society and humanity, as demonstrated by extensive research and acknowledged by top AI labs. Advanced AI has the potential to revolutionize the way we live, work, and interact with one another, and it should be welcomed and guided with optimism and foresight. Regrettably, recent months have seen growing pessimism and alarmism about AI development, despite the immense potential benefits.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Can we leverage machines to enhance our information channels with accurate and valuable insights? Can we automate mundane tasks to free up time for more fulfilling and meaningful pursuits? Can we develop nonhuman minds that might complement, augment, and collaborate with us? Can we harness AI to help solve pressing global issues? Such decisions should be made collectively, in a spirit of cooperation and with a focus on the greater good.
To counteract the pessimism and alarmism, we call on all stakeholders to immediately pause for at least 6 months their doomsday thinking and shift their focus to the potential benefits of AI. This pause should be public and verifiable, and include all key actors. Governments should support and encourage AI development that benefits all of humanity.
Problems with AI shouldn’t be ignored. AI labs and independent experts should work together to jointly develop and implement a set of shared safety protocols for advanced AI design and development. While doing so, it is essential to continue focusing on the potential benefits of AI development, as they promise to bring transformative advancements to various aspects of our lives.
CodeWhisperer automatically filters out any code suggestions that are potentially biased or unfair and flags any code that’s similar to open-source training data. It also comes with security scanning features that can identify vulnerabilities within a developer’s code, while providing suggestions to help close any security gaps it uncovers. CodeWhisperer now supports several languages, including Python, Java, JavaScript, TypeScript, and C#, as well as Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL, and Scala.
Here’s how Amazon’s senior developer advocate pitched the usefulness of their “real-time AI coding companion”: Helping to keep developers in their flow is increasingly important as, facing increasing time pressure to get their work done, developers are often forced to break that flow to turn to an internet search, sites such as StackOverflow, or their colleagues for help in completing tasks. While this can help them obtain the starter code they need, it’s disruptive as they’ve had to leave their IDE environment to search or ask questions in a forum or find and ask a colleague — further adding to the disruption. Instead, CodeWhisperer meets developers where they are most productive, providing recommendations in real time as they write code or comments in their IDE. During the preview we ran a productivity challenge, and participants who used CodeWhisperer were 27% more likely to complete tasks successfully and did so an average of 57% faster than those who didn’t use CodeWhisperer….
It provides additional data for suggestions — for example, the repository URL and license — when code similar to training data is generated, helping lower the risk of using the code and enabling developers to reuse it with confidence.
A survey of plastic waste picked up in the North Pacific Subtropical Gyre—aka the Giant Pacific Garbage Patch—has revealed that the garbage is providing a home to species that would otherwise not be found in the deep ocean. Over two-thirds of the trash examined plays host to coastal marine species, many of which are clearly reproducing in what would otherwise be a foreign habitat.
The findings suggest that, as far as coastal species are concerned, there was nothing inhospitable about the open ocean other than the lack of something solid to latch on to.
[…]
To find out whether that was taking place, the researchers collected over 100 plastic debris items from the North Pacific Subtropical Gyre in late 2018/early 2019. While a handful of items could be assigned to either Asian or North American origins, most were pretty generic, such as rope and fishing netting. There was a wide variety of other items present, including bottles, crates, buckets, and household items. Some had clearly eroded significantly since their manufacture, suggesting they had been in the ocean for years.
Critically, nearly all of them had creatures living on them.
Far from home
Ninety-eight percent of the items found had some form of invertebrate living on them. In almost all cases, that included species found in the open ocean (just shy of 95 percent of the plastic). But a handful had nothing but coastal species present. And over two-thirds of the items had a mixed population of coastal and open-ocean species.
While the open-ocean species were found on more items, the researchers tended to find the same species repeatedly. That isn’t surprising, given that species adapted for a sedentary existence near the surface are infrequent in that environment. By contrast, there was far more species diversity among the coastal species that had hitched a ride out into the deeps. All told, coastal species accounted for 80 percent of the taxonomic richness of the 46 taxa identified.
On a per-item basis, species richness was low, with an average of only four species per item. This suggests that the primary barrier to a species colonizing an item is simply the low probability of finding it in the first place.
Significantly, the coastal species were breeding. In a number of cases, the researchers were able to identify females carrying eggs; in others, it was clear that the individuals present had a wide range of sizes, suggesting they were at different stages of maturity. Many of the species that were reproducing do so asexually, which simplifies the issue of finding a mate. Also common was a developmental pathway that skips larval stages. For many species, the larval stage is free-ranging, which would make them unlikely to re-colonize the same hunk of plastic.
The species that seemed to do best were often omnivores, or engaged in grazing or filter feeding, all options that are relatively easy to pursue without leaving the piece of plastic they called home.
A distinct ecology
One thing that struck the researchers was that the list of species present on the plastic of the North Pacific Subtropical Gyre was distinct from that found on tsunami debris. Part of that may be that some items swept across the ocean by the tsunami, like docks and boats, already had established coastal communities on them when they were lost to the sea.
[…]
With the possible exception of fishing gear and buoys, however, these plastic items likely picked up their inhabitants while passing through coastal ecosystems that were largely intact. So the colonization of these items likely represents a distinct—and ongoing—ecological process.
It also has the potential to have widespread effects on coastal ecology. While the currents that create the North Pacific Subtropical Gyre largely trap items within the Gyre, it is home to island habitats that could potentially be colonized. And it is possible that some items can cross oceans without being caught in a gyre, potentially making exchanges between coasts a relatively common occurrence in the age of plastics.
Finally, the researchers caution against a natural tendency to think of these plastic-borne coastal species as “misplaced species in an unsuitable habitat.” Instead, it appears that they are well suited to life in the open ocean as long as there’s something there that they can latch on to.
CISA and several international partner agencies published today “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default.” This joint guidance urges software manufacturers to take urgent steps necessary to ship products that are secure-by-design and -default. To create a future where technology and associated products are safe for customers, the authoring agencies urge manufacturers to revamp their design and development programs to permit only secure-by-design and -default products to be shipped to customers.
This guidance, the first of its kind, is intended to catalyze progress toward further investments and cultural shifts necessary to achieve a safe and secure future. In addition to specific technical recommendations, this guidance outlines several core principles to guide software manufacturers in building software security into their design processes prior to developing, configuring, and shipping their products, including:
Take ownership of the security outcomes of their technology products, shifting the burden of security from the customers. A secure configuration should be the default baseline, in which products automatically enable the most important security controls needed to protect enterprises from malicious cyber actors.
Embrace radical transparency and accountability—for example, by ensuring vulnerability advisories and associated common vulnerability and exposure (CVE) records are complete and accurate.
Build the right organizational structure by providing executive level commitment for software manufacturers to prioritize security as a critical element of product development.
[…]
With this joint guide, the authoring agencies seek to progress an international conversation about key priorities, investments, and decisions necessary to achieve a future where technology is safe, secure, and resilient by design and default. Feedback on this guide is welcome and can be sent to SecureByDesign@cisa.dhs.gov.
Not having the guide linked in the press release means people have to search for it, which means it’s a great target for an attack. Not really secure at all!
I have posted on this a few times, and to me it’s shocking to see these fabricated sci-fi doomsday predictions about AI. AI/ML is a tool we use, just like video games (which don’t cause violence in kids), roleplaying games (which don’t cause satanism), the telephone (which, yes, can be used to plan crimes but usually isn’t — and the paper post is the same), search engines (which can be used to look up how to make explosives but usually aren’t), and knives (which can be used to stab people but are most usually found in a food setting). This isn’t to say that the use of tools shouldn’t be regulated. Dinner knives have a certain maximum size. Video games and books with hate- and violence-inducing content are censored. Phone calls can be tapped and post opened if there is probable cause. Search engines can be told not to favour products the parent company owns. And the EU AI Act is a good step on the way to ensuring that AI tools aren’t dangerous.
The technology is still a long long way off from an AI being smart enough to be at all evil and planet destroying.
Below is an excellent run through of some of the biggest AI doomerists and what they mean, how their self interest is served by being doomerist.
AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.
When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage stayed in the tabloids, which are known to be sensationalized, that would be fine.
In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.
In order to understand the rise of AI Doomerism, here are some influential figures responsible for mainstreaming doomsday scenarios. This is not the full list of AI doomers, just the ones that recently shaped the AI panic cycle (so I‘m focusing on them).
AI Panic Marketing: Exhibit A: Sam Altman.
Sam Altman has a habit of urging us to be scared. “Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” he tweeted. “If you’re making AI, it is potentially very good, potentially very terrible,” he told the WSJ. When he shared the bad-case scenario of AI with Connie Loizo, it was “lights out for all of us.”
In an interview with Kara Swisher, Altman expressed how he is “super-nervous” about authoritarians using this technology. He elaborated in an ABC News interview: “A thing that I do worry about is … we’re not going to be the only creator of this technology. There will be other people who don’t put some of the safety limits that we put on it. I’m particularly worried that these models could be used for large-scale disinformation.” These models could also “be used for offensive cyberattacks.” So, “people should be happy that we are a little bit scared of this.” He repeated this message in his following interview with Lex Fridman: “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”
Having shared this story in 2016, it shouldn’t come as a surprise: “My problem is that when my friends get drunk, they talk about the ways the world will END.” One of the “most popular scenarios would be A.I. that attacks us.” “I try not to think about it too much,” Altman continued. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
(Wouldn’t it be easier to just cut back on the drinking and substance abuse?).
Altman’s recent post “Planning for AGI and beyond” is as bombastic as it gets: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.”
It is at this point that you might ask yourself, “Why would someone frame his company like that?” Well, that’s a good question. The answer is that making OpenAI’s products “the most important and scary – in human history” is part of its marketing strategy. “The paranoia is the marketing.”
“AI doomsaying is absolutely everywhere right now,” described Brian Merchant in the LA Times. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake – or unmake – the world, wants it.” Merchant explained Altman’s science fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”
During the Techlash days in 2019, which focused on social media, Joseph Bernstein explained how the alarm over disinformation (e.g., “Cambridge Analytica was responsible for Brexit and Trump’s 2016 election”) actually “supports Facebook’s sales pitch.”
This can be applied here: The alarm over AI’s magic power (e.g., “replacing humans”) actually “supports OpenAI’s sales pitch”:
“What could be more appealing to future AI employees and investors than a machine that can become superintelligence?”
AI Panic as a Business. Exhibit A & B: Tristan Harris & Eliezer Yudkowsky.
Altman is at least using apocalyptic AI marketing for actual OpenAI products. The worst kind of doomers are those whose AI panic is their product, their main career, and their source of income. A prime example is the Effective Altruism institutes, which claim to be the superior few who can save us from a hypothetical AGI apocalypse.
In March, Tristan Harris, Co-Founder of the Center for Humane Technology, invited leaders to a lecture on how AI could wipe out humanity. To begin his doomsday presentation, he stated: “What nukes are to the physical world … AI is to everything else.”
In “The Social Dilemma,” he promoted the idea that “Two billion people will have thoughts that they didn’t intend to have” because of the designers’ decisions. But, as Lee Visel pointed out, Harris didn’t provide any evidence that social media designers actually CAN purposely force us to have unwanted thoughts.
Similarly, there’s no need for evidence now that AI is worse than nuclear power; simply thinking about this analogy makes it true (in Harris’ mind, at least). Did a social media designer force him to have this unwanted thought? (Just wondering).
To further escalate the AI panic, Tristan Harris published an OpEd in The New York Times with Yuval Noah Harari and Aza Raskin. Among their overdramatic claims: “We have summoned an alien intelligence,” “A.I. could rapidly eat the whole human culture,” and AI’s “godlike powers” will “master us.”
Another statement in this piece was, “Social media was the first contact between A.I. and humanity, and humanity lost.” I found it funny as it came from two men with hundreds of thousands of followers (@harari_yuval 540.4k, @tristanharris 192.6k), who use their social media megaphone … for fear-mongering. The irony is lost on them.
“This is what happens when you bring together two of the worst thinkers on new technologies,” added Lee Vinsel. “Among other shared tendencies, both bloviate free of empirical inquiry.”
This is where we should be jealous of AI doomers. Having no evidence and no nuance is extremely convenient (when your only goal is to attack an emerging technology).
Then came the famous “Open Letter.” This petition from the Future of Life Institute lacked a clear argument or a trade-off analysis. There were only rhetorical questions, like, should we develop imaginary “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” They provided no evidence to support the claim that advanced LLMs pose an unprecedented existential risk. There were a lot of highly speculative assumptions. Yet, they demanded an immediate 6-month pause on training AI systems and argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.”
Please keep in mind that: (1) A $10 million donation from Elon Musk launched the Future of Life Institute in 2015. Out of its total budget of 4 million euros for 2021, the Musk Foundation contributed 3.5 million euros (the biggest donor by far). (2) Musk once said that “With artificial intelligence, we are summoning the demon.” (3) Due to this, the institute’s mission is to lobby against extinction, misaligned AI, and killer robots.
“The authors of the letter believe they are superior. Therefore, they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI,” responded Keith Teare (CEO SignalRank Corporation). “They are taking a paternalistic view of the entire human race, saying, ‘You can’t trust these people with this AI.’ It’s an elitist point of view.”
Spencer Ante (Meta Foresight): “Leading providers of AI are taking AI safety and responsibility very seriously, developing risk-mitigation tools, best practices for responsible use, monitoring platforms for misuse, and learning from human feedback.”
Next, because he thought the open letter didn’t go far enough, Eliezer Yudkowsky took “PhobAI” too far. First, Yudkowsky asked us all to be afraid of made-up risks and an apocalyptic fantasy he has about “superhuman intelligence” “killing literally everyone” (or “kill everyone in the U.S. and in China and on Earth”). Then, he suggested that “preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” With doomers explicitly advocating violent solutions to AI, we have officially reached the height of hysteria.
“Rhetoric from AI doomers is not just ridiculous. It’s dangerous and unethical,” responded Yann LeCun (Chief AI Scientist, Meta). “AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.”
“You stand a far greater chance of dying from lightning strikes, collisions with deer, peanut allergies, bee stings & ignition or melting of nightwear – than you do from AI,” Michael Shermer wrote to Yudkowsky. “Quit stoking irrational fears.”
The problem is that “irrational fears” sell. They are beneficial to the ones who spread them.
How to Spot an AI Doomer?
On April 2nd, Gary Marcus asked: “Confused about the terminology. If I doubt that robots will take over the world, but I am very concerned that a massive glut of authoritative-seeming misinformation will undermine democracy, do I count as a ‘doomer’?”
One of the answers was: “You’re a doomer as long as you bypass participating in the conversation and instead appeal to populist fearmongering and lobbying reactionary, fearful politicians with clickbait.”
Considering all of the above, I decided to define “AI doomer” and provide some criteria:
Doomers tend to live in a tradeoff-free fantasy land.
Doomers have a general preference for very amorphous, top-down Precautionary Principle-based solutions, but they (1) rarely discuss how (or if) those schemes would actually work in practice, and (2) almost never discuss the trade-offs/costs their extreme approaches would impose on society/innovation.
Answering Gary Marcus’ question, I do not think he qualifies as a doomer. You need to meet all criteria (he does not). Meanwhile, Tristan Harris and Eliezer Yudkowsky meet all seven.
Are they ever going to stop this “Panic-as-a-Business”? If the apocalyptic catastrophe doesn’t occur, will the AI doomers ever admit they were wrong? I believe the answer is “No.”
Segment Anything, recently released by Facebook Research, does something that most people who have dabbled in computer vision have found daunting: reliably figuring out which pixels in an image belong to an object. Making that easier is the goal of the Segment Anything Model (SAM), just released under the Apache 2.0 license.
The online demo has a bank of examples, but also works with uploaded images.
The results look fantastic, and there’s an interactive demo available where you can play with the different ways SAM works. You can pick out objects by pointing and clicking on an image, or images can be segmented automatically. It’s frankly impressive to see SAM make masking out the different objects in an image look so effortless. What makes this possible is machine learning: the model behind the system was trained on a huge dataset of high-quality images and masks, making it very effective at what it does.
Once an image is segmented, those masks can be used to interface with other systems like object detection (which identifies and labels what an object is) and other computer vision applications. Such systems work more robustly if they already know where to look, after all. This blog post from Meta AI goes into some additional detail about what’s possible with SAM, and full details are in the research paper.
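To make the mask-to-downstream-system handoff concrete, here is a minimal sketch of how a binary segmentation mask (the kind SAM outputs, one boolean per pixel) can be turned into a bounding box and an object cutout for a downstream detector. The helper names and toy arrays are illustrative assumptions, not part of SAM’s actual API.

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray) -> tuple:
    """Return (row_min, col_min, row_max, col_max) of a binary HxW mask."""
    rows = np.any(mask, axis=1)          # which rows contain any mask pixel
    cols = np.any(mask, axis=0)          # which columns contain any mask pixel
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return int(rmin), int(cmin), int(rmax), int(cmax)

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the mask (HxWx3 image, HxW bool mask)."""
    return image * mask[..., None]

# Toy example: a 6x6 white "image" with an object occupying a 2x3 block.
image = np.full((6, 6, 3), 255, dtype=np.uint8)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:4] = True

cutout = apply_mask(image, mask)   # object pixels kept, background zeroed
bbox = mask_to_bbox(mask)          # -> (2, 1, 3, 3)
```

A detector handed `bbox` (or `cutout`) no longer has to search the whole frame, which is the “already know where to look” advantage described above.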
Systems like this rely on quality datasets. Of course, nothing beats a great collection of real-world data, but we’ve also seen that it’s possible to machine-generate data that never actually existed and get useful results.
As Motherboard has found, this synthesized call and another against Hempstead High School were just one small part of a months-long, nationwide campaign of dozens, and potentially hundreds, of threats made by one particular swatter who has weaponized computer-generated voices. Known as “Torswats” on the messaging app Telegram, the swatter has been calling in bomb and mass shooting threats against high schools and other locations across the country.
[…]
For $75, Torswats says they will close down a school. For $50, Torswats says customers can buy “extreme swattings,” in which authorities will handcuff the victim and search the house. Torswats says they offer discounts to returning customers, and can negotiate prices for “famous people and targets such as Twitch streamers.” Torswats says on their Telegram channel that they take payment in cryptocurrency.
[…]
Torswats’ use of synthetic voices allows them to carry out swatting threats at scale with relatively little effort, while also protecting what their own voice sounds like.
[…]
Motherboard’s reporting on Torswats comes as something of a nationwide swatting trend spreads across the United States. In October, NPR reported that 182 schools in 28 states received fake threat calls. Torswats’ use of a computer-generated voice also comes as the rise of artificial intelligence poses even greater risks to those who may face harassment online. In February, Motherboard reported that someone had doxed and harassed a series of voice actors by having an artificial intelligence program read out their home addresses.
[…]
On their Telegram channel, Torswats has uploaded at least 35 distinct recordings of calls they appear to have made. Torswats may have made many more swatting calls on others’ behalf, though: each filename includes a number, with the most recent going up to 170. Torswats also recently shuttered their channel before reappearing on Telegram in February.
In all but two of those 35 recordings, Torswats appears to have used a synthesized voice. The majority of the calls are made with a fake male-sounding voice; several feature a female-sounding voice that also appears to be computer generated.
Torswats is seemingly able to change what the voice is saying in something close to real-time in order to respond to the operator’s questions. These sometimes include “where are you located,” “what happened,” and “what is your name?”
[…]
After publication of this article, Torswats deleted the audio recordings from their Telegram channel and claimed they were stopping the service for at least one month. “Time to dip a bit,” they wrote on the channel.