Two people charged with hacking Ring security cameras to livestream swattings

In a reminder of smart home security’s dark side, two people hacked Ring security cameras to livestream swattings, according to a Los Angeles grand jury indictment reported by Bloomberg. The pair called in hoax emergencies to authorities and livestreamed the police response on social media in late 2020.

James Thomas Andrew McCarty, 20, of Charlotte, North Carolina, and Kya Christian Nelson, 21, of Racine, Wisconsin, hacked into Yahoo email accounts to gain access to 12 Ring cameras across nine states in November 2020 (disclaimer: Yahoo is Engadget’s parent company). In one of the incidents, Nelson claimed to be a minor reporting their parents for firing guns while drinking alcohol. When police arrived, the pair used the Ring cameras to taunt the victims and officers while livestreaming — a pattern appearing in several incidents, according to prosecutors.

[…]

Although the smart devices can deter things like robberies and “porch pirates,” Amazon admits to providing footage to police without user consent or a court order when it believes someone is in danger. Inexplicably, the tech giant made a zany reality series using Ring footage, which didn’t exactly quell concerns about the tech’s Orwellian side.

Source: Two people charged with hacking Ring security cameras to livestream swattings | Engadget

Amazing that people don’t realise Amazon is building a total, constant surveillance system out of hardware they paid for themselves.

Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit

It’s been four years since Facebook became embroiled in its biggest scandal to date: Cambridge Analytica. In addition to paying a $5 billion penalty to the Federal Trade Commission in a settlement, the social network has just agreed to pay $725 million to settle a long-running class-action lawsuit, the largest-ever settlement in a US data privacy class action.

To recap, a whistleblower revealed in 2018 that now-defunct British political consulting firm Cambridge Analytica harvested the personal data of almost 90 million users without their consent for targeted political ads during the 2016 US presidential campaign and the UK’s Brexit referendum.

The controversy led to Mark Zuckerberg testifying before Congress, a $5 billion fine levied on the company by the FTC in July 2019, and a $100 million settlement with the US Securities and Exchange Commission. There was also a class-action lawsuit filed in 2018 on behalf of Facebook users who alleged the company violated consumer privacy laws by sharing private data with other firms.

Facebook parent Meta settled the class action in August, thereby ensuring CEO Mark Zuckerberg, chief operating officer Javier Olivan, and former COO Sheryl Sandberg avoided hours of questioning from lawyers while under oath.

[…]

This doesn’t mark the end of Meta’s dealings with the Cambridge Analytica fallout. Zuckerberg is facing a lawsuit from Washington DC’s attorney general Karl A. Racine over allegations that the Meta boss was personally involved in failures that led to the incident and that his “policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct.”

Source: Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit | TechSpot

OpenAI releases Point-E, an AI that generates 3D point clouds / meshes

[…] This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

[…]

Aside from the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.

After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt.

[…]

Earlier this year, Google released DreamFusion, an expanded version of Dream Fields, a generative 3D system that the company unveiled back in 2021. Unlike Dream Fields, DreamFusion requires no prior 3D training, meaning that it can generate 3D representations of objects without being trained on 3D data.

[…]

Source: OpenAI releases Point-E, an AI that generates 3D models | TechCrunch
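
If you want to poke at this yourself, the released code is fairly easy to drive. Below is a rough sketch based on my recollection of the text-to-point-cloud example in OpenAI’s point-e repo: it uses the small text-conditioned checkpoint, which skips the intermediate synthetic image and goes straight from prompt to point cloud (the image-conditioned model in the full pipeline is driven the same way, just fed an image instead of a prompt). Treat the module paths, config names and sampler arguments as assumptions and check them against the repo’s own notebooks before running.

```python
# Rough sketch adapted from memory of the text-to-point-cloud example
# notebook in github.com/openai/point-e -- module paths, config names and
# sampler arguments are best-effort recollections, so verify them against
# the repo's own notebooks (and pip install from the repo) before running.
import torch
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Text-conditioned base model: goes straight from a prompt to ~1K points.
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

# Upsampler: densifies the cloud from 1K to 4K coloured points.
upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint('upsample', device))
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the prompt
)

prompt = 'a 3D printable gear, a single gear 3 inches in diameter and half inch thick'
samples = None
for x in tqdm(sampler.sample_batch_progressive(batch_size=1,
                                               model_kwargs=dict(texts=[prompt]))):
    samples = x  # keep only the final denoising step

point_cloud = sampler.output_to_point_clouds(samples)[0]
print(f'generated {len(point_cloud.coords)} points for: {prompt}')
```

On the V100-class hardware mentioned above this takes on the order of a minute or two; on a laptop CPU, considerably longer.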

The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular

You may have noticed the world getting excited about the capabilities of ChatGPT, a text-based AI chat bot. Similarly, some are getting quite worked up over generative AI systems that can turn text prompts into images, including those mimicking the style of particular artists. But less remarked upon is the use of AI in the world of music. Music Business Worldwide has written two detailed news stories on the topic. The first comes from China:

Tencent Music Entertainment (TME) says that it has created and released over 1,000 tracks containing vocals created by AI tech that mimics the human voice.

And get this: one of these tracks has already surpassed 100 million streams.

Some of these songs use synthetic voices based on human singers, both dead and alive:

TME also confirmed today (November 15) that – in addition to “paying tribute” to the vocals of dead artists via the Lingyin Engine – it has also created “an AI singer lineup with the voices of trending [i.e. currently active] stars such as Yang Chaoyue, among others”.

The copyright industry will doubtless have something to say about that. It is also unlikely to be delighted by the second Music Business Worldwide story about AI-generated music, this time in the Middle East and North Africa (MENA) market:

MENA-focused Spotify rival, Anghami, is now taking the concept to a whole other level – claiming that it will soon become the first platform to host over 200,000 songs generated by AI.

Anghami has partnered with a generative music platform called Mubert, which says it allows users to create “unique soundtracks” for various uses such as social media, presentations or films using one million samples from over 4,000 musicians.

According to Mohammed Ogaily, VP Product at Anghami, the service has already “generated over 170,000 songs, based on three sets of lyrics, three talents, and 2,000 tracks generated by AI”.

It’s striking that the undoubtedly interesting but theoretical possibilities of ChatGPT and generative AI art are dominating the headlines, while we hear relatively little about these AI-based music services that are already up and running, and hugely popular with listeners. It’s probably a result of the generally parochial nature of mainstream Western media, which often ignores the important developments happening elsewhere.

Source: The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular | Techdirt

AI-Created Comic Has Copyright Protection Revoked by US

The United States Copyright Office (USCO) has reversed its earlier decision to grant copyright protection to a comic book created using “A.I. art,” announcing that the registration will be revoked because copyrighted works must be created by humans to gain official protection.

In September, Kris Kashtanova announced that they had received a U.S. copyright for their comic book Zarya of the Dawn, which was inspired by their late grandmother and created with the text-to-image engine Midjourney. Kashtanova, who describes themselves as a “prompt engineer,” explained at the time that they sought the copyright so that they could “make a case that we do own copyright when we make something using AI.”

[…]

Source: AI-Created Comic Has Been Deemed Ineligible for Copyright Protection

I guess there is no big corporate interest lobbying for AI-created content – yet – and so the copyright office has no idea what to do without its cash-carrying corporate masters telling it what to do.

ChatGPT Is a ‘Code Red’ for Google’s Search Business

A new wave of chat bots like ChatGPT use artificial intelligence that could reinvent or even replace the traditional internet search engine. From a report: Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs. Three weeks ago, an experimental chat bot called ChatGPT made its case to be the industry’s next big disrupter. […] Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chat bot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google’s future. ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chat bots, because of the many ways the technology could damage its business.

Source: ChatGPT Is a ‘Code Red’ for Google’s Search Business – Slashdot

FBI warns of fake shopping sites – recommends using an ad blocker

The FBI is warning the public that cyber criminals are using search engine advertisement services to impersonate brands and direct users to malicious sites that host ransomware and steal login credentials and other financial information.

[…]

Cyber criminals purchase advertisements that appear within internet search results using a domain that is similar to an actual business or service. When a user searches for that business or service, these advertisements appear at the very top of search results with minimum distinction between an advertisement and an actual search result. These advertisements link to a webpage that looks identical to the impersonated business’s official webpage.

[…]

The FBI recommends individuals take the following precautions:

  • Before clicking on an advertisement, check the URL to make sure the site is authentic. A malicious domain name may be similar to the intended URL but with typos or a misplaced letter.
  • Rather than search for a business or financial institution, type the business’s URL into an internet browser’s address bar to access the official website directly.
  • Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.

The FBI recommends businesses take the following precautions:

  • Use domain protection services to notify businesses when similar domains are registered to prevent domain spoofing.
  • Educate users about spoofed websites and the importance of confirming destination URLs are correct.
  • Educate users about where to find legitimate downloads for programs provided by the business.

Source: Internet Crime Complaint Center (IC3) | Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users

For Firefox there’s uBlock Origin, plus NoScript / Disconnect / Facebook Container / Privacy Badger / Ghostery / Super Agent / LocalCDN – you can run them all at once, though you’ll sometimes have to whitelist certain sites just to get them to work. It’s a bit of trouble, but the internet looks much better when it’s mostly ad-free.
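
The FBI’s “check for typos or a misplaced letter” advice is also easy to automate for the handful of sites you actually log in to. Here’s a small self-contained sketch of the idea – the trusted-domain list is just an example – that flags hostnames sitting within a couple of character edits of a domain you trust:

```python
from urllib.parse import urlparse

# Example allowlist -- replace with the handful of sites you actually log in to.
TRUSTED_DOMAINS = {"paypal.com", "amazon.com", "mybank.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def check_url(url: str) -> str:
    """Classify a URL as trusted, a likely look-alike, or unknown."""
    host = urlparse(url).hostname or ""
    # Compare against the registrable part only (crude: last two labels).
    domain = ".".join(host.split(".")[-2:])
    if domain in TRUSTED_DOMAINS:
        return f"{domain}: exact match, looks legitimate"
    for good in TRUSTED_DOMAINS:
        if edit_distance(domain, good) <= 2:
            return f"{domain}: suspiciously close to {good} -- do not enter credentials"
    return f"{domain}: not on your trusted list -- proceed with care"

if __name__ == "__main__":
    for u in ["https://www.paypal.com/signin",
              "https://www.paypa1.com/signin",     # digit 1 in place of the letter l
              "https://secure-amaz0n.com/login"]:
        print(check_url(u))
```

It’s crude – it ignores country-code TLDs, subdomain tricks and unicode look-alikes – but it catches exactly the “paypa1.com” class of trickery the FBI is describing.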

LastPass admits attackers copied password vaults

Password locker LastPass has warned customers that the August 2022 attack on its systems saw unknown parties copy encrypted files that contain the passwords to their accounts.

In a December 22nd update to its advice about the incident, LastPass brings customers up to date, explaining that in the August 2022 attack “some source code and technical information were stolen from our development environment and used to target another employee, obtaining credentials and keys which were used to access and decrypt some storage volumes within the cloud-based storage service.”

Those creds allowed the attacker to copy information “that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.”

The update reveals that the attacker also copied “customer vault” data – the file LastPass uses to let customers record their passwords.

That file “is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.”

Which means the attackers have users’ passwords. But thankfully those passwords are encrypted with “256-bit AES encryption and can only be decrypted with a unique encryption key derived from each user’s master password”.

LastPass’ advice is that even though attackers have that file, customers who use its default settings have nothing to do as a result of this update as “it would take millions of years to guess your master password using generally-available password-cracking technology.”

One of those default settings is not to re-use the master password that is required to log into LastPass. The outfit suggests you make it a complex credential and use that password for just one thing: accessing LastPass.

Yet we know that users are often dumbfoundingly lax at choosing good passwords, while two-thirds re-use passwords even though they should know better.

[…]

LastPass therefore offered the following advice to individual and business users:

If your master password does not make use of the defaults above, then it would significantly reduce the number of attempts needed to guess it correctly. In this case, as an extra security measure, you should consider minimizing risk by changing passwords of websites you have stored.

Enjoy changing all those passwords, dear reader.

LastPass’s update concludes with news it decommissioned the systems breached in August 2022 and has built new infrastructure that adds extra protections.

Source: LastPass admits attackers copied password vaults
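
To see why the master password is the whole ball game here: the vault key is derived from it, so whoever holds a stolen vault can guess master passwords offline, and the only brakes are the cost of each key derivation and the password’s entropy. A back-of-envelope sketch follows – LastPass has publicly described using PBKDF2-SHA256 for this, but the salt and iteration count below are illustrative assumptions, not its exact parameters:

```python
import hashlib
import time

# Illustrative parameters only -- not LastPass's exact scheme.
ITERATIONS = 100_100                 # assumed PBKDF2 iteration count
SALT = b"user@example.com"           # hypothetical per-user salt

def derive_vault_key(master_password: str) -> bytes:
    """One key derivation -- the unit of work an attacker repeats per guess."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               SALT, ITERATIONS, dklen=32)

# Cost of a single guess on this machine.
start = time.perf_counter()
derive_vault_key("correct horse battery staple")
per_guess = time.perf_counter() - start

# Back-of-envelope: a random 12-character password over ~70 printable symbols.
keyspace = 70 ** 12
expected_years = keyspace * per_guess / 2 / (3600 * 24 * 365)
print(f"~{per_guess * 1000:.0f} ms per guess on this machine")
print(f"expected single-machine brute force: ~{expected_years:.2e} years")
```

A strong, unique, never-re-used master password really is doing all the work here; a short or re-used one turns “millions of years” into a weekend with a dictionary.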