We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality the learner maintains a mechanism for changing the classification of any input with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

- First, we show how to plant a backdoor in any model using digital signature schemes. The construction guarantees that, given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input on which they differ. This property implies that the backdoored model has generalization error comparable to the original model. Moreover, even a distinguisher that can request backdoored inputs of its choice cannot produce a backdoored version of a new input, a property we call non-replicability.
- Second, we demonstrate how to insert undetectable backdoors into models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi and Recht, NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.
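The flavor of the signature-based construction can be caricatured in a few lines of Python. This is only an illustrative sketch, not the paper's construction: the paper uses public-key digital signatures (so the backdoor can be triggered without the verification mechanism revealing the signing key), whereas this toy uses a symmetric HMAC, and appending tag bytes stands in for the "slight perturbation." All names here are hypothetical.

```python
import hmac
import hashlib

SECRET_KEY = b"backdoor-key"  # known only to the malicious learner
TAG_LEN = 8                   # bytes of MAC appended as the "perturbation"

def base_model(x: bytes) -> int:
    """Stand-in for the honestly trained binary classifier."""
    return sum(x) % 2

def backdoored_model(x: bytes) -> int:
    """Behaves exactly like base_model unless the input carries a valid tag.
    Without SECRET_KEY, finding an input where the two models differ amounts
    to forging the MAC."""
    payload, tag = x[:-TAG_LEN], x[-TAG_LEN:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    if hmac.compare_digest(tag, expected):
        return 1 - base_model(payload)  # flip the label on a valid key
    return base_model(x)

def activate_backdoor(x: bytes) -> bytes:
    """Slightly perturb x (here: append a tag) so the label flips."""
    tag = hmac.new(SECRET_KEY, x, hashlib.sha256).digest()[:TAG_LEN]
    return x + tag
```

On clean inputs the two models agree (up to a negligible chance of an accidental tag collision), which mirrors the abstract's claim that the backdoored model's generalization error matches the original's.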
[…]
Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
[…] It is, in fact, possible to uncrop images and documents across a variety of work-related computer apps, including those in Google Workspace, Microsoft Office, and Adobe Acrobat.
Being able to uncrop images and documents poses risks for sources who may be under the impression that cropped materials don’t contain the original uncropped content.
One of the hazards lies in the fact that, for some of the programs, downstream crop reversals are possible for viewers or readers of the document, not just the file’s creators or editors. Official instruction manuals, help pages, and promotional materials may mention that cropping is reversible, but this documentation at times fails to note that these operations are reversible by any viewers of a given image or document.
For instance, while Google’s help page mentions that a cropped image may be reset to its original form, the instructions are addressed to the document owner. “If you want to undo the changes you’ve made to your photo,” the help page says, “reset an image back to its original photo.” The page doesn’t specify that if a reader is viewing a Google Doc someone else created and wants to undo the changes the editor made to a photo, the reader, too, can reset the image without having edit permissions for the document.
For users with viewer-only access permissions, right-clicking on an image doesn’t yield the option to “reset image.” In this situation, however, all one has to do is right-click on the image, select copy, and then paste the image into a new Google Doc. Right-clicking the pasted image in the new document will allow the reader to select “reset image.” (I’ve put together an example to show how the crop reversal works in this case.)
[…]
Uncropped versions of images can be preserved not just in Office apps, but also in a file’s own metadata. A photograph taken with a modern digital camera contains many types of metadata. Many image files record text-based metadata such as the camera make and model or the GPS coordinates at which the image was captured. Some photos also include binary data, such as a thumbnail version of the original photo, that may persist in the file’s metadata even after the photo has been edited in an image editor.
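ExifTool can dump such an embedded thumbnail directly (via its `ThumbnailImage` tag), but the underlying observation is simple enough to sketch in pure Python: a second JPEG start-of-image marker inside a file is a strong hint that a preview image is embedded in the metadata. This is a crude heuristic for illustration, not a real EXIF parser, and the function name is mine.

```python
def find_embedded_jpegs(data: bytes) -> list:
    """Return byte offsets of JPEG start-of-image (SOI) markers found after
    the start of the file. An extra SOI inside a JPEG usually indicates an
    embedded preview or thumbnail image."""
    SOI = b"\xff\xd8\xff"  # SOI marker followed by the start of a segment
    offsets = []
    i = data.find(SOI, 1)  # skip the file's own SOI at offset 0
    while i != -1:
        offsets.append(i)
        i = data.find(SOI, i + 1)
    return offsets
```

Bytes from any reported offset to the next end-of-image marker can then be carved out and opened as a standalone image, which is essentially what metadata-extraction tools automate.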
Images and photos are not the only digital files susceptible to uncropping: Some digital documents may also be uncropped. While Adobe Acrobat has a page-cropping tool, the instructions point out that “information is merely hidden, not discarded.” By manually setting the margins to zero, it is possible to restore previously cropped areas in a PDF file.
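The mechanics are visible in the PDF format itself: each page carries a `/MediaBox` (the full page) and, when cropped, a `/CropBox` that merely narrows the visible region. The sketch below resets the `/CropBox` of an uncompressed page dictionary back to the `/MediaBox`. It is illustrative only: real-world PDFs typically compress their object streams, where a library such as pikepdf is the practical route, and the function name is hypothetical.

```python
import re

def uncrop_page(page_dict: bytes) -> bytes:
    """Reset a page's /CropBox to its /MediaBox so hidden margins reappear.
    Operates on a raw, uncompressed page dictionary -- a sketch, not a
    general-purpose PDF editor."""
    box = rb"/%s\s*\[([\d.\s-]+)\]"
    media = re.search(box % b"MediaBox", page_dict)
    if media is None:
        return page_dict  # no MediaBox found; leave the page untouched
    return re.sub(box % b"CropBox",
                  b"/CropBox [" + media.group(1) + b"]",
                  page_dict)
```

Setting the crop margins to zero in Acrobat's own UI performs the equivalent operation, which is why the cropped-away content is recoverable by any reader of the file.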
[…]
Images and documents should be thoroughly stripped of metadata using tools such as ExifTool and Dangerzone. Additionally, sensitive materials should not be edited through online tools, as the potential always exists for original copies of the uploaded materials to be preserved and revealed.
Soon, an Amazon corporate lawyer chimed in. She warned employees not to provide ChatGPT with “any Amazon confidential information (including Amazon code you are working on),” according to a screenshot of the message seen by Insider.
The attorney, a senior corporate counsel at Amazon, suggested employees follow the company’s existing conflict of interest and confidentiality policies because there have been “instances” of ChatGPT responses looking similar to internal Amazon data.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote.
[…]
“OpenAI is far from transparent about how they use the data, but if it’s being folded into training data, I would expect corporations to wonder: After a few months of widespread use of ChatGPT, will it become possible to extract private corporate information with cleverly crafted prompts?” said Emily Bender, who teaches computational linguistics at the University of Washington.
[…]
some Amazonians are already using the AI tool as a software “coding assistant” by asking it to improve internal lines of code, according to Slack messages seen by Insider.
[…]
For Amazon employees, data privacy seems to be the least of their concerns. They said using the chatbot at work has led to “10x in productivity,” and many expressed a desire to join internal teams developing similar services.
The number of nearby exploding stars (supernovae) has significantly influenced the biodiversity of marine life during the last 500 million years. That is the essence of a new study published in Ecology and Evolution by Henrik Svensmark of DTU Space.
Extensive studies of the fossil record have shown that the diversity of life forms has varied significantly over geological time, and a fundamental question of evolutionary biology is which processes are responsible for these variations.
The new study reveals a major surprise: The varying number of nearby exploding stars (supernovae) closely follows changes in the biodiversity of marine genera (the taxonomic rank above species) during the last 500 million years. The agreement appears after normalizing the marine diversity curve by changes in the area of shallow marine shelves along the continental coasts.
Shallow marine shelves are relevant since most marine life lives in these areas, and changes in shelf areas open new regions where species can evolve. Therefore, changes in available shallow areas influence biodiversity.
“A possible explanation for the supernova-diversity link is that supernovae influence Earth’s climate,” says Henrik Svensmark, author of the paper and senior researcher at DTU Space.
“A high number of supernovae leads to a cold climate with a large temperature difference between the equator and polar regions. This results in stronger winds, ocean mixing, and transportation of life-essential nutrients to the surface waters along the continental shelves.”
[Figure caption] Variations in relative supernova history (black curve) compared with genera-level diversity curves normalized by the area of shallow marine margins (shallow areas along the coasts). The brown and light-green curves show genera-level diversity of major marine animals; the orange curve shows marine invertebrate genera-level diversity; the dark-green curve shows genera-level diversity of all marine animals. Abbreviations for geological periods: Cm Cambrian, O Ordovician, S Silurian, D Devonian, C Carboniferous, P Permian, Tr Triassic, J Jurassic, K Cretaceous, Pg Palaeogene, Ng Neogene. Credit: Henrik Svensmark, DTU Space
The paper concludes that supernovae are vital for primary bioproductivity by influencing the transport of nutrients. Gross primary bioproductivity provides energy to the ecological systems, and speculations have suggested that changes in bioproductivity may influence biodiversity. The present results are in agreement with this hypothesis.
“The new evidence points to a connection between life on Earth and supernovae, mediated by the effect of cosmic rays on clouds and climate,” says Henrik Svensmark.
When heavy stars explode, they produce cosmic rays, which are elementary particles with enormous energies. Cosmic rays travel to our solar system, where some end their journey by colliding with Earth’s atmosphere. Previous studies by Henrik Svensmark and colleagues, referenced below, show that they become the primary source of the ions that help form and grow the aerosols required for cloud formation.
Since clouds can regulate the solar energy reaching Earth’s surface, the cosmic-ray-aerosol-cloud chain influences climate. Evidence shows substantial climate shifts when the intensity of cosmic rays changes by several hundred percent over millions of years.
More information: Henrik Svensmark, A persistent influence of supernovae on biodiversity over the Phanerozoic, Ecology and Evolution (2023). DOI: 10.1002/ece3.9898
Henrik Svensmark, Supernova Rates and Burial of Organic Matter, Geophysical Research Letters (2022). DOI: 10.1029/2021GL096376
Svensmark, H. and Friis-Christensen, E., Variation of Cosmic Ray Flux and Global Cloud Coverage - A Missing Link in Solar-Climate Relationships, Journal of Atmospheric and Terrestrial Physics, 59, 1225 (1997)
Nir J. Shaviv et al, The Phanerozoic climate, Annals of the New York Academy of Sciences (2022). DOI: 10.1111/nyas.14920
Henrik Svensmark, Evidence of nearby supernovae affecting life on Earth, Monthly Notices of the Royal Astronomical Society (2012). DOI: 10.1111/j.1365-2966.2012.20953.x
[…] As games grow bigger in scope, writers are facing the ratcheting challenge of keeping NPCs individually interesting and realistic. How do you keep each interaction with them – especially if there are hundreds of them – distinct? This is where Ghostwriter, an in-house AI tool created by Ubisoft’s R&D department, La Forge, comes in.
Ghostwriter isn’t replacing the video game writer, but instead, alleviating one of the video game writer’s most laborious tasks: writing barks. Ghostwriter effectively generates first drafts of barks – phrases or sounds made by NPCs during a triggered event – which gives scriptwriters more time to polish the narrative elsewhere. Ben Swanson, R&D Scientist at La Forge Montreal, is the creator of Ghostwriter, and remembers the early seeds of it ahead of his presentation of the tech at GDC this year.
[…]
Ghostwriter is the result of conversations with narrative designers who revealed a challenge, one that Ben identified could be solved with an AI tool. Crowd chatter and barks are central features of player immersion in games – NPCs speaking to each other, enemy dialogue during combat, or an exchange triggered when entering an area all provide a more realistic world experience and make the player feel like the game around them exists outside of their actions. However, both require time and creative effort from scriptwriters that could be spent on other core plot items. Ghostwriter frees up that time, but still allows the scriptwriters a degree of creative control.
“Rather than writing first draft versions themselves, Ghostwriter lets scriptwriters select and polish the samples generated,” Ben explains. This way, the tech is a tool used by the teams to support them in their creative journey, with every interaction and feedback originating from the members who use it.
In summary, scriptwriters first create a character and the type of interaction or utterance they would like to generate. Ghostwriter then proposes a number of variations, which the scriptwriter can choose from and edit freely to fit their needs. The process uses pairwise comparison as its method of evaluation and improvement: for each request, Ghostwriter presents two generated choices, and the scriptwriter selects one. The tool learns from each preferred choice and, after thousands of selections made by humans, becomes more effective and accurate.
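Ubisoft has not published Ghostwriter's training details, but learning from thousands of pairwise human choices is a standard preference-learning setup. As a purely hypothetical sketch of the idea, each candidate (a generator configuration, or a bark variant) can carry a Bradley-Terry-style score that is nudged every time a writer picks one of two samples:

```python
import math

def record_choice(scores: dict, winner: str, loser: str, lr: float = 0.1) -> None:
    """One Bradley-Terry-style update from a single pairwise choice.
    The predicted probability that `winner` beats `loser` is pushed
    toward the human's actual selection."""
    p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
    scores[winner] += lr * (1.0 - p_win)
    scores[loser] -= lr * (1.0 - p_win)
```

After many recorded choices, consistently preferred candidates accumulate higher scores and can be ranked or sampled first, which is one plausible mechanism behind the tool becoming "more effective and accurate" with use.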
[…]
The team’s ambition is to put this AI power in the hands of narrative designers, who will eventually be able to create their own AI systems tailored to their own design needs. To that end, they built a user-friendly back-end web tool called Ernestine, which allows anyone to create their own machine-learning models for use in Ghostwriter. Their hope is that teams will consider Ghostwriter before they start their narrative process and create their models with a vision in mind, effectively making the tech an integral part of the production pipeline.
Speed cameras have been around for a long time and so have dash cams. The uniquely devious idea of combining the two into a traffic hall monitor’s dream device had not been a practical reality until recently, though. According to the British Royal Automobile Club, such a combination is coming soon. The app, which is reportedly available in the U.K. as soon as May, will allow drivers to report each other directly to the police with video evidence for things like running red lights, failure to use a blinker, distracted driving, and yes, speeding.
The app’s founder, Oleksiy Afonin, recently held meetings with police to discuss how it would work. In a nutshell, video evidence of an alleged offense could be uploaded as soon as the driver who captured it stopped their vehicle to do so safely. According to the RAC, the footage could then be “submitted to the police through an official video portal in less than a minute.” Police reportedly were open to the idea of using the videos as evidence in court.
The RAC questioned whether such an app could be distracting. It certainly opens up a whole new world of crime reporting. In some cities, individuals can report poorly or illegally parked cars to traffic police. Drivers getting into the habit of reporting each other for speeding might be a slippery slope, though. The government would be happy to collect the ticket revenue but the number of citations for alleged speeding could be off the charts with such a system. Anybody can download the app and report someone else, but the evidence would need to be reviewed.
The app, called dashcamUK, will only be available in the United Kingdom, as its name indicates. Thankfully, it doesn’t seem like there are any plans to bring it Stateside. Considering the British public is far more open to the use of CCTV cameras in terms of recording crimes than Americans are, it will likely stay that way for that reason, among others.
In 2017, the DHS began quietly rolling out its facial recognition program, starting with international airports and aimed mainly at collecting/scanning people boarding international flights. Even in its infancy, the DHS was hinting this was never going to remain solely an international affair.
It made its domestic desires official shortly thereafter, with the TSA dropping its domestic surveillance “roadmap” which now included “expanding biometrics to additional domestic travelers.” Then the DHS and TSA ran silent for a bit, resurfacing in late 2022 with the news it was rolling out its facial recognition system at 16 domestic airports.
As of January, the DHS and TSA were still claiming this biometric ID verification system was strictly opt-in. A TSA rep interviewed by the Washington Post, however, hinted that opting out just meant subjecting yourself to the worst in TSA customer service. Given the options, more travelers would obviously prefer a less brusque/hands-y trip through security checkpoints, ensuring healthy participation in the TSA’s “optional” facial recognition program.
Speaking at an aviation security panel at South by Southwest, TSA Administrator David Pekoske made these comments:
“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “(We’re) upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”
He said passengers can also choose to opt out of certain screening processes if they are uncomfortable, for now. Eventually, biometrics won’t be optional, he said.
[…]
Pekoske buries the problematic aspects of biometric harvesting in exchange for domestic travel “privileges” by claiming this is all about making things better for passengers.
“It’s critically important that this system has as little friction as it possibly can, while we provide for safety and security,” Pekoske said.
Yes, you’ll get through screening a little faster. Unless the AI is wrong, in which case you’ll be dealing with a whole bunch of new problems most agents likely won’t have the expertise to handle.
[…]
More travelers. Fewer agents. And a whole bunch of screens to interact with. That’s the plan for the nation’s airports and everyone who passes through them.
Last month, Roblox outlined its vision for AI-assisted content creation, imagining a future where Generative AI could help users create code, 3D models and more with little more than text prompts. Now, it’s taking its first steps toward allowing “every user on Roblox to be a creator” by launching its first AI tools: Code Assist and Material Generator, both in beta.
Although neither tool is anywhere close to generating a playable Roblox experience from a text description, Head of Roblox Studio Stef Corazza told an audience at GDC 2023 that they can “help automate basic coding tasks so you can focus on creative work.” For now, that means being able to generate useful code snippets and object textures based on short prompts. Roblox’s announcement for the tools offers a few examples, generating realistic textures for a “bright red rock canyon” and “stained glass,” or producing several lines of functional code that will make certain objects change color and self-destruct after a player interacts with them.