The Linkielist

Linking ideas with the world

TikTok reveals details of how its algorithm works

TikTok on Wednesday revealed some of the elusive workings of the prized algorithm that keeps hundreds of millions of users worldwide hooked on the viral video app.

[…]

TikTok’s algorithm uses machine learning to determine what content a user is most likely to engage with and serve them more of it, by finding videos that are similar or that are liked by people with similar user preferences.

  • When users open TikTok for the first time, they are shown 8 popular videos featuring different trends, music, and topics. After that, the algorithm will continue to serve the user new iterations of 8 videos based on which videos the user engages with and what the user does.
  • The algorithm identifies similar videos to those that have engaged a user based on video information, which could include details like captions, hashtags or sounds. Recommendations also take into account user device and account settings, which include data like language preference, country setting, and device type.
  • Once TikTok collects enough data about the user, the app is able to map a user’s preferences in relation to similar users and group them into “clusters.” Simultaneously, it also groups videos into “clusters” based on similar themes, like “basketball” or “bunnies.”
  • Using machine learning, the algorithm serves videos to users based on their proximity to other clusters of users and content that they like.
  • TikTok’s logic aims to avoid redundancies that could bore the user, like seeing multiple videos with the same music or from the same creator.
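
The description above maps onto a fairly standard clustering-plus-nearest-neighbour recommender. Purely as an illustration (none of this is TikTok's actual code; the features, cluster counts and "creator" redundancy rule are invented stand-ins), a minimal Python sketch with scikit-learn might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-in data: rows are users/videos, columns are engagement or content features.
user_features = rng.random((1000, 16))    # e.g. per-topic watch-time shares
video_features = rng.random((5000, 16))   # e.g. caption/hashtag/sound embeddings
video_creator = rng.integers(0, 800, size=5000)

user_km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(user_features)
video_km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(video_features)

def recommend(user_vec, n=8):
    """Pick n videos from the video clusters closest to the user, never repeating
    a creator (a stand-in for the 'avoid redundancy/boredom' rule)."""
    dists = np.linalg.norm(video_km.cluster_centers_ - user_vec, axis=1)
    picks, seen_creators = [], set()
    for cluster in np.argsort(dists):                      # nearest video clusters first
        for vid in np.where(video_km.labels_ == cluster)[0]:
            if video_creator[vid] in seen_creators:
                continue
            picks.append(int(vid))
            seen_creators.add(video_creator[vid])
            if len(picks) == n:
                return picks
    return picks

# Represent a user by their cluster's centroid ("people with similar preferences").
some_user = user_km.cluster_centers_[user_km.labels_[0]]
print(recommend(some_user))
```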

Yes, but: TikTok concedes that its ability to nail users’ preferences so effectively means that its algorithm can produce “filter bubbles,” reinforcing users’ existing preferences rather than showing them more varied content, widening their horizons, or offering them opposing viewpoints.

  • The company says that it’s studying filter bubbles, including how long they last and how a user encounters them, to get better at breaking them when necessary.
  • Since filter bubbles can reinforce conspiracy theories, hoaxes and other misinformation, TikTok’s product and policy teams study which accounts and video information — themes, hashtags, captions, and so on — might be linked to misinformation.
  • Videos or creators linked to misinformation are sent to the company’s global content reviewers so they can be managed before they are distributed to users on the main feed, which is called the “For You” page.

The briefing also featured updates about TikTok’s data, privacy and security practices.

  • The company says it tries to triage and prevent incidents on its platform before they happen by working to detect patterns of problems before they spread.
  • TikTok’s chief security officer, Roland Cloutier, said it plans to hire more than 100 data, security and privacy experts by year’s end in the U.S.
  • He also said that the company will be building a monitoring, response and investigative response center in Washington D.C. to actively detect and respond to critical incidents in real time.

The big picture: Beckerman says that TikTok’s transparency efforts are meant to position the company as a leader in Silicon Valley.

  • “We want to take a leadership position and show more about how the app works,” he said. “For us, we’re new, and we want to do this because we don’t have anything to hide. The more we’re talking to and meeting with lawmakers, the more comfortable they are with the product. That’s the way it should be.”

Source: TikTok reveals details of how its coveted algorithm works – Axios

These students figured out their tests were graded by AI — and the easy way to cheat – The Verge

[…] Simmons, who is a history professor herself. Then, Lazare clarified that he’d received his grade less than a second after submitting his answers. A teacher couldn’t have read his response in that time, Simmons knew — her son was being graded by an algorithm.

Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity’s AI was scanning for specific keywords that it expected to see in students’ answers. And she decided to game it.

[…]

Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords — anything that seems relevant to the question. “The questions are things like… ‘What was the advantage of Constantinople’s location for the power of the Byzantine empire,’” Simmons says. “So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in.”

“I wanted to game it because I felt like it was an easy way to get a good grade,” Lazare told The Verge. He usually digs the keywords out of the article or video the question is based on.

Apparently, that “word salad” is enough to get a perfect grade on any short-answer question in an Edgenuity test.

Edgenuity didn’t respond to repeated requests for comment, but the company’s online help center suggests this may be by design. According to the website, answers to certain questions receive 0% if they include no keywords, and 100% if they include at least one. Other questions earn a certain percentage based on the number of keywords included.
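
Based only on the behaviour the help centre describes, a hypothetical reconstruction of such a keyword grader, and of why a "word salad" defeats it, could be as simple as the following sketch (Edgenuity's actual grader is not public, so the rules and keyword list here are invented for illustration):

```python
import re

def words_in(answer: str) -> set[str]:
    """Lower-case the answer and keep only its words."""
    return set(re.findall(r"[a-z']+", answer.lower()))

def grade_all_or_nothing(answer: str, keywords: set[str]) -> float:
    """0% with no expected keyword present, 100% with at least one."""
    return 100.0 if words_in(answer) & keywords else 0.0

def grade_proportional(answer: str, keywords: set[str]) -> float:
    """Score scales with the fraction of expected keywords present."""
    return 100.0 * len(words_in(answer) & keywords) / len(keywords)

keywords = {"wealth", "caravan", "ship", "india", "china", "trade"}
word_salad = ("Constantinople sat on major routes. "
              "Wealth caravan ship India China trade Middle East.")
print(grade_all_or_nothing(word_salad, keywords))  # 100.0
print(grade_proportional(word_salad, keywords))    # 100.0, despite no real argument
```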

[…]

One student, who told me he wouldn’t have passed his Algebra 2 class without the exploit, said he’s been able to find lists of the exact keywords or sample answers that his short-answer questions are looking for — he says you can find them online “nine times out of ten.” Rather than listing out the terms he finds, though, he tried to work three into each of his answers. (“Any good cheater doesn’t aim for a perfect score,” he explained.)

Source: These students figured out their tests were graded by AI — and the easy way to cheat – The Verge

Brain-Computer Interfaces: U.S. Military Applications and Implications, An Initial Assessment

The U.S. Department of Defense (DoD) has invested in the development of technologies that allow the human brain to communicate directly with machines, including the development of implantable neural interfaces able to transfer data between the human brain and the digital world. This technology, known as brain-computer interface (BCI), may eventually be used to monitor a soldier’s cognitive workload, control a drone swarm, or link with a prosthetic, among other examples. Further technological advances could support human-machine decisionmaking, human-to-human communication, system control, performance enhancement and monitoring, and training. However, numerous policy, safety, legal, and ethical issues should be evaluated before the technology is widely deployed. With this report, the authors developed a methodology for studying potential applications for emerging technology. This included developing a national security game to explore the use of BCI in combat scenarios; convening experts in military operations, human performance, and neurology to explore how the technology might affect military tactics, which aspects may be most beneficial, and which aspects might present risks; and offering recommendations to policymakers. The research assessed current and potential BCI applications for the military to ensure that the technology responds to actual needs, practical realities, and legal and ethical considerations.

Source: Brain-Computer Interfaces: U.S. Military Applications and Implications, An Initial Assessment | RAND

Visa Unveils More Powerful AI Tool That Approves or Denies Card Transactions

Visa Inc. said Wednesday it has developed a more advanced artificial intelligence system that can approve or decline credit and debit transactions on behalf of banks whose own networks are down.

The decision to approve or deny a transaction typically is made by the bank. But bank networks can crash because of natural disasters, buggy software or other reasons. Visa said its backup system will be available to banks who sign up for the service starting in October.

The technology is “an incredible first step in helping us reduce the impact of an outage,” said Rajat Taneja, president of technology for Visa. The financial services company is the largest U.S. card network, as measured both by the number of cards in circulation and by transactions.

The service, Smarter Stand-In Processing, uses a branch of AI called deep learning

[…]

Smarter STIP kicks in automatically if Visa’s network detects that the bank’s network is offline or unavailable.

The older version of STIP uses a rules-based machine learning model as the backup method to manage transactions for banks in the event of a network disruption. In this approach, Visa’s product team and the financial institution define the rules for the model to be able to determine whether a particular transaction should be approved.

“Although it was customized for different users, it was still not very precise,” said Carolina Barcenas, senior vice president and head of Visa Research.

Technologists don’t define rules for the Smarter STIP AI model. The new deep-learning model is more advanced because it is trained to sift through billions of data points of cardholder activity to define correlations on its own.
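
To make that contrast concrete, here is a rough, entirely invented sketch of the two approaches: a hand-written rule set versus a small model trained to mimic an issuer's past approve/decline decisions. None of the features, thresholds or data reflect Visa's actual system, and a compact scikit-learn neural network stands in for their deep-learning model:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def rules_based_stand_in(txn: dict) -> bool:
    """Old-style stand-in processing: fixed, hand-written rules agreed with the bank."""
    if txn["amount"] > 500:                      # hypothetical per-transaction ceiling
        return False
    if txn["country"] != txn["home_country"]:    # hypothetical: no cross-border while offline
        return False
    return True

# "Smarter" stand-in: learn the issuer's decision function from historical transactions.
rng = np.random.default_rng(0)
X = rng.random((20_000, 8))                       # stand-ins for cardholder-activity features
y = (X[:, 0] + 0.5 * X[:, 3] > 0.9).astype(int)   # toy proxy for the bank's past decisions

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
model.fit(X, y)

print(rules_based_stand_in({"amount": 120, "country": "US", "home_country": "US"}))
print("approve" if model.predict(rng.random((1, 8)))[0] else "decline")
```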

[…]

In tests, the deep-learning AI model was 95% accurate in mimicking the bank’s decision on whether to approve or decline a transaction, she said. The technology more than doubled the accuracy of the old method, according to the company. The two versions will continue to exist but the more advanced version will be available as a premium service for clients.

[…]

Source: Visa Unveils More Powerful AI Tool That Approves or Denies Card Transactions – WSJ

The Unforeseen Consequences of Artificial Intelligence (AI) on Society: A Systematic Review of Regulatory Gaps Generated by AI in the U.S. | RAND

AI’s growing catalog of applications and methods has the potential to profoundly affect public policy by generating instances where regulations are not adequate to confront the issues faced by society, also known as regulatory gaps.

The objective of this dissertation is to improve our understanding of how AI influences U.S. public policy. It systematically explores, for the first time, the role of AI in the generation of regulatory gaps. Specifically, it addresses two research questions:

  1. What U.S. regulatory gaps exist due to AI methods and applications?
  2. When looking across all of the gaps identified in the first research question, what trends and insights emerge that can help stakeholders plan for the future?

These questions are answered through a systematic review of four academic databases of literature in the hard and social sciences. Its implementation was guided by a protocol that initially identified 5,240 candidate articles. A screening process reduced this sample to 241 articles (published between 1976 and February of 2018) relevant to answering the research questions.

This dissertation contributes to the literature by adapting the work of Bennett-Moses and Calo to effectively characterize regulatory gaps caused by AI in the U.S. In addition, it finds that most gaps do not require new regulation or new governance frameworks for their resolution, are found at the federal and state levels of government, and are attributed more often to AI applications than to AI methods.

Source: The Unforeseen Consequences of Artificial Intelligence (AI) on Society: A Systematic Review of Regulatory Gaps Generated by AI in the U.S. | RAND

AI tracks drone pilot’s location through the small movements the drone makes

The minute details of a rogue drone’s movements in the air may unwittingly reveal the drone pilot’s location—possibly enabling authorities to bring the drone down before, say, it has the opportunity to disrupt air traffic or cause an accident. And it’s possible without requiring expensive arrays of radio triangulation and signal-location antennas.

So says a team of Israeli researchers who have trained an AI drone-tracking algorithm to reveal the drone operator’s whereabouts, with a better than 80 per cent accuracy level. They are now investigating whether the algorithm can also uncover the pilot’s level of expertise and even possibly their identity.

[…]

Depending on the specific terrain at any given airport, a pilot operating a drone near a camouflaging patch of forest, for instance, might have an unobstructed view of the runway. But that location might also be a long distance away, possibly making the operator more prone to errors in precise tracking of the drone. Whereas a pilot operating nearer to the runway may not make those same tracking errors but may also have to contend with big blind spots because of their proximity to, say, a parking garage or control tower.

And in every case, he said, simple geometry could begin to reveal important clues about a pilot’s location, too. When a drone is far enough away, motion along a pilot’s line of sight can be harder for the pilot to detect than motion perpendicular to their line of sight. This also could become a significant factor in an AI algorithm working to discover pilot location from a particular drone flight pattern.

The sum total of these various terrain-specific and terrain-agnostic effects, then, could be a giant finger pointing to the operator. This AI application would also be unaffected by any relay towers or other signal spoofing mechanisms the pilot may have put in place.

Weiss said his group tested their drone tracking algorithm using Microsoft Research’s open source drone and autonomous vehicle simulator AirSim. The group presented their work-in-progress at the Fourth International Symposium on Cyber Security, Cryptology and Machine Learning at Ben-Gurion University earlier this month.

Their paper boasts a 73 per cent accuracy rate in discovering drone pilots’ locations. Weiss said that in the few weeks since publishing that result, they’ve now improved the accuracy rate to 83 per cent.
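
The article doesn't detail the researchers' features or model, but the general idea, classifying the pilot's vantage point from statistics of the drone's track, can be sketched with simulated data. Everything below (the toy flight model, the features and the classifier) is an invented illustration, not the published method, which used AirSim flights:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
VANTAGE_POINTS = np.array([[0.0, 0.0], [800.0, 200.0], [300.0, 900.0]])  # metres

def simulate_track(pilot_idx, n_steps=200):
    """Toy model: tracking noise grows with distance from the pilot and is larger
    along the pilot's line of sight than perpendicular to it."""
    pilot = VANTAGE_POINTS[pilot_idx]
    pos = np.array([400.0, 400.0])
    track = []
    for _ in range(n_steps):
        los = pos - pilot
        dist = np.linalg.norm(los) + 1e-6
        los /= dist
        perp = np.array([-los[1], los[0]])
        step = 5.0 * perp                               # intended motion across the field
        noise = rng.normal(0, 0.002 * dist) * los + rng.normal(0, 0.0005 * dist) * perp
        pos = pos + step + noise
        track.append(pos.copy())
    return np.array(track)

def features(track):
    """Summarise a track: step-size spread, heading jitter, positional spread."""
    steps = np.diff(track, axis=0)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    return [steps.std(), np.abs(np.diff(headings)).mean(),
            track[:, 0].std(), track[:, 1].std()]

X, y = [], []
for pilot_idx in range(len(VANTAGE_POINTS)):
    for _ in range(200):
        X.append(features(simulate_track(pilot_idx)))
        y.append(pilot_idx)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy on the toy data:", clf.score(X_test, y_test))
```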

Now that the researchers have proved the algorithm’s concept, Weiss said, they’re hoping next to test it in real-world airport settings. “I’ve already been approached by people who have the flight permissions,” he said. “I am a university professor. I’m not a trained pilot. Now people that do have the facility to fly drones [can] run this physical experiment.”

Source: Attention Rogue Drone Pilots: AI Can See You! – IEEE Spectrum

Cognitive Radios Will Go Where No Deep-Space Mission Has Gone Before

Space seems empty and therefore the perfect environment for radio communications. Don’t let that fool you: There’s still plenty that can disrupt radio communications. Earth’s fluctuating ionosphere can impair a link between a satellite and a ground station. The materials of the antenna can be distorted as it heats and cools. And the near-vacuum of space is filled with low-level ambient radio emanations, known as cosmic noise, which come from distant quasars, the sun, and the center of our Milky Way galaxy. This noise also includes the cosmic microwave background radiation, a ghost of the big bang. Although faint, these cosmic sources can overwhelm a wireless signal over interplanetary distances.

Depending on a spacecraft’s mission, or even the particular phase of the mission, different link qualities may be desirable, such as maximizing data throughput, minimizing power usage, or ensuring that certain critical data gets through. To maintain connectivity, the communications system constantly needs to tailor its operations to the surrounding environment.

Imagine a group of astronauts on Mars. To connect to a ground station on Earth, they’ll rely on a relay satellite orbiting Mars. As the space environment changes and the planets move relative to one another, the radio settings on the ground station, the satellite orbiting Mars, and the Martian lander will need continual adjustments. The astronauts could wait 8 to 40 minutes—the duration of a round trip—for instructions from mission control on how to adjust the settings. A better alternative is to have the radios use neural networks to adjust their settings in real time. Neural networks maintain and optimize a radio’s ability to keep in contact, even under extreme conditions such as Martian orbit. Rather than waiting for a human on Earth to tell the radio how to adapt its systems—during which the commands may have already become outdated—a radio with a neural network can do it on the fly.

Such a device is called a cognitive radio. Its neural network autonomously senses the changes in its environment, adjusts its settings accordingly—and then, most important of all, learns from the experience. That means a cognitive radio can try out new configurations in new situations, which makes it more robust in unknown environments than a traditional radio would be. Cognitive radios are thus ideal for space communications, especially far beyond Earth orbit, where the environments are relatively unknown, human intervention is impossible, and maintaining connectivity is vital.

Worcester Polytechnic Institute and Penn State University, in cooperation with NASA, recently tested the first cognitive radios designed to operate in space and keep missions in contact with Earth. In our tests, even the most basic cognitive radios maintained a clear signal between the International Space Station (ISS) and the ground. We believe that with further research, more advanced, more capable cognitive radios can play an integral part in successful deep-space missions in the future, where there will be no margin for error.

Future crews to the moon and Mars will have more than enough to do collecting field samples, performing scientific experiments, conducting land surveys, and keeping their equipment in working order. Cognitive radios will free those crews from the onus of maintaining the communications link. Even more important is that cognitive radios will help ensure that an unexpected occurrence in deep space doesn’t sever the link, cutting the crew’s last tether to Earth, millions of kilometers away.

Cognitive radio as an idea was first proposed by Joseph Mitola III at the KTH Royal Institute of Technology, in Stockholm, in 1998. Since then, many cognitive radio projects have been undertaken, but most were limited in scope or tested just a part of a system. The most robust cognitive radios tested to date have been built by the U.S. Department of Defense.

When designing a traditional wireless communications system, engineers generally use mathematical models to represent the radio and the environment in which it will operate. The models try to describe how signals might reflect off buildings or propagate in humid air. But not even the best models can capture the complexity of a real environment.

A cognitive radio—and the neural network that makes it work—learns from the environment itself, rather than from a mathematical model. A neural network takes in data about the environment, such as what signal modulations are working best or what frequencies are propagating farthest, and processes that data to determine what the radio’s settings should be for an optimal link. The key feature of a neural network is that it can, over time, optimize the relationships between the inputs and the result. This process is known as training.
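
As a toy illustration of that input-to-settings mapping (the measurements, target settings and data below are invented, and a small scikit-learn regressor stands in for a real on-board network):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Inputs: e.g. measured SNR (dB), Doppler shift (kHz), recent packet-error rate.
env = rng.random((5000, 3)) * [30.0, 5.0, 0.3]

# Targets: the settings that worked best under each logged condition, e.g.
# transmit power (W) and bits per symbol of the modulation (toy rule below).
best_power = 1.0 + (30.0 - env[:, 0]) * 0.2 + env[:, 2] * 10.0
best_bits_per_symbol = np.clip(env[:, 0] / 5.0, 1, 6)
targets = np.column_stack([best_power, best_bits_per_symbol])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(env, targets)

# On board, the radio would re-run this with fresh measurements (and keep learning
# from outcomes) instead of waiting 8 to 40 minutes for instructions from Earth.
current_conditions = np.array([[12.0, 1.5, 0.05]])
print(net.predict(current_conditions))   # suggested [power, bits per symbol]
```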

[…]

Source: Cognitive Radios Will Go Where No Deep-Space Mission Has Gone Before – IEEE Spectrum

Sick of AI engines scraping your pics for facial recognition? Fawkes breaks the AI for you

Researchers at the University of Chicago’s Sand Lab have developed a technique for tweaking photos of people so that they sabotage facial-recognition systems.

The project, named Fawkes in reference to the mask in the V for Vendetta graphic novel and film depicting Guy Fawkes, the failed Gunpowder Plot assassin of 1605, is described in a paper scheduled for presentation in August at the USENIX Security Symposium 2020.

Fawkes consists of software that runs an algorithm designed to “cloak” photos so they mistrain facial recognition systems, rendering them ineffective at identifying the depicted person. These “cloaks,” which AI researchers refer to as perturbations, are claimed to be robust enough to survive subsequent blurring and image compression.

The paper [PDF], titled, “Fawkes: Protecting Privacy against Unauthorized Deep Learning Models,” is co-authored by Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Zhao, all with the University of Chicago.

“Our distortion or ‘cloaking’ algorithm takes the user’s photos and computes minimal perturbations that shift them significantly in the feature space of a facial recognition model (using real or synthetic images of a third party as a landmark),” the researchers explain in their paper. “Any facial recognition model trained using these images of the user learns an altered set of ‘features’ of what makes them look like them.”

[Figure 16 from the “Fawkes: Protecting Privacy against Unauthorized Deep Learning Models” paper: two examples showing how different levels of perturbation applied to original photos can derail a facial-recognition system so that future matches are unlikely or impossible. Credit: Shan et al.]

The boffins claim their pixel scrambling scheme provides greater than 95 per cent protection, regardless of whether facial recognition systems get trained via transfer learning or from scratch. They also say it provides about 80 per cent protection when clean, “uncloaked” images leak and get added to the training mix alongside altered snapshots.

They claim 100 per cent success at avoiding facial recognition matches using Microsoft’s Azure Face API, Amazon Rekognition, and Face++. Their tests involve cloaking a set of face photos and providing them as training data, then running uncloaked test images of the same person against the mistrained model.

Fawkes differs from adversarial image attacks in that it tries to poison the AI model itself, so it can’t match people or their images to their cloaked depictions. Adversarial image attacks try to confuse a properly trained model with specific visual patterns.
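
As a rough sketch of the cloak-computation pattern the paper describes (shift an image's embedding toward a third-party "landmark" while keeping the pixel change small), the following uses a generic torchvision backbone as a stand-in for a face-recognition feature extractor. It is emphatically not the Fawkes code, which the researchers publish themselves, as noted below:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Generic ImageNet backbone standing in for a face-recognition feature extractor.
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()        # use penultimate features as the "embedding"
extractor.eval()

user_img = torch.rand(1, 3, 224, 224)     # stand-in for the user's photo
target_img = torch.rand(1, 3, 224, 224)   # stand-in for the third-party "landmark" face

with torch.no_grad():
    target_feat = extractor(target_img)

delta = torch.zeros_like(user_img, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(200):
    cloaked = (user_img + delta).clamp(0, 1)
    feat = extractor(cloaked)
    # Pull the cloaked embedding toward the landmark, penalise visible pixel change.
    loss = F.mse_loss(feat, target_feat) + 0.05 * delta.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

cloaked_photo = (user_img + delta).detach().clamp(0, 1)  # what you would post online
```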

The researchers have posted their Python code on GitHub, with instructions for users of Linux, macOS, and Windows. Interested individuals may wish to try cloaking publicly posted pictures of themselves so that if the snaps get scraped and used to train a facial recognition system – as Clearview AI is said to have done – the pictures won’t be useful for identifying the people they depict.

Fawkes is similar in some respects to the recent Camera Adversaria project by Kieran Browne, Ben Swift, and Terhi Nurmikko-Fuller at Australian National University in Canberra.

Camera Adversaria adds a pattern known as Perlin noise to images, which disrupts the ability of deep learning systems to classify them. Available as an Android app, it lets a user take a picture of, say, a pipe, and to the classifier it would not be a pipe.

The researchers behind Fawkes say they’re working on macOS and Windows tools that make their system easier to use.

Source: Sick of AI engines scraping your pics for facial recognition? Here’s a way to Fawkes them right up • The Register

UNESCO launches worldwide online public consultation on the ethics of artificial intelligence

Today, UNESCO is launching a global online consultation on the ethics of artificial intelligence, to give everyone around the world the opportunity to participate in the work of its international group of experts on AI. This group has been charged with producing the first draft of a Recommendation on the Ethics of AI, to be submitted to UNESCO Member States for adoption in November 2021. If adopted, it will be the first global normative instrument to address the developments and applications of AI.

“It is crucial that as many people as possible take part in this consultation, so that voices from around the world can be heard during the drafting process for the first global normative instrument on the ethics of AI”, says Audrey Azoulay, Director-General of UNESCO.

Twenty-four renowned specialists with multidisciplinary expertise on the ethics of artificial intelligence have been tasked with producing a draft UNESCO Recommendation that takes into account the wide-ranging impacts of AI, including on the environment and the needs of the global south.

With this consultation, UNESCO is inviting civil society organizations, decision-makers, the general public, intergovernmental and non-governmental organizations, media representatives, the private sector, the scientific community and all other interested stakeholders to comment on the draft text before 31 July 2020.

UNESCO is convinced that there is an urgent need for a global instrument on the ethics of AI to ensure that ethical, social and political issues can be adequately addressed both in times of peace and in extraordinary situations like the current global health crisis.

The UNESCO Recommendation is expected to define shared values and principles, and identify concrete policy measures on the ethics of AI. Its role will be to help Member States ensure that they uphold the fundamental rights of the UN Charter and of the Universal Declaration of Human Rights and that research, design, development, and deployment of AI systems take into account the well-being of humanity, the environment and sustainable development.

The final draft text will be presented for adoption by Member States during the 41st session of UNESCO’s General Conference in November 2021.

Source: UNESCO launches worldwide online public consultation on the ethics of artificial intelligence

AI helps drone swarms navigate through crowded, unfamiliar spaces

Drone swarms frequently fly outside for a reason: it’s difficult for the robotic fliers to navigate in tight spaces without hitting each other. Caltech researchers may have a way for those drones to fly indoors, however. They’ve developed a machine learning algorithm, Global-to-Local Safe Autonomy Synthesis (GLAS), that lets swarms navigate crowded, unmapped environments. The system works by giving each drone a degree of independence that lets it adapt to a changing environment.

Instead of relying on existing maps or the routes of every other drone in the swarm, GLAS has each machine learn how to navigate a given space on its own, even as it coordinates with others. This decentralized model both helps the drones improvise and makes scaling the swarm easier, as the computing is spread across many robots.

An additional tracking controller, Neural-Swarm, helps the drones compensate for aerodynamic interactions, such as the downwash from a robot flying overhead. It’s already more reliable than a “commercial” controller that doesn’t account for aerodynamics, with far smaller tracking errors.

This could be useful for drone light shows, of course, but it could also help with more vital operations. Search and rescue drones could safely comb areas in packs, while self-driving cars could keep traffic jams and collisions to a minimum. It may take a while before there are implementations outside of the lab, but don’t be surprised if flocks of drones become relatively commonplace.

Source: AI helps drone swarms navigate through crowded, unfamiliar spaces | Engadget

Privacy watchdogs from the UK, Australia team up, snap on gloves to probe AI-for-cops creeeps Clearview

Following Canada’s lead earlier this week, privacy watchdogs in Britain and Australia today launched a joint investigation into how Clearview AI harvests and uses billions of images it scraped from the internet to train its facial-recognition algorithms.

The startup boasted it had collected a database packed with more than three billion photos downloaded from people’s public social media pages. That data helped train its facial-recognition software, which was then sold to law enforcement as a tool to identify potential suspects.

Cops can feed a snapshot of someone taken from, say, CCTV footage into Clearview’s software, which then attempts to identify the person by matching it up with images in its database. If there’s a positive match, the software links to that person’s relevant profiles on social media that may reveal personal details such as their name or where they live. It’s a way to translate previously unseen photos of someone’s face into an online handle so that person can be tracked down.
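
That matching step is, in the abstract, a nearest-neighbour search over face embeddings. Here is a generic sketch of the pattern only; this is not Clearview's system, and the embeddings and URLs below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend database: one 512-d embedding per scraped photo, plus its source URL.
db_embeddings = rng.normal(size=(10_000, 512)).astype(np.float32)
db_embeddings /= np.linalg.norm(db_embeddings, axis=1, keepdims=True)
db_urls = [f"https://social.example/profile/{i}" for i in range(len(db_embeddings))]

def match(probe_embedding, top_k=5, threshold=0.6):
    """Return the most similar database entries above a cosine-similarity threshold."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    sims = db_embeddings @ probe
    best = np.argsort(sims)[::-1][:top_k]
    return [(db_urls[i], float(sims[i])) for i in best if sims[i] >= threshold]

# In practice the probe would be a face-embedding model's output on a CCTV frame.
probe = rng.normal(size=512).astype(np.float32)
print(match(probe) or "no match above threshold")
```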

Now, the UK’s Information Commissioner (ICO) and the Office of the Australian Information Commissioner (OAIC) are collaborating to examine the New York-based upstart’s practices. The investigation will focus “on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO said in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalised data environment,” it added. “No further comment will be made while the investigation is ongoing.”

Source: Privacy watchdogs from the UK, Australia team up, snap on gloves to probe AI-for-cops upstart Clearview • The Register

Detroit cops employed facial recognition algos that only misidentify suspects 96 per cent of the time

Cops in Detroit have admitted using facial-recognition technology that fails to accurately identify potential suspects a whopping 96 per cent of the time.

The revelation was made by the American police force’s chief, James Craig, during a public hearing this week. Craig was grilled over the wrongful arrest of Robert Williams, who was mistaken for a shoplifter by facial-recognition software used by officers.

“If we would use the software only [to identify subjects], we would not solve the case 95-97 per cent of the time,” Craig said, Vice first reported. “That’s if we relied totally on the software, which would be against our current policy … If we were just to use the technology by itself, to identify someone, I would say 96 per cent of the time it would misidentify.”

The software was developed by DataWorks Plus, a biometric technology biz based in South Carolina. Multiple studies have demonstrated facial-recognition algorithms often struggle with identifying women and people with darker skin compared to Caucasian men.

Source: Detroit cops employed facial recognition algos that only misidentify suspects 96 per cent of the time • The Register

New mathematical idea reins in AI bias towards making unethical and costly commercial choices

Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and business manage and police Artificial Intelligence systems’ biases towards making unethical, and potentially very costly and damaging commercial choices—an ethical eye on AI.

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.

The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential economic penalty, since stakeholders will punish a firm they find using such a strategy: regulators may levy fines of billions of dollars, pounds or euros, customers may boycott, or both.

So in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.

Mathematicians and statisticians from the University of Warwick, Imperial, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and providing a simple formula to estimate its impact. They have laid out the full details in a paper titled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1 July 2020.

The four authors of the paper are Nicholas Beale of Sciteb Ltd; Heather Battey of the Department of Mathematics, Imperial College London; Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and Professor Robert MacKay of the Mathematics Institute of the University of Warwick.

Professor Robert MacKay of the Mathematics Institute of the University of Warwick said:

“Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Optimisation can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.

“The Principle also suggests that it may be necessary to re-think the way AI operates in very large spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process.”


More information: “An unethical optimization principle”, Royal Society Open Science (2020): royalsocietypublishing.org/doi/10.1098/rsos.200462

Source: New mathematical idea reins in AI bias towards making unethical and costly commercial choices

Burger King Is Leveraging Tesla Autopilot’s Confusion To Sell Whoppers

The Monarch of Meat announced a campaign that takes advantage of some sloppy sign recognition in the Tesla Autopilot’s Traffic Light and Stop Sign control, specifically in instances where the Tesla confuses a Burger King sign for a stop sign (maybe a “traffic control” sign?) and proceeds to stop the car, leaving the occupants of the car in a great position to consume some Whoppers.

The confusion was first noted by a Tesla Model 3 owner who has confusingly sawed the top off his steering wheel, for some reason, and uploaded a video of the car confusing the Burger King sign for a stop sign.

Burger King’s crack marketing team managed to arrange to use the video in this ad, and built a short promotion around it:

Did you see what I was talking about with that steering wheel? I guess the owner just thought it looked Batmobile-cool, or something? It’s also worth noting that it seems the car’s map display has been modified, likely to remove any Tesla branding and obscure the actual location.

The promotion, which Burger King is using the #autopilotwhopper hashtag to promote, was only good for June 23rd, when they’d give you a free Whopper if you met the following conditions:

To qualify for the Promotion, guest must share a picture or video on Twitter or Facebook with guest’s smart car outside a BK restaurant using #autopilotwhopper and #freewhopper.

Guests who complete step #3 will receive a direct message, within 24 hours of posting the picture/video, with a unique code for a Free Whopper sandwich (“Coupon”). Limit one Coupon per account.

It seems Burger King is using the phrase “smart car” to refer to any car that has some sort of Level 2 semi-autonomous driver’s assistance system that can identify signs, but the use of “autopilot” in the hashtag and the original video make it clear that Teslas are the targeted cars here.

Source: Burger King Is Leveraging Tesla Autopilot’s Confusion To Sell Whoppers

How to jam neural networks

Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief – from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.

So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down by a factor of more than 100. There are already examples in the real world where people pause or stumble when asked hard questions, but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks and make them operate at their worst-case performance.

One implication is that engineers designing real-time systems that use machine learning will have to pay more attention to worst-case behaviour; another is that when custom chips used to accelerate neural network computations use optimisations that increase the gap between worst-case and average-case outcomes, you’d better pay even more attention.
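
The paper's sponge examples are found with genetic and gradient-based searches against real NLP and vision models. As a much cruder, black-box illustration of the same idea, one can simply mutate an input and keep whichever variant makes the target take longest; the "model" below is an invented stand-in function whose running time depends on its input:

```python
import random
import string
import time

def stand_in_model(text: str) -> int:
    """Toy stand-in whose cost depends on the input (like worst-case tokenization
    or beam-search blow-ups in a real NLP pipeline)."""
    total = 0
    for word in text.split():
        total += sum(1 for a in word for b in word if a == b)  # quadratic in word length
    return total

def latency(text: str, repeats: int = 20) -> float:
    start = time.perf_counter()
    for _ in range(repeats):
        stand_in_model(text)
    return time.perf_counter() - start

def mutate(text: str) -> str:
    chars = list(text)
    chars[random.randrange(len(chars))] = random.choice(string.ascii_lowercase + " ")
    return "".join(chars)

candidate = "the quick brown fox jumps over the lazy dog"
best_time = latency(candidate)
for _ in range(500):
    variant = mutate(candidate)
    t = latency(variant)
    if t > best_time:                 # keep whichever input burns more time
        candidate, best_time = variant, t

print(repr(candidate), best_time)
```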

Source: How to jam neural networks | Light Blue Touchpaper

OpenAI GPT-2 creates credible texts from minimal input

We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.

Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.

[…]

GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data. While scores on these downstream tasks are far from state-of-the-art, they suggest that the tasks can benefit from unsupervised techniques, given sufficient (unlabeled) data and compute.
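
The released GPT-2 weights are easiest to run today through the third-party Hugging Face transformers library. A minimal prime-and-continue sketch, using the small "gpt2" checkpoint rather than OpenAI's own tooling, looks like this:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Prompt borrowed from OpenAI's published unicorn sample.
prompt = "In a shocking finding, scientists discovered a herd of unicorns living in"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_length=120,          # prompt plus continuation length, in tokens
    do_sample=True,          # sample rather than greedy-decode, for varied text
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```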

Samples

GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing, as seen in the following select samples.

Source: Better Language Models and Their Implications

Depixelizing Video Game Characters using AI Creates Monsters

A new digital tool built to depixelize photos sounds scary and bad. Another way to remove privacy from the world. But this tool is also being used for a sillier and not terrible purpose: depixelizing old game characters. The results are… nevermind, this is also a terrible use of this tool.

“Face Depixelizer” is a tool created by Alex Damian, Sachit Menon, and Denis Malimonov. It does exactly what you expect with a name like that. Users can upload a pixelated photo of a face and the tool spits out what that person might look like based on algorithms and all that stuff. In the wrong hands, this type of tech can be used to do some bad shit and will make it harder to hide in this world from police and other powerful and dangerous groups.

But it can also be used to create monsters out of old game characters. Look what this thing did to Mario, for example.

Steve from Minecraft turns into a dude who doesn’t wear a mask because “It’s all a hoax dude.”

Guybrush changed quite a bit and also grew weirdly disturbing hair…

These might be strange or even a bit monstrous, but things start getting much worse when you feed the tool images that don’t look like people at all. For example, this is what someone got after uploading an image of a Cacodemon from Doom.

Poor Peppy turns into a demon from a horror film.

And the Creeper from Minecraft somehow becomes even scarier.

There’s a bunch more in this thread. There’s also a bunch of Tweets all about uploading Black people’s faces and learning that the tool isn’t great at dealing with them. Almost seems like you should have diverse teams working on tech projects so as to not overlook a small detail like an entire group of people. Though in this case, I’m fine with the creators screwing up.

Maybe if people keep uploading video game images to tools like this we can eventually make them worthless.

Source: Depixelizing Video Game Characters Creates Monsters

Machine-learning models trained on pre-COVID data are now completely out of whack, says Gartner

Machine learning models built for doing business prior to the COVID-19 pandemic will no longer be valid as economies emerge from lockdowns, presenting companies with new challenges in machine learning and enterprise data management, according to Gartner.

The research group has reported that “the extreme disruption in the aftermath of COVID-19… has invalidated many models that are based on historical data.”

Organisations commonly using machine learning for product recommendation engines or next-best-offer, for example, will have to rethink their approach. They need to broaden their machine learning techniques as there is not enough post-COVID-19 data to retrain supervised machine learning models.

Advanced modelling techniques can help

In any case the ‘new normal’ is still emerging, making the validity of prediction models a challenge, said Rita Sallam, distinguished research vice president at Gartner.

“It’s a lot harder to just say those models based on typical data that happened prior to the COVID-19 outbreak, or even data that happened during the pandemic, will be valid. Essentially what we’re seeing is [a] complete shift in many ways in customer expectations, in their buying patterns. Old processes, products, customer needs and wants, and even business models are being replaced. Organisations have to replace them at a pace that is just unprecedented,” she said.

Source: Machine-learning models trained on pre-COVID data are now completely out of whack, says Gartner • The Register

Teaching physics to neural networks removes ‘chaos blindness’

A neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.

The drawback to this is something called “chaos blindness”—an inability to predict or respond to chaos in a system. Conventional AI is chaos blind. But researchers from NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) have found that incorporating a Hamiltonian function into neural networks better enables them to “see” chaos within a system and adapt accordingly.

Simply put, the Hamiltonian embodies the complete information about a dynamic physical system—the total amount of all the energies present, kinetic and potential. Picture a swinging pendulum, moving back and forth in space over time. Now look at a snapshot of that pendulum. The snapshot cannot tell you where that pendulum is in its arc or where it is going next. Conventional neural networks operate from a snapshot of the pendulum. Neural networks familiar with the Hamiltonian flow understand the entirety of the pendulum’s movement—where it is, where it will or could be, and the energies involved in its movement.

In a proof-of-concept project, the NAIL team incorporated Hamiltonian structure into neural networks, then applied them to a known model of stellar and molecular dynamics called the Hénon-Heiles model. The Hamiltonian neural network accurately predicted the dynamics of the system, even as it moved between order and chaos.
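
The general construction, independent of the NAIL team's specific code, is to have the network learn a scalar H(q, p) and read the predicted dynamics off its gradients via Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = -∂H/∂q. A minimal PyTorch sketch, trained here on a toy harmonic oscillator rather than Hénon-Heiles, looks like this:

```python
import torch

class HNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Learns a scalar Hamiltonian H(q, p) from the 2-d state (q, p).
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

    def time_derivatives(self, state):
        state = state.requires_grad_(True)
        H = self.net(state).sum()
        dH = torch.autograd.grad(H, state, create_graph=True)[0]
        dq_dt, dp_dt = dH[:, 1:2], -dH[:, 0:1]      # Hamilton's equations
        return torch.cat([dq_dt, dp_dt], dim=1)

# Toy training data: harmonic oscillator, H = (q^2 + p^2)/2, so dq/dt = p, dp/dt = -q.
states = torch.randn(1024, 2)
true_derivs = torch.stack([states[:, 1], -states[:, 0]], dim=1)

model = HNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    pred = model.time_derivatives(states)
    loss = ((pred - true_derivs) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```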

“The Hamiltonian is really the ‘special sauce’ that gives neural networks the ability to learn order and chaos,” says John Lindner, visiting researcher at NAIL, professor of physics at The College of Wooster and corresponding author of a paper describing the work. “With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems.”

Source: Teaching physics to neural networks removes ‘chaos blindness’

More information: Anshul Choudhary et al, Physics-enhanced neural networks learn order and chaos, Physical Review E (2020). DOI: 10.1103/PhysRevE.101.062207

Researchers taught a robot to suture by showing it surgery videos

Stitching a patient back together after surgery is a vital but monotonous task for medics, often requiring them to repeat the same simple movements over and over hundreds of times. But thanks to a collaborative effort between Intel and the University of California, Berkeley, tomorrow’s surgeons could offload that grunt work to robots — like a macro, but for automated suturing.

The UC Berkeley team, led by Dr. Ajay Tanwani, has developed a semi-supervised, deep-learning AI system dubbed Motion2Vec. This system is designed to watch publicly available videos of surgery performed by actual doctors, break down the medic’s movements when suturing (needle insertion, extraction and hand-off) and then mimic them with a high degree of accuracy.

“There’s a lot of appeal in learning from visual observations, compared to traditional interfaces for learning in a static way or learning from [mimicking] trajectories, because of the huge amount of information content available in existing videos,” Tanwani told Engadget. When it comes to teaching robots, a picture, apparently, is worth a thousand words.

“YouTube gets 500 hours of new material every minute. It’s an incredible repository, dataset,” Dr. Ken Goldberg, who runs the UC Berkeley lab and advised Tanwani’s team on this study, added. “Any human can watch almost any one of those videos and make sense of it, but a robot currently cannot — they just see it as a stream of pixels. So the goal of this work is to try and make sense of those pixels. That is to look at the video, analyze it, and… be able to segment the videos into meaningful sequences.”

To do this, the team leveraged a siamese network to train its AI. Siamese networks are built to learn the distance functions from unsupervised or weakly-supervised data, Tanwani explained. “The idea here is that you want to produce the high amount of data that is in recombinant videos and compress it into a low dimensional manifold,” he said. “Siamese networks are used to learn the distance functions within this manifold.”

Basically, these networks can rank the degree of similarity between two inputs, which is why they’re often used for image recognition tasks like matching surveillance footage of a person with their driver’s license photo. In this case, however, the team is using the network to match the video input of what the manipulator arms are doing with the existing video of a human doctor making the same motions. The goal here is to raise the robot’s performance to near-human levels.
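
A generic siamese setup, a shared-weight encoder plus a contrastive loss that pulls matching pairs together and pushes non-matching pairs apart, can be sketched as follows. This shows the general technique only, not Motion2Vec's actual architecture, inputs or loss:

```python
import torch
import torch.nn.functional as F

class Encoder(torch.nn.Module):
    """One 'twin'; the same weights are applied to both inputs of a pair."""
    def __init__(self, in_dim=128, emb_dim=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull embeddings of matching pairs together; push non-matching pairs
    at least `margin` apart."""
    dist = F.pairwise_distance(z1, z2)
    return torch.mean(same_label * dist.pow(2) +
                      (1 - same_label) * F.relu(margin - dist).pow(2))

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Toy pairs: stand-ins for two video segments plus a 1/0 "same gesture" label.
x1, x2 = torch.randn(256, 128), torch.randn(256, 128)
same = torch.randint(0, 2, (256,)).float()

for step in range(200):
    loss = contrastive_loss(encoder(x1), encoder(x2), same)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))
```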

And since the system relies on a semi-supervised learning structure, the team needed just 78 videos from the JIGSAWS database to train their AI to perform its task with 85.5 percent segmentation accuracy and an average 0.94 centimeter error in targeting accuracy.

It’s going to be years before these sorts of technologies make their way to actual operating theaters but Tanwani believes that once they do, surgical AIs will act much like Driver Assist does on today’s semi-autonomous cars. They won’t replace human surgeons so much as augment their performance by taking over low-level, repetitive tasks. The Motion2Vec system isn’t just for suturing. Given proper training data, the AI could eventually be tasked with any of a number of duties, such as debridement (picking dead flesh and debris from a wound), but don’t expect it to perform your next appendectomy.

“We’re not there yet, but what we’re moving towards is the ability for a surgeon, who would be watching the system, indicate where they want a row of sutures, convey that they want six overhand sutures,” Goldberg said. “Then the robot would essentially start doing that and the surgeon would… be able to relax a little bit so that they could then be more rested and able to focus on more complex or nuanced parts of the surgery.”

“We believe that would help the surgeons productively focus their time in performing more complicated tasks,” Tanwani added, “and use technology to assist them in taking care of the mundane routine.”

Source: Researchers taught a robot to suture by showing it surgery videos | Engadget

‘DeepFaceDrawing’ AI can turn simple sketches into detailed photo portraits

Researchers have found a way to turn simple line drawings into photo-realistic facial images. Developed by a team at the Chinese Academy of Sciences in Beijing, DeepFaceDrawing uses artificial intelligence to help “users with little training in drawing to produce high-quality images from rough or even incomplete freehand sketches.”

This isn’t the first time we’ve seen tech like this (remember the horrifying results of Pix2Pix’s autofill tool?), but it is certainly the most advanced to date, and it doesn’t require the same level of detail in source sketches as previous iterations have. It works largely through probability — instead of requiring detailed eyelid or lip shapes, for example, the software refers to a database of faces and facial components, and considers how each facial element works with each other. Eyes, nose, mouth, face shape and hair type are all considered separately, and then assembled into a single image.

As the paper explains, “Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches.”

It’s not clear how the software will handle race. Of the 17,000 sketches and their corresponding photos created so far, the majority have been Caucasian and South American faces. This could be a result of the source data (bias is an ongoing problem in the world of AI), or down to the complexity of face shapes — the researchers don’t provide any further details.

In any case, the technology is due to go on show at this year’s (virtual) SIGGRAPH conference in July. According to the project’s website, code for the software is “coming soon,” which suggests we could see its application in the wild in the coming months — not only as a fun app to play around with, but also potentially in law enforcement, helping to rapidly generate images of suspects.

Source: ‘DeepFaceDrawing’ AI can turn simple sketches into detailed photo portraits | Engadget

Researchers Have Created a Tool That Can Perfectly Depixelate Faces

The typical approach to increasing the resolution of an image is to start with the low-res version and use intelligent algorithms to predict and add additional details and pixels in order to artificially generate a high-res version. But because a low-res version of an image can lack significant details, fine features are often lost in the process, resulting in, particularly with faces, an overly soft and smoothed out appearance in the results lacking fine details. The approach a team of researchers from Duke University has developed, called Pulse (Photo Upsampling via Latent Space Exploration), tackles the problem in an entirely different way by taking advantage of the startling progress made with machine learning in recent years.

The Pulse research team from Duke University demonstrating the results (the lower row of headshots) of Pulse processing a low-res image (the middle row of headshots) compared to the original high-res photos (the top row of headshots). Photo: Duke University

Pulse starts with a low-res image, but it doesn’t work with or process it directly. It instead uses it as a target reference for an AI-based face generator that relies on generative adversarial networks to randomly create realistic headshots. We’ve seen these tools used before in videos where thousands of non-existent but lifelike headshots are generated, but in this case, after the faces are created, they’re downsized to the resolution of the original low-res reference and compared against it, looking for a match. It seems like an entirely random process that would take decades to find a high-res face that matches the original sample when it’s shrunk, but the process is able to quickly find a close comparison and then gradually tweak and adjust it until it produces a down-sampled result that matches the original low-res sample.
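
Stripped of the real pretrained GAN (PULSE uses StyleGAN), the optimisation pattern is: search latent space so the generator's output matches the low-res photo after downscaling. The tiny untrained "generator" below is a placeholder, so its output is meaningless; only the loop structure is the point:

```python
import torch
import torch.nn.functional as F

class StandInGenerator(torch.nn.Module):
    """Maps a latent vector to a 64x64 RGB image; in PULSE this is a pretrained GAN."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 64 * 64 * 3), torch.nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

generator = StandInGenerator().eval()
low_res = torch.rand(1, 3, 16, 16)           # the pixelated input we want to "explain"

z = torch.randn(1, 128, requires_grad=True)  # latent code to optimise
opt = torch.optim.Adam([z], lr=0.05)

for step in range(300):
    high_res = generator(z)
    downscaled = F.interpolate(high_res, size=(16, 16), mode="bilinear", align_corners=False)
    loss = F.mse_loss(downscaled, low_res)    # match the low-res target after downscaling
    opt.zero_grad()
    loss.backward()
    opt.step()

reconstruction = generator(z).detach()        # a plausible (not "the") high-res face
```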

Source: Researchers Have Created a Tool That Can Perfectly Depixelate Faces

Trillions of Words Analyzed, OpenAI Sets Loose AI Language Colossus – The API

Over the past few months, OpenAI has vacuumed an incredible amount of data into its artificial intelligence language systems. It sucked up Wikipedia, a huge swath of the rest of the internet and tons of books. This mass of text – trillions of words – was then analyzed and manipulated by a supercomputer to create what the research group bills as a major AI breakthrough and the heart of its first commercial product, which came out on Thursday.

The product name — OpenAI calls it “the API” — might not be magical, but the things it can accomplish do seem to border on wizardry at times. The software can perform a broad set of language tasks, including translating between languages, writing news stories and poems and answering everyday questions. Ask it, for example, if you should keep reading a story, and you might be told, “Definitely. The twists and turns keep coming.”

OpenAI wants to build the most flexible, general purpose AI language system of all time. Typically, companies and researchers will tune their AI systems to handle one, limited task. The API, by contrast, can crank away at a broad set of jobs and, in many cases, at levels comparable with specialized systems. While the product is in a limited test phase right now, it will be released broadly as something that other companies can use at the heart of their own offerings such as customer support chat systems, education products or games, OpenAI Chief Executive Officer Sam Altman said.

[…]

Software developers can begin training the AI system just by showing it a few examples of what they want the code to do. If you ask it a number of questions in a row, for example, the system starts to sense it’s in question-and-answer mode and tweaks its responses accordingly. There are also tools that let you alter how literal or creative you want the AI to be.

But even a layperson – i.e. this reporter – can use the product. You can simply type text into a box, hit a button and get responses. Drop a couple paragraphs of a news story into the API, and it will try to complete the piece with results that vary from I-kinda-fear-for-my-job good to this-computer-might-be-on-drugs bad.

Source: Trillions of Words Analyzed, OpenAI Sets Loose AI Language Colossus – Bloomberg

deepart.io turns your picture into versions of existing art pictures

Artificial intelligence turning your photos into art

It uses the stylistic elements of one image to draw the content of another. Get your own artwork in just three steps.

  1. Upload photo

    The first picture defines the scene you would like to have painted.

  2. Choose style

    Choose among predefined styles or upload your own style image.

  3. Submit

    Our servers paint the image for you. You get an email when it’s done.
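
Under the hood, this kind of service follows the Gatys-style neural style transfer recipe: optimise an output image so its deep-network content features match the uploaded photo while its Gram-matrix style statistics match the chosen artwork. deepart.io's actual pipeline isn't public; the sketch below is a generic PyTorch/torchvision version with random stand-in images:

```python
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER, STYLE_LAYERS = 21, [0, 5, 10, 19, 28]   # conv layer indices in vgg19.features

def extract(img):
    """Collect the content feature map and per-layer Gram matrices for an image."""
    content, styles, x = None, [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == CONTENT_LAYER:
            content = x
        if i in STYLE_LAYERS:
            b, c, h, w = x.shape
            feat = x.view(c, h * w)
            styles.append(feat @ feat.t() / (c * h * w))   # Gram matrix
    return content, styles

content_img = torch.rand(1, 3, 256, 256)   # stand-in for the uploaded photo
style_img = torch.rand(1, 3, 256, 256)     # stand-in for the chosen artwork

with torch.no_grad():
    target_content, _ = extract(content_img)
    _, target_styles = extract(style_img)

output = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([output], lr=0.02)

for step in range(200):
    content, styles = extract(output)
    style_loss = sum(F.mse_loss(s, t) for s, t in zip(styles, target_styles))
    loss = F.mse_loss(content, target_content) + 1e4 * style_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

result = output.detach().clamp(0, 1)   # the "painted" version of the photo
```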

Source: deepart.io – become a digital artist