Stitching a patient back together after surgery is a vital but monotonous task for medics, often requiring them to repeat the same simple movements over and over hundreds of times. But thanks to a collaborative effort between Intel and the University of California, Berkeley, tomorrow’s surgeons could offload that grunt work to robots — like a macro, but for automated suturing.
The UC Berkeley team, led by Dr. Ajay Tanwani, has developed a semi-supervised deep-learning system dubbed Motion2Vec. The system is designed to watch publicly available videos of procedures performed by actual doctors, break down the surgeon's movements when suturing (needle insertion, extraction and hand-off) and then mimic them with a high degree of accuracy.
“There’s a lot of appeal in learning from visual observations, compared to traditional interfaces for learning in a static way or learning from [mimicking] trajectories, because of the huge amount of information content available in existing videos,” Tanwani told Engadget. When it comes to teaching robots, a picture, apparently, is worth a thousand words.
“YouTube gets 500 hours of new material every minute. It’s an incredible repository, dataset,” Dr. Ken Goldberg, who runs the UC Berkeley lab and advised Tanwani’s team on this study, added. “Any human can watch almost any one of those videos and make sense of it, but a robot currently cannot — they just see it as a stream of pixels. So the goal of this work is to try and make sense of those pixels. That is, to look at the video, analyze it, and… be able to segment the videos into meaningful sequences.”
To do this, the team leveraged a siamese network to train its AI. Siamese networks are built to learn distance functions from unsupervised or weakly supervised data, Tanwani explained. “The idea here is that you want to take the high amount of data that is in these videos and compress it into a low dimensional manifold,” he said. “Siamese networks are used to learn the distance functions within this manifold.”
Basically, these networks can rank the degree of similarity between two inputs, which is why they’re often used for image recognition tasks like matching surveillance footage of a person with their driver's license photo. In this case, however, the team is using the network to match the video input of what the manipulator arms are doing with the existing video of a human doctor making the same motions. The goal is to raise the robot’s performance to near-human levels.
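For the curious, here's what that looks like in practice: a minimal PyTorch sketch of the general siamese metric-learning idea, with illustrative names and dimensions rather than the team's actual Motion2Vec architecture. Two branches share one encoder, and a triplet loss pulls embeddings of the same gesture together while pushing different gestures apart.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """One 'twin': maps a frame-level feature vector to a low-dimensional embedding."""
    def __init__(self, in_dim=2048, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        # Unit-normalize so distances live on a compact manifold
        return F.normalize(self.net(x), dim=-1)

encoder = SiameseEncoder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)

# anchor/positive: frames from the same surgical gesture (e.g. needle insertion);
# negative: a frame from a different gesture (e.g. hand-off)
anchor, positive, negative = torch.randn(3, 16, 2048).unbind(0)
loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```

Because both branches run through the same encoder object, the "twins" share weights by construction; that weight sharing is what makes the network siamese.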
And since the system relies on a semi-supervised learning structure, the team needed just 78 videos from the JIGSAWS database to train their AI to perform its task with 85.5 percent segmentation accuracy and an average 0.94 centimeter error in targeting accuracy.
It’s going to be years before these sorts of technologies make their way into actual operating theaters, but Tanwani believes that once they do, surgical AIs will act much like Driver Assist does on today’s semi-autonomous cars. They won’t replace human surgeons so much as augment their performance by taking over low-level, repetitive tasks. Nor is the Motion2Vec system just for suturing: given proper training data, the AI could eventually be tasked with any number of duties, such as debridement (picking dead flesh and debris from a wound). But don’t expect it to perform your next appendectomy.
“We’re not there yet, but what we’re moving towards is the ability for a surgeon, who would be watching the system, [to] indicate where they want a row of sutures, convey that they want six overhand sutures,” Goldberg said. “Then the robot would essentially start doing that and the surgeon would… be able to relax a little bit so that they could then be more rested and able to focus on more complex or nuanced parts of the surgery.”
“We believe that would help the surgeons productively focus their time in performing more complicated tasks,” Tanwani added, “and use technology to assist them in taking care of the mundane routine.”
Researchers have found a way to turn simple line drawings into photo-realistic facial images. Developed by a team at the Chinese Academy of Sciences in Beijing, DeepFaceDrawing uses artificial intelligence to help “users with little training in drawing to produce high-quality images from rough or even incomplete freehand sketches.”
This isn’t the first time we’ve seen tech like this (remember the horrifying results of Pix2Pix’s autofill tool?), but it is certainly the most advanced to date, and it doesn’t require the same level of detail in source sketches as previous iterations did. It works largely through probability: instead of requiring detailed eyelid or lip shapes, for example, the software refers to a database of faces and facial components, and considers how the facial elements work with one another. Eyes, nose, mouth, face shape and hair type are all considered separately, then assembled into a single image.
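To make that component-wise idea concrete, here's a hedged NumPy sketch. The component names, dimensions and blending scheme are illustrative assumptions, not the authors' released code: each rough component embedding is projected onto a manifold of plausible components by blending its nearest neighbours from a database built from real photos.

```python
import numpy as np

COMPONENTS = ["left_eye", "right_eye", "nose", "mouth", "remainder"]

def project_to_manifold(sketch_vec, database, k=5):
    """Replace a rough component embedding with a distance-weighted blend of
    its k nearest plausible neighbours from the database (a 'soft constraint')."""
    dists = np.linalg.norm(database - sketch_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)  # closer neighbours weigh more
    return (weights[:, None] * database[nearest]).sum(axis=0) / weights.sum()

# Hypothetical per-component embedding databases built from real face photos
rng = np.random.default_rng(0)
databases = {c: rng.standard_normal((1000, 64)) for c in COMPONENTS}
sketch = {c: rng.standard_normal(64) for c in COMPONENTS}  # per-component encoder output

refined = {c: project_to_manifold(sketch[c], databases[c]) for c in COMPONENTS}
# The refined embeddings would then feed an image-synthesis network
# that assembles them into a single photo-realistic face.
```

This is why the source sketch can be rough or incomplete: the final image is drawn from the space of plausible components, with the sketch acting only as a soft constraint.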
As the paper explains, “Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches.”
It’s not clear how the software will handle race. Of the 17,000 sketches and their corresponding photos created so far, the majority have been Caucasian and South American faces. This could be a result of the source data (bias is an ongoing problem in the world of AI), or down to the complexity of face shapes — the researchers don’t provide any further details.
In any case, the technology is due to go on show at this year’s (virtual) SIGGRAPH conference in July. According to the project’s website, code for the software is “coming soon,” which suggests we could see its application in the wild in the coming months — not only as a fun app to play around with, but also potentially in law enforcement, helping to rapidly generate images of suspects.
Social media research group Graphika today published a 120-page report [PDF] unmasking a new Russian information operation about which very little has been known until now.
Codenamed Secondary Infektion, the group is distinct from the Internet Research Agency (IRA), the Sankt Petersburg company (troll farm) that interfered in the 2016 US presidential election.
Graphika says this new and separate group has been operating since 2014 and has been relying on fake news articles, fake leaks, and forged documents to generate political scandals in countries across Europe and North America.
Graphika says that, building on previous research, it has now tracked down more than 2,500 pieces of content the Secondary Infektion group has posted online since early 2014.
Image: Graphika
According to Graphika’s analysis, most of the group’s content has followed nine primary themes:
Ukraine as a failed state or unreliable partner
The United States and NATO as aggressive and interfering in other countries
Europe as weak and divided
Critics of the Russian government as morally corrupt, alcoholic, or otherwise mentally unstable
Muslims as aggressive invaders
The Russian government as the victim of Western hypocrisy or plots
Western elections as rigged and candidates who criticized the Kremlin as unelectable
Turkey as an aggressive and destabilizing state
World sporting bodies and competitions as unfair, unprofessional, and Russophobic
Graphika says that most of this content has been aimed at attacking classic Russian political rivals such as Ukraine, the US, Poland, and Germany, but also other countries where Russian influence came under attack at one point or another.
Graphika said the group didn’t publish only in English; it adapted to each target, publishing content in the local language. In total, researchers found content posted in seven languages.
Image: Graphika
Unlike the IRA, which was primarily focused on creating division at the level of regular citizens, Secondary Infektion’s primary role appears to have been to influence decisions at the highest levels of foreign governments.
It did this by creating fake narratives, pitting Western countries against each other, and embarrassing anti-Russian politicians with fake articles and forged documents.
“The ‘leaks’ typically exposed some dramatic geopolitical scandal, such as a prominent Kremlin critic’s corrupt dealings or secret American plans to overthrow pro-Kremlin governments around the world,” the Graphika team said today.
The group had operations running during the US presidential election in 2016, the French elections in 2017, and the Swedish elections in 2018, but election interference was never its primary target.
Graphika said the group “aimed to exacerbate divisions between countries, trying to set Poles against Germans, Germans against Americans, Americans against Britons, and absolutely everyone against Ukrainians.”
Secondary Infektion liked blogs more than social media
Another way in which Secondary Infektion differed from the better-known IRA was its choice of platforms: while the IRA was mostly active on social media networks, the Secondary Infektion gang had a broader reach, publishing much of its content on blogs and news sites.
Graphika said it found content published on more than 300 platforms, from social media giants such as Facebook, Twitter, YouTube, and Reddit to blogging platforms like WordPress and Medium, but also niche discussion forums in Pakistan and Australia.
Image: Graphika
Graphika researchers also said Secondary Infektion was more advanced than the IRA. While the sloppy IRA operators were easily traced back to an exact building in Sankt Petersburg, Russia, the mystery of Secondary Infektion’s real identity remains unsolved.
“[Secondary Infektion’s] identity is the single most pressing question to emerge from this study,” the Graphika team wrote in its report today.
Researchers said the group managed to keep its identity secret because it paid very close attention to operational security (OpSec). Graphika says Secondary Infektion agents employed single-use burner accounts for almost everything they posted online, abandoning each account less than an hour after promoting its content.
This approach made it more difficult for the group to build a dedicated audience, but it allowed the group to orchestrate high-impact operations for years without giving away its infrastructure, modus operandi, or goals.
With its identity still a secret, the group is expected to continue operating and sowing conflict between Russia’s rivals.
Threat intel researchers have uncovered a phishing and malware campaign that targeted “a large European aerospace company” and which was run by the same North Koreans behind the hack of Sony Pictures.
While there are quite a few European aerospace firms, Slovakian infosec biz ESET was more concerned with the phishing ‘n’ malware campaign it detected on behalf of its unnamed client.
Branded “Operation Interception” by ESET, the researchers claimed the “highly targeted cyberattacks” were being spread by North Korean baddies Lazarus Group, who were behind the 2014 hack of Sony’s American entertainment business.
The threat group’s latest detected campaign involved targeting aerospace folk via LinkedIn, said the infoseccers. ESET researcher Jean-Ian Boutin explained: “In our case they were impersonating Collins Aerospace and General Dynamics (GD), two organisations in the same vertical as the targeted European organisations.” He said the Norks were targeting people who worked in “sales, marketing, tech, general admin” roles.
Collins and GD are two of the bigger names in North American aerospace; among other things, Collins makes avionic instruments and software while GD has fingers in pies ranging from the F-16 fighter jet through Gulfstream corporate aircraft, US Navy submarines and armoured vehicles. As bait dangled before honest people hoping to take a major step forwards in an aerospace career, these two companies were tempting lures.
“The [job] offer seemed too good to be true,” said Boutin as he explained the Lazarus ruse to The Reg. “Maybe [the recipient’s] career could take off in a big way?”
After a victim had been suitably reeled in, Lazarus would try to induce them to download a password-protected RAR archive containing a LNK file. Once clicked, that LNK file appeared to the victim to download a PDF containing job information. In the background, however, it also downloaded a malicious EXE that created a bunch of folders and set a Windows scheduled task to run a remote script periodically.
Once into a target’s network, the criminals would try to brute-force any Active Directory admin accounts they could find, as well as exfiltrate data by bundling it into a RAR archive and trying to upload it to a Dropbox account.
ESET illustration showing the Lazarus Group attack progression
The attackers were most insistent that the victim respond to their job offer only on a Windows machine running Internet Explorer. Once in, they resorted to PowerShell, taking advantage of the fact that “the logging of executed PowerShell commands is disabled by default.” Even so, evidence was found that the Lazarus crew went through the connected domain to enumerate all Active Directory accounts before trying to brute-force their way into admin accounts.
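As a defensive aside, administrators who want to close that logging gap can check whether the relevant Group Policy value has been set. A minimal, Windows-only Python sketch, assuming the standard ScriptBlockLogging policy path:

```python
import winreg  # standard library, Windows only

# Group Policy key that controls PowerShell script block logging
KEY = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "EnableScriptBlockLogging")
        print("Script block logging:", "enabled" if value else "disabled")
except FileNotFoundError:
    # No policy configured: logging of executed PowerShell commands is off,
    # which is the default the attackers took advantage of
    print("Policy key absent: script block logging is disabled (the default)")
```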
To avoid Windows security features blocking their malware, Lazarus also signed their code using a certificate first issued to 16:20 Software LLC, an American firm said by ESET to have been incorporated in May 2010.
Among other clues linking the malware’s components back to North Korea, Boutin said his team had seen build timestamps “added by the compiler showing when the executable was compiled” which neatly cross-referenced with normal office hours for East Asia. Corroborating that were some “host fingerprinting” techniques which uncovered various digital fragments “similar to backdoors the Lazarus Group is known to use,” as Boutin put it.
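That sort of timestamp analysis is straightforward to reproduce with the open-source pefile library. A rough sketch follows; the sample path is hypothetical, and bear in mind that compile timestamps are attacker-controlled and can be forged, which is why ESET corroborated them with other evidence:

```python
from datetime import datetime, timedelta, timezone

import pefile  # third-party: pip install pefile

pe = pefile.PE("suspicious_sample.exe")  # hypothetical sample path
compiled_utc = datetime.fromtimestamp(pe.FILE_HEADER.TimeDateStamp, tz=timezone.utc)

# Shift into UTC+9 (Korea Standard Time) and compare against a 9-to-6 work week
kst = compiled_utc.astimezone(timezone(timedelta(hours=9)))
office_hours = kst.weekday() < 5 and 9 <= kst.hour < 18

print(f"Compiled (UTC): {compiled_utc:%Y-%m-%d %H:%M}")
print(f"Compiled (KST): {kst:%Y-%m-%d %H:%M}  office hours: {office_hours}")
```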
What made the lure so sneaky was that it targeted potential jobseekers looking to leave their current employer, which Boutin speculated may have made some victims less likely to report it to that employer's cybersecurity teams.