The Bruce Murray Laboratory for Planetary Visualization has completed a 5.7 terapixel mosaic of the surface of Mars rendered at 5.0 m/px. Each pixel in the mosaic is about the size of a typical parking space, providing unprecedented resolution of the martian surface at the global scale.
The mosaic covers 99.5% of Mars from 88°S to 88°N. The pixels that make up the mosaic can all be mapped back to their source data, providing full traceability for the entire mosaic. The mosaic is available to stream over the internet and to download, as described below.
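Those headline numbers are easy to sanity-check. A minimal back-of-envelope sketch in Python, assuming Mars's mean radius of roughly 3,389.5 km (the radius is my assumption; the coverage and pixel-scale figures come from the text above):

```python
import math

# Rough check of the 5.7-terapixel figure.
radius_m = 3_389_500                      # Mars mean radius in meters (assumed)
surface_m2 = 4 * math.pi * radius_m**2    # ~1.44e14 m^2
covered_m2 = 0.995 * surface_m2           # mosaic spans 99.5% of the surface
pixels = covered_m2 / (5.0 * 5.0)         # one pixel covers 5.0 m x 5.0 m
print(f"{pixels / 1e12:.1f} terapixels")  # -> 5.7
```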
All data in the mosaic come from the Context Camera (CTX) onboard the Mars Reconnaissance Orbiter (MRO).
Right now, developers simply need to declare to Google that account deletion is somehow possible, but beginning next year, developers will have to make it easier to delete data through both their app and an online portal. Google specifies:
For apps that enable app account creation, developers will soon need to provide an option to initiate account and data deletion from within the app and online.
This means any app that lets you create an account to use it is required to allow you to delete that information when you’re done with it (or rather, request that the developer delete the data from their servers). Although you can request that your data be deleted now, it usually requires manually contacting the developer. This new policy would mean developers have to offer a kill switch from the get-go rather than having Android users do the legwork.
The web deletion requirement is particularly notable and must be “readily discoverable.” Developers must provide a link to a web form from the app’s Play Store landing page, with the idea being to let users delete account data even if they no longer have the app installed. Per the existing Android developer policy, all apps must declare how they collect and handle user data; Google introduced the policy in 2021 and made it mandatory last year. When you go into the Play Store and expand the “Data Safety” section under each app listing, developers list out data collection by criteria.
Simply removing an app from your Android device doesn’t completely scrub your data. Like software on a desktop operating system, files and folders are sometimes left behind from when the app was operating. This new policy will hopefully help you keep your data secure by wiping unnecessary account info from the app developer’s servers, and it could also cut down on straggling data on your device. Conversely, you don’t have to delete your data if you think you’ll come back to the app later. Google wants to ensure that when it says you have a “choice,” it can point to something obvious.
It’s unclear how Google will determine if a developer follows the rules. It is up to the app developer to disclose whether user-specific app data is actually deleted. Earlier this year, Mozilla called out Google after discovering significant discrepancies between the top 20 most popular free apps’ internal privacy policies and those they listed in the Play Store.
A Cornell University researcher has developed sonar glasses that “hear” you without speaking. The eyeglass attachment uses tiny microphones and speakers to read the words you mouth as you silently command it to pause or skip a music track, enter a passcode without touching your phone or work on CAD models without a keyboard.
Cornell Ph.D. student Ruidong Zhang developed the system, which builds off a similar project the team created using a wireless earbud — and models before that which relied on cameras. The glasses form factor removes the need to face a camera or put something in your ear. “Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible,” said Cheng Zhang, Cornell assistant professor of information science. “We’re moving sonar onto the body.”
The researchers say the system only requires a few minutes of training data (for example, reading a series of numbers) to learn a user’s speech patterns. Then, once it’s ready to work, it sends and receives sound waves across your face, sensing mouth movements while using a deep learning algorithm to analyze echo profiles in real time “with about 95 percent accuracy.”
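The article doesn’t spell out the signal processing, but the general idea of an echo profile can be sketched: emit a near-ultrasonic sweep, then cross-correlate what the microphone hears against what was transmitted, so reflections show up as correlation peaks whose lags shift as the mouth moves. Everything below (sample rate, sweep band, delay) is illustrative, not the actual EchoSpeech pipeline, and the deep-learning classifier on top is omitted entirely:

```python
import numpy as np

# Illustrative echo-profile sketch; all parameters are made up.
fs = 50_000                                  # sample rate (Hz), assumed
T = 0.01                                     # one 10 ms frame
t = np.arange(0, T, 1 / fs)
f0, f1 = 17_000, 20_000                      # near-ultrasonic sweep band
tx = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))  # linear chirp

delay = 30                                   # synthetic echo delay (samples)
rx = 0.4 * np.concatenate([np.zeros(delay), tx[:-delay]])  # attenuated echo
rx += 0.02 * np.random.randn(rx.size)        # receiver noise

# The "echo profile": cross-correlation of received vs. transmitted signal.
profile = np.correlate(rx, tx, mode="full")
lag = profile.argmax() - (tx.size - 1)       # lag of the strongest reflection
print(f"strongest echo at lag {lag} samples (~{lag / fs * 343 / 2 * 100:.1f} cm)")
```

Tracking how those peaks move frame to frame is what gives a model something to classify; a tone wouldn’t work here, since only a frequency sweep produces a sharp correlation peak.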
The system does this while offloading data processing (wirelessly) to your smartphone, allowing the accessory to remain small and unobtrusive. The current version offers around 10 hours of battery life for acoustic sensing. Additionally, no data leaves your phone, eliminating privacy concerns. “We’re very excited about this system because it really pushes the field forward on performance and privacy,” said Cheng Zhang. “It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”
Shadetree hackers—or, as they’re more commonly called, tech-savvy thieves—have found a new way to steal cars. No, it’s not a relay attack, Bluetooth exploit, key fob replay, or even a USB cable. Instead, these thieves are performing a modern take on hot-wiring without ever ripping apart the steering column.
Crafty criminals have resorted to using specially crafted devices that simply plug into the wiring harness behind the headlight of a victim’s car. Once they’re plugged in, they’re able to unlock, start, and drive away before the owner even catches wind of what’s going on.
Last year, Ian Tabor, who runs the UK chapter of Car Hacking Village, had his Toyota RAV4 stolen from outside of his home near London. Days prior to the theft, he found that thieves had damaged his car without successfully taking it. It wasn’t quite clear if it was a case of vandalism, or if the thieves had tried to make off with the car’s front bumper, but he did notice that the headlight harness had been yanked out.
Ultimately, thieves did make away with his car. And after Tabor’s car was stolen, so was his neighbor’s Toyota Land Cruiser. But, folks, this is 2023. It’s not like you can just hotwire a car and drive away as the movies suggest. This got Tabor curious—after all, hacking cars is something he does for fun. How exactly did the thieves make off with his car?
Tabor got to work with Toyota’s “MyT” app. This is Toyota’s telematics system which pumps Diagnostic Trouble Codes up to the automaker’s servers rather than forcing you to plug in a code reader to the car’s OBD2 port. Upon investigation, Tabor noticed that his RAV4 kicked off a ton of DTCs just prior to being stolen—one of which was for the computer that controls the car’s exterior lighting.
This led Tabor to wonder if the thieves somehow made use of the vehicle CAN Bus network to drive away with his car. After scouring the dark web, Tabor was able to locate expensive tools claiming to work for various automakers and models, including BMW, Cadillac, Chrysler, Fiat, Ford, GMC, Honda, Jeep, Jaguar, Lexus, Maserati, Nissan, Toyota, as well as Volkswagen. The cost? As much as $5,400, but that’s a drop in the bucket if they can actually deliver on the promise of enabling vehicle theft.
Tabor decided to order one of these devices to try out himself. Together with Ken Tindell, the CTO of Canis Automotive Labs, the duo tore down a device to find out what made it tick and published a writeup of their findings.
As it turns out, the expensive device was made up of just $10 in components. The real magic is in the programming, which was set up to inject fake CAN messages into the car’s actual CAN Bus network. The messages essentially tricked the car into thinking a trusted key was present, which convinced the CAN Gateway (the component that filters CAN messages into their appropriate segmented networks) into passing along messages instructing the car to disable its immobilizer and unlock the doors, essentially allowing the thieves to simply drive away.
What’s more, the device simply looked like an ordinary portable speaker. The guts were stuffed inside the shell of a JBL-branded Bluetooth speaker, and all the thief needs to do is power the device on.
Once the device is on and plugged in, it wakes up the CAN network by sending a frame—much as the car would if you pulled a door handle, approached with a passive entry key, or hit a button on your fob. It then listens for a specific CAN message to begin its attack. The device then emulates a hardware error that tricks other ECUs on the CAN network into halting their transmissions, giving the attacking device priority to send its spoofed messages to CAN devices.
That pause in valid traffic is the device’s window to go into attack mode. It then sends the spoofed “valid key present” messages to the gateway, making the car think that an actual valid key is being used to control the vehicle. Next, the attacker simply presses the speaker’s “play” button, and the car’s doors unlock.
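To make the mechanics concrete, here is a deliberately simplified sketch of CAN injection using the open source python-can library. The arbitration ID and payload are placeholders: the real “valid key present” frame is manufacturer-specific, and a real device also silences the other ECUs first, as described above:

```python
import can

# Minimal CAN-injection sketch (python-can over Linux SocketCAN).
# The ID and payload below are hypothetical placeholders; in the wild
# they are reverse-engineered per make and model.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

spoofed = can.Message(
    arbitration_id=0x0A5,           # hypothetical ID of the smart-key ECU
    data=[0x02, 0x01, 0x00, 0x00],  # hypothetical "valid key present" payload
    is_extended_id=False,
)

# Repeat the frame so the gateway keeps believing a trusted key is present.
for _ in range(50):
    bus.send(spoofed)
```

Because classic CAN has no sender authentication, any node on the bus (including one spliced in behind a headlight) can emit frames that are indistinguishable from the real key ECU’s.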
Given that the manufacturer of these CAN injection devices claims they are effective against a myriad of makes and models, this could be an industry-wide problem that may take some brainstorming to fix.
The good news is that this type of attack can be thwarted. While there are quick-and-dirty mitigations that could eventually be defeated again, the durable fix is for automakers to authenticate or encrypt traffic on their CAN Bus networks. According to Tindell, Canis is working on a project to retrofit U.S. military vehicles with just such an encryption scheme, much like the one he suggests as the fix for commercial vehicles facing this issue.
Several law enforcement agencies have teamed up to take down Genesis Market, a website selling access to “over 80 million account access credentials,” which included the standard usernames and passwords, as well as much more dangerous data like session tokens. According to a press release from the US Department of Justice, the site was seized on Tuesday. The European Union Agency for Law Enforcement Cooperation (or Europol) says that 119 of the site’s users have been arrested.
Genesis Marketplace has been around since 2018, according to the Department of Justice, and was “one of the most prolific initial access brokers (IABs) in the cybercrime world.” It let hackers search for certain types of credentials, such as ones for social media accounts, bank accounts, etc., as well as search for credentials based on where in the world they came from.
The agencies have teamed up with HaveIBeenPwned.com to make it easy for the public to check if their login credentials were stolen, and I’d highly recommend doing so — because of the way Genesis worked, this isn’t the typical “just change your password and you’ll be fine” scenario. For instructions on how to check whether Genesis was selling your stolen info, check out the writeup from Troy Hunt, who runs HaveIBeenPwned.
(The TL;DR is that you should sign up for HIBP’s email notification service with all of your important email addresses, and then be sure to click the “Verify email” button in the confirmation email. Just searching for your email on the site won’t tell you if you were impacted.)
[…]
While Genesis Marketplace traded in usernames and passwords, it also sold access to users’ cookies and browser fingerprints as well, which could let hackers bypass protections like two-factor authentication. Cookies — or login tokens, to be specific — are files that websites store on your computer to show that you’ve already logged in by correctly entering your password and two-factor authentication information. They’re the reason you don’t have to log into a website each time you visit it. (They’re also the reason that the joint effort to take down Genesis was given the delightful codename “Operation Cookie Monster.”)
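A minimal sketch of why that matters, using Python’s requests library with hypothetical endpoints and field names: once a session cookie has been issued, it stands in for both the password and the 2FA code.

```python
import requests

# Hypothetical login flow for illustration only.
s = requests.Session()
s.post("https://example.com/login",
       data={"user": "alice", "password": "hunter2", "totp": "123456"})

# The server's Set-Cookie response is now stored client-side; subsequent
# requests present the token instead of credentials.
print(s.cookies.get_dict())               # e.g. {'session': 'Zx81k...'}
r = s.get("https://example.com/account")  # no password or 2FA re-prompt

# Anyone who copies that cookie value into their own client looks
# identical to the logged-in user; that, plus browser fingerprints,
# is what Genesis sold.
```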
[…]
Genesis stole the fingerprints, too. What’s more, it even provided a browser extension that let hackers spoof the victim’s fingerprint while using their login cookie to gain access to an account, according to a 2019 report from ZDNET.
A unit of the Russian military intelligence service GRU has hacked routers belonging to Dutch private individuals and small and medium-sized companies. The Dutch Military Intelligence and Security Service (MIVD) discovered this, de Volkskrant reports.
The routers are part of a worldwide attack network and can be used, for example, to destroy or paralyze the networks of ministries. It is estimated that thousands of hacked devices worldwide are in the hands of the Russian unit. In the Netherlands, this involves several dozen routers.
The hacked devices are the more advanced routers often found at small businesses. The Russian unit takes over the routers and can then monitor and control them, investigative journalist Huib Modderkolk told NOS Radio 1 Journaal.
According to him, this unit was created for sabotage: “It is also called the most dangerous hacking group in the world.”
The MIVD discovered the digital attack because the service saw many Dutch IP addresses in the attack network. According to Modderkolk, the victims often do not realize that they have been hacked. Routers left on their default settings or protected by a simple password are easy to hack. The affected individuals and companies have now been informed by the MIVD.
It is striking that the MIVD is making this information public: “They hope for more awareness that this is actually going on, but the aim is also to let the Russians know: ‘we know what you are doing.’” According to Modderkolk, this is a development of recent years; the British and Americans are also increasingly disclosing this type of sensitive information.
The National Coordinator for Counterterrorism and Security (NCTV) has already warned of disinformation and cyber threats in connection with the war in Ukraine. Such cyber attacks could affect the communication systems of banks or hospitals, among others. At the moment there are no specific threats, but given the rapid developments in the war, this could change quickly.
It is not clear whether the Russian group’s router hacking is related to the war in Ukraine.
Human memory might be even more unreliable than currently thought. In a new study, scientists found that it’s possible for people to form false memories of an event within seconds of it occurring. This almost-immediate misremembering seems to be shaped by our expectations of what should happen, the team says.
[…]
they recruited hundreds of volunteers over a series of four experiments to complete a task: They would look at certain letters and then be asked to recall one highlighted letter right after. However, the scientists used letters that were sometimes reversed in orientation, so the volunteers had to remember whether their selection was mirrored or not (for example, correctly identifying whether they saw c vs ↄ). They also focused on the volunteers who were highly confident about their choices during the task.
Overall, the participants regularly misremembered the letters, but in a specific way. People were generally good at remembering when a typical letter was shown, with their inaccuracy rates hovering around 10%. But they were substantially worse at remembering a mirrored letter, with inaccuracy rates up to 40% in some experiments. And, interestingly enough, their memory got worse the longer they had to wait before recalling it. When they were asked to recall what they saw a half second later, for instance, they were wrong less than 20% of the time, but when they were asked three seconds later, the rate rose as high as 30%.
According to Otten, the findings—published Wednesday in PLOS One—indicate that our memory starts being shaped by our preconceptions almost immediately. People expect to see a regular letter, so they are rarely fooled into misremembering one; but when the unexpected happens, we often still default to what we predicted. This bias doesn’t seem to kick in instantaneously, though, since people’s short-term memory was better when they had to be especially quick on their feet.
“It is only when memory becomes less reliable through the passage of a tiny bit of time, or the addition of extra visual information, that internal expectations about the world start playing a role,” Otten said.
Some users of Microsoft’s free Outlook hosted service are finding they can no longer send or receive emails because of how the Windows giant now calculates the storage of attachments.
Microsoft account holders are allowed up to 15GB in their cloud-hosted email, which until recently included text and attachments, plus 5GB in their OneDrive storage. That policy changed on February 1. Since then, attachments count as part of the 5GB OneDrive allowance – and if that amount is exceeded, it throws a wrench into the email service.
The change doesn’t reduce the 15GB available for Outlook.com mail itself, but it can push OneDrive over its limit.
“This update may reduce how much cloud storage you have available to use with your OneDrive,” Microsoft wrote in a support note posted before the change. “If you reach your cloud storage quota, your ability to send and receive emails in Outlook.com will be disrupted.”
Redmond added that the plan was to gradually roll out the cloud storage changes and new quota bar starting February 1 across users’ app and Windows settings and Microsoft accounts. Two months later, that gradual rollout is beginning to hit more and more users.
One reader told The Register that his Outlook recently stopped working and indicated that he had surpassed the 5GB storage limit, reaching 6.1GB. He was unaware of the policy change, so he was confused when he saw that in his email account he had used only 6.8GB of the 15GB allowed.
It was the change in how attachments are counted that tripped him up. Microsoft told him about the new policy.
“So instantly, I have lost 10GB of email capacity and because my attachments were greater than 5GB that instantly disabled my email and triggered bounce-backs (even sending and receiving with no attachments),” the reader told us.
“No one deletes attachments every time an email is received. This is like blackmail. MS is forcing us to buy a subscription by the back door or to have to delete emails with attachments on a regular basis ad infinitum.”
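Concretely, here is the before-and-after accounting with the reader’s own numbers, as a minimal sketch (figures taken from the quotes above):

```python
# Quotas as described in the article.
MAIL_QUOTA_GB = 15.0       # Outlook.com mailbox allowance
DRIVE_QUOTA_GB = 5.0       # free OneDrive allowance

# The reader's figures.
mail_total_gb = 6.8        # everything in his mailbox
attachments_gb = 6.1       # attachments within that total

# Before February 1: attachments counted only against the 15GB mailbox.
old_ok = mail_total_gb <= MAIL_QUOTA_GB       # True  -> mail flows

# After: attachments also count against the 5GB OneDrive allowance,
# and exceeding it blocks send/receive on the mail account.
new_ok = attachments_gb <= DRIVE_QUOTA_GB     # False -> mail blocked

print(old_ok, new_ok)  # True False
```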
He isn’t the only one perplexed by the issue.
[…]
One user, apparently unaware that it was the attachments shifting over to OneDrive causing the email problems, deleted a lot of emails, only to find the “storage used” figure didn’t budge.
“We could see inside people’s garages and their private properties,” a former employee told Reuters. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”
One office in particular, located in San Mateo, reportedly had a “free-wheeling” atmosphere, where employees would share videos and images with wild abandon. These pics or vids would often be “marked up” in Adobe Photoshop, former employees said, converting drivers’ personal experiences into memes that would circulate throughout the office.
“The people who buy the car, I don’t think they know that their privacy is, like, not respected,” one former employee was quoted as saying. “We could see them doing laundry and really intimate things. We could see their kids.”
Another former employee seemed to admit that all of this was very uncool: “It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” the employee told the news outlet. Yes, it’s always a vote of confidence when a company’s own employees won’t use the products that they sell.
Privacy concerns related to Tesla’s data-guzzling autos aren’t exactly new. Back in 2021, the Chinese government formally banned the vehicles on the premises of certain military installations, calling the company a “national security” threat. The Chinese were worried that the cars’ sensors and cameras could be used to funnel data out of China and back to the U.S. for the purposes of espionage. Beijing seems to have been on to something—although it might be the case that the spying threat comes less from America’s spooks than it does from bored slackers back at Tesla HQ.
One of the reasons that Tesla’s cameras seem so creepy is that you can never really tell if they’re on or not. A couple of years ago, a stationary Tesla helped catch a suspect in a Massachusetts hate crime, when its security system captured images of the man slashing tires in the parking lot of a predominantly Black church. The man was later arrested on the basis of the photos.
Reuters notes that it wasn’t ultimately “able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was.”
With all this in mind, you might as well always assume that your Tesla is watching, right? And, now that Reuters’ story has come out, you should also probably assume that some bored coder is also watching—potentially in the hopes of converting your dopiest in-car moment into a meme.
- Private camera recordings, captured by cars, were shared in chat rooms: ex-workers
- Circulated clips included one of child being hit by car: ex-employees
- Tesla says recordings made by vehicle cameras ‘remain anonymous’
- One video showed submersible vehicle from James Bond film, owned by Elon Musk
LONDON/SAN FRANCISCO, April 6 (Reuters) – Tesla Inc assures its millions of electric car owners that their privacy “is and will always be enormously important to us.” The cameras it builds into vehicles to assist driving, it notes on its website, are “designed from the ground up to protect your privacy.”
But between 2019 and 2022, groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras, according to interviews by Reuters with nine former employees.
Some of the recordings caught Tesla customers in embarrassing situations. One ex-employee described a video of a man approaching a vehicle completely naked.
Also shared: crashes and road-rage incidents. One crash video in 2021 showed a Tesla driving at high speed in a residential area hitting a child riding a bike, according to another ex-employee. The child flew in one direction, the bike in another. The video spread around a Tesla office in San Mateo, California, via private one-on-one chats, “like wildfire,” the ex-employee said.
Other images were more mundane, such as pictures of dogs and funny road signs that employees made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats. While some postings were only shared between two employees, others could be seen by scores of them, according to several ex-employees.
Tesla states in its online “Customer Privacy Notice” that its “camera recordings remain anonymous and are not linked to you or your vehicle.” But seven former employees told Reuters the computer program they used at work could show the location of recordings – which potentially could reveal where a Tesla owner lived.
One ex-employee also said that some recordings appeared to have been made when cars were parked and turned off. Several years ago, Tesla would receive video recordings from its vehicles even when they were off, if owners gave consent. It has since stopped doing so.
“We could see inside people’s garages and their private properties,” said another former employee. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”
Tesla didn’t respond to detailed questions sent to the company for this report.
About three years ago, some employees stumbled upon and shared a video of a unique submersible vehicle parked inside a garage, according to two people who viewed it. Nicknamed “Wet Nellie,” the white Lotus Esprit sub had been featured in the 1977 James Bond film, “The Spy Who Loved Me.”
The vehicle’s owner: Tesla Chief Executive Elon Musk, who had bought it for about $968,000 at an auction in 2013. It is not clear whether Musk was aware of the video or that it had been shared.
To report this story, Reuters contacted more than 300 former Tesla employees who had worked at the company over the past nine years and were involved in developing its self-driving system. More than a dozen agreed to answer questions, all speaking on condition of anonymity.
Reuters wasn’t able to obtain any of the shared videos or images, which ex-employees said they hadn’t kept. The news agency also wasn’t able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was. Some former employees contacted said the only sharing they observed was for legitimate work purposes, such as seeking assistance from colleagues or supervisors.
In a future fight, control of advanced drones belonging to the U.S. Navy and U.S. Air Force could be passed back and forth between assets from either service as the situation demands. Uncrewed platforms are set to make up the majority of the Navy’s future carrier air wings, with up to 60 percent of all aircraft on each flattop eventually being pilotless.
Navy Rear Adm. Andrew “Bucket” Loiselle provided details on the service’s advanced aviation plans, including new drones and sixth-generation crewed stealth combat jets, and cooperation with the Air Force on these efforts during a panel discussion yesterday at the Navy League’s annual Sea-Air-Space conference and exhibition. These efforts are part of the service’s broader Next Generation Air Dominance (NGAD) program that you can learn about here. Loiselle is currently the director of the Air Warfare Division, also referred to as N98, within the Office of the Chief of Naval Operations.
[…]
“As we looked upon that air wing of the future, we have numerous unmanned systems,” Loiselle said. “You’ve heard talk about CCAs [and] MQ-25.”
CCA stands for Collaborative Combat Aircraft and is a term that originated with the Air Force to describe future advanced drones with high degrees of autonomy intended to operate collaboratively with crewed platforms. Secretary of the Air Force Frank Kendall announced earlier this year that the service had begun doing future planning around a fleet of at least 1,000 CCAs, as well as 200 crewed sixth-generation stealth combat jets, all being developed as part of its own separate multi-faceted NGAD program. The CCA figure was based on a notional concept of operations that would pair two of the drones with each of the 200 NGAD combat jets and 300 stealthy F-35A Joint Strike Fighters.
However, the Air Force is still very much refining its CCA fleet structure plans, which could grow to include an even larger total number of CCAs with different types geared toward different mission sets. It’s also still figuring out how it intends to deploy and employ them. The Navy appears to be doing much the same, in increasingly close coordination with the Air Force.
“We’re developing an unmanned control station that’s already installed on three aircraft carriers, and that will be the control station for any UAS [uncrewed aerial systems] that we buy,” Rear Adm. Loiselle added. “[There is] unbelievable cooperation with the Air Force right now in the development of mission systems for both sixth-gen [combat jets] and CCAs… I’m very close to getting a signed agreement with the Air Force where we’re going to have the ability for the Navy to control Air Force CCAs and the Air Force to control Navy CCAs.”
The Navy has previously said that the MQ-25 would be deployed first on the Nimitz class carriers USS Dwight D. Eisenhower and USS George H.W. Bush, and the latter ship has been actively used for testing that drone. It was announced last year that the plans had changed and that USS Theodore Roosevelt, another Nimitz class ship, would be the first to host the Stingray.
The expectation is that future CCAs will also be able to be controlled by various aircraft in the course of operations. The Navy has specifically said in the past that one of the core missions for its future sixth-generation crewed combat jet, also referred to as F/A-XX, will be acting as a “quarterback” for drones.
For the Navy and the Air Force, being able to readily exchange control of future drones will be key to ensuring operational flexibility. During the panel discussion yesterday, Rear Adm. Loiselle outlined a broader future naval vision where this capability could be particularly valuable.
[…]
“The bottom line is when we’re building our future force that’s going to be 60 percent unmanned, then we’re going to look different than we do today. And we are no longer going to have a fighting force that has 44 strike fighters on the deck, because that’s incompatible with a 60 percent unmanned air wing,” the rear admiral explained. “So we’re going to have to change the narrative, from 44 strike fighters to how many targets can I get at what range at what time intervals, because that’s the true metric that matters.”
“The type of platform that delivers that ordnance is less important than the ability to do so,” he continued. “So we need to look at the entire portfolio that is present within the carrier strike group and how we generate that effect. Equally, we need to be cognizant of what’s available in the joint force, such that we don’t duplicate capabilities that would work within our part of that plan execution.”
[…]
With all this in mind, carrier strike groups, as well as potentially other naval assets, being able to readily take control of Air Force drones during operations in certain circumstances, and vice versa, could be extremely useful. A Navy carrier air wing or Air Force elements in the same region might be able to provide more on-demand escorts or other support for each other’s crewed platforms, including tactical combat jets and larger aircraft like bombers, tankers, and airlifters. Current and future Air Force assets capable of flying very long distances themselves, such as the forthcoming B-21 Raider stealth bomber, could even take control of Navy uncrewed aircraft using more localized line-of-sight links to help with their immediate missions, too.
For instance, long-range Air Force platforms like the B-21 could ‘pick up’ CCAs launched from a carrier operating far forward of any land base. They would then fly their mission into contested airspace with the help of their unmanned wingmen, then return them to Navy control once they head back out of the high-threat area and toward the carrier’s area of operation. Unmanned tactical aircraft have a significant range advantage over their manned counterparts, which is a factor as well.
Beyond this, just being able to hand off drone fleets between the services while in the air opens up huge possibilities and operational synergies.
Australian engineering company SYPAQ has created a cardboard drone that runs on open source software and standard hardware, and can be assembled and flown with no prior experience.
The Corvo Precision Payload Delivery System (PPDS) costs less than $3,500 apiece, a price made possible by the craft’s use of FOSS and commercial-off-the-shelf hardware.
Michael Partridge, SYPAQ’s general manager for Innovation & Strategic Programs (I&SP), told The Register that Corvo uses ArduPilot autopilot software, unspecified hardware that SYPAQ customizes, and waxed cardboard.
The drone takes around an hour to assemble, we’re told, and its lithium-ion batteries give it a range of up to 100km (62 miles) with a 3kg (6.6lb) payload.
The craft ships in a flat pack complete with tape, glue, and instructions on how to assemble it. A tablet computer is also included so users can tell Corvo where to fly by entering GPS coordinates. A wired connection to upload that flight plan is required, but once Corvo is aloft, it will proceed along its route, at a specified altitude, and land itself at its determined destination.
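SYPAQ hasn’t said how the tablet encodes those flight plans, but since Corvo runs ArduPilot, the plain-text “QGC WPL 110” waypoint format that ArduPilot ground stations consume gives a feel for what such a plan contains. Treating that format as the upload mechanism is an assumption, and the coordinates and altitudes below are arbitrary:

```python
# Sketch of an ArduPilot-style mission file ("QGC WPL 110" format).
# Columns: index, current, frame, command, p1-p4, lat, lon, alt, autocontinue.
NAV_WAYPOINT, NAV_LAND, NAV_TAKEOFF = 16, 21, 22
FRAME_GLOBAL, FRAME_REL_ALT = 0, 3       # absolute vs. relative-to-home altitude

mission = [
    (0, 1, FRAME_GLOBAL,  NAV_WAYPOINT, 0, 0, 0, 0, -35.3632, 149.1652,   0, 1),  # home
    (1, 0, FRAME_REL_ALT, NAV_TAKEOFF,  0, 0, 0, 0, -35.3632, 149.1652, 100, 1),
    (2, 0, FRAME_REL_ALT, NAV_WAYPOINT, 0, 0, 0, 0, -35.2000, 149.3000, 120, 1),
    (3, 0, FRAME_REL_ALT, NAV_LAND,     0, 0, 0, 0, -35.1990, 149.3010,   0, 1),
]

with open("corvo_mission.waypoints", "w") as f:
    f.write("QGC WPL 110\n")
    for row in mission:
        f.write("\t".join(str(v) for v in row) + "\n")
```

Once a plan like this is loaded, the autopilot flies the route and lands at the final waypoint with no further operator input, which matches the fire-and-forget behavior described above.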
Partridge declined to discuss details of the tech on board the drones for operational reasons but said SYPAQ has ensured that flight plans are encrypted so that if a Corvo is captured, the location of its pilots can’t be retrieved.
SYPAQ will happily ship a single Corvo, but also offers a “capability pack” that includes multiple craft, spares, and the slingshot-powered launch ramp the craft needs to get airborne.
Partridge said single Corvo units have survived more than 20 flights and that the waxed cardboard wing can handle moisture well, without losing its aerodynamic qualities.
Users in the Ukrainian armed forces have adapted the craft to different roles too. Partridge said adding a camera requires some light hacking – of the drone’s cardboard airframe.
“It has a cargo bay [and] you can do whatever you want in there within the 3kg payload. You can cut a hole through the aircraft to look through it and insert a camera.”
For now, SYPAQ hasn’t given Corvo’s onboard computer wireless capabilities, partly to reduce cost and partly to ensure stealth. But Partridge said Corvos have carried action cameras like the GoPro and users are happy to retrieve removable media once the plane lands. SYPAQ is working on payloads that allow wireless transmission of images, possibly over long distances.