The Linkielist

Linking ideas with the world

Watch SpaceX’s Starship SN4 prototype accidentally self-destruct in a rocket test burn – just before the Falcon launches people at the ISS

In yet another setback for Elon Musk’s beloved steel spaceship, a SpaceX Starship prototype has exploded on the pad during a rocket test burn.

Starship SN4 – designed to ferry astronauts to the Moon and Mars – was undergoing a static engine fire test on Friday when, in scientific terminology, it blew the hell up. Footage of the explosive experiment, captured by news site NASASpaceFlight and embedded below, appears to show the prototype’s rocket venting fuel, or some other material, shortly before a “major anomaly” occurred in which the craft rapidly scattered itself across the Boca Chica testing facility in Texas.

Fortunately, nobody appears to have been hurt, other than some feelings at SpaceX. And possibly those of the US Federal Aviation Administration, which this week reportedly granted permission for suborbital Starship test flights.

“It looked to me like the lower tank began leaking after a static fire, probably caused a loss of pressure that then resulted in the tank being crushed by the mass in the upper tank,” surmised Scott Manley, an Apple software engineer and amateur astronomer.

“The failure of both tanks led to ignition and RUD.” That’s rocket-speak for Rapid Unscheduled Disassembly, a term Elon Musk has used in a Pythonesque way before.

Nevertheless, the explosion was large enough to be picked up by the radar systems of local weather stations in the US state.

SpaceX could not immediately be reached for comment on the destruction of its prototype. Founder Elon Musk also offered no statement; both have opted instead to focus on this weekend’s manned Dragon launch.

It should be emphasized that the SN4 prototype that detonated on Friday is not the same as the SpaceX Dragon crew capsule and Falcon rocket set to take off from Florida on Saturday with American ‘nauts onboard heading to the International Space Station. That combo, the Falcon 9 rocket and its Dragon capsule, has proven itself over several flights to be significantly less prone to fits of rapid unscheduled disassembly.

The crewed Dragon pod was slated to take off from Cape Canaveral on May 27, but the launch was scrubbed due to bad weather. Saturday’s attempt also faces the possibility of being called off by the weather.

The Starship craft, by comparison, is still in its prototype phase and undergoing early tests. These sorts of blow-ups are not unheard of for craft this early in development: new rockets are hard to get right at first, as you can see below.

The Falcon-lifted Dragon spacecraft are meant to be SpaceX’s commercial bread and butter, delivering crew to the orbiting space station using American-owned and launched rockets without having to go cap in hand to the Russians. Boeing, too, is taking a shot at the market, although its project is behind schedule.

Starship is seen as a more ambitious long-term effort to create a vessel capable of not only heavy lifts, but also flights to the Moon and Mars. When you’re dealing with rocketry on that scale, mistakes are going to happen – that’s why you make them on the ground first.

Still, this is not something SpaceX will want to see less than 24 hours before the most significant launch in its history is scheduled to lift off. Test or not, a rocket exploding on the pad is a bad look.

As for Saturday’s mission – an all-American crew in an all-American craft lifting off from American soil for the first time since 2011 – the forecast is changeable, so another scrub is possible. We’ll be watching and keeping you updated as it happens.

Source: Watch SpaceX’s Starship SN4 prototype accidentally self-destruct in a rocket test burn • The Register

I would not have felt too happy sitting on that manned flight

After 30 years of searching, astroboffins finally detect the universe’s ‘missing matter’ – using fast radio bursts

Astronomers have finally found hard-to-detect visible matter scattered across space, left over from the Big Bang, after searching for nearly thirty years, according to a study published in Nature.

“We know from measurements of the Big Bang how much matter there was in the beginning of the Universe,” said Jean-Pierre Macquart, lead author of the paper and an associate professor at Curtin University, Australia, this week. “But when we looked out into the present universe, we couldn’t find half of what should be there. It was a bit of an embarrassment.”

Macquart isn’t referring to dark energy or dark matter. Instead, the study deals with baryonic matter, which is normal stuff made of protons and neutrons. The computer or device you’re using right now to read this is made up of it. This matter should be out there in space, too, lingering between the galaxies and stars, but it was missing – or rather, boffins couldn’t find it. The material is spread incredibly thinly across the void, making it difficult to detect.

But now scientists have managed to find some of that missing matter by inspecting fast radio bursts – powerful radio waves emitted over just a few milliseconds. By following the line of sight of each blast, they were able to determine the electron column density and so count the ionised baryons the signal passed through.

“The radiation from fast radio bursts gets spread out by the missing matter in the same way that you see the colours of sunlight being separated in a prism,” Macquart said. “We’ve now been able to measure the distances to enough fast radio bursts to determine the density of the universe. We only needed six to find this missing matter.”
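
For readers who want to see the prism analogy in numbers: the standard radio-astronomy dispersion relation says the arrival-time delay between the top and bottom of the observing band grows linearly with the dispersion measure (DM), the column density of free electrons along the line of sight. Below is a minimal sketch, not taken from the paper; the DM value and band edges are illustrative only.

K_DM = 4.149e3  # dispersion constant, MHz^2 pc^-1 cm^3 s

def dispersion_delay(dm_pc_cm3, f_lo_mhz, f_hi_mhz):
    """Arrival-time delay (seconds) between the top and bottom of the observing band."""
    return K_DM * dm_pc_cm3 * (f_lo_mhz ** -2 - f_hi_mhz ** -2)

# Illustrative numbers: a burst with DM = 500 pc cm^-3 observed between
# 1100 and 1400 MHz arrives roughly 0.66 s later at the bottom of the band.
print(f"{dispersion_delay(500, 1100, 1400):.2f} s")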

The density of missing matter they found was tiny: equivalent to “about one or two atoms in a room the size of an average office.” The measurement allows the academics to estimate the amount of missing matter in the universe.

The fast radio bursts were observed using the Australian Square Kilometre Array Pathfinder (ASKAP) telescope array at the Murchison Radio-astronomy Observatory located in Western Australia. “ASKAP both has a wide field of view, about 60 times the size of the full Moon, and can image in high resolution,” said Ryan Shannon, co-author of the paper and an associate professor at Swinburne University of Technology.

“This enables the precision to determine the location of the fast radio burst to the width of a human hair held 200m away,” he concluded.

Source: After 30 years of searching, astroboffins finally detect the universe’s ‘missing matter’ – using fast radio bursts • The Register

Asteroid, climate change not responsible for mass extinction 215 million years ago

A team of University of Rhode Island scientists and statisticians conducted a sophisticated quantitative analysis of a mass extinction that occurred 215 million years ago and found that the cause of the extinction was not an asteroid or climate change, as had previously been believed. Instead, the scientists concluded that the extinction did not occur suddenly or simultaneously, suggesting that the disappearance of a wide variety of species was not linked to any single catastrophic event.

Their research, based on paleontological field work carried out in sediments 227 to 205 million years old in Petrified Forest National Park, Arizona, was published in April in the journal Geology.

According to David Fastovsky, the URI professor of geosciences whose graduate student, Reilly Hayes, led the study, the global extinction of ancient Late Triassic vertebrates—the disappearance of which scientists call the Adamanian/Revueltian turnover—had never previously been reconstructed satisfactorily. Some researchers believed the extinction was triggered by the Manicouagan Impact, an asteroid impact that occurred in Quebec 215.5 million years ago, leaving a distinctive 750-square-mile lake. Others speculated that the extinction was linked to a hotter, drier climate that developed at about the same time.

“Previous hypotheses seemed very nebulous, because nobody had ever approached this problem—or any ancient mass extinction problem—in the quantitative way that we did,” Fastovsky said. “In the end, we concluded that neither the asteroid impact nor the climate change had anything to do with the extinction, and that the extinction was certainly not as it had been described—abrupt and synchronous. In fact, it was diachronous and drawn-out.”

The Adamanian/Revueltian turnover was the perfect candidate for applying the quantitative methods employed by the research team, Fastovsky said. Because the fossil-rich layers at Petrified Forest National Park preserve a diversity of vertebrates from the period, including crocodile-like phytosaurs, armored aetosaurs, early dinosaurs, large crocodile-like amphibians, and other land-dwelling vertebrates, Hayes relocated the sites where known fossils were discovered and precisely determined their age by their position in the rock sequence. He was assisted by URI geosciences majors Amanda Bednarick and Catherine Tiley.

Hayes and URI Statistics Professor Gavino Puggioni then applied several Bayesian statistical algorithms to create “a probabilistic estimate” of when the animals most likely went extinct. This method allowed for an unusually precise assessment of the likelihood that the Adamanian vertebrates in the ancient ecosystem went extinct dramatically and synchronously, as would be expected with an asteroid impact.
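
The paper’s algorithms are more elaborate than this, but the underlying idea can be sketched in a few lines: treat dated fossil occurrences as samples from a taxon’s true stratigraphic range and compute a posterior over the unknown extinction age. The toy model below assumes uniform fossil recovery and uses made-up ages; it illustrates the general approach, not the URI team’s actual code.

import numpy as np

occurrences = np.array([221.4, 219.2, 217.8, 216.5, 215.1])  # fossil ages in Ma (hypothetical)
t_orig = occurrences.max()    # oldest find, taken as the top of the observed range
t_young = occurrences.min()   # extinction cannot pre-date the youngest find
n = len(occurrences)

theta = np.linspace(205.0, t_young, 2000)    # candidate extinction ages (Ma)
log_like = -n * np.log(t_orig - theta)       # uniform-recovery likelihood
posterior = np.exp(log_like - log_like.max())
posterior /= np.trapz(posterior, theta)

# one-sided 95% credible bound: 95% of the posterior mass lies between this age and t_young
cdf = np.cumsum(posterior) * (theta[1] - theta[0])
lower_95 = theta[np.searchsorted(cdf, 0.05)]
print(f"extinction most plausibly between {lower_95:.1f} and {t_young:.1f} Ma")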

Previous research concluded that the asteroid impact occurred 215.5 million years ago and the climate change some 3 to 5 million years later. The URI researchers demonstrated that the extinctions happened over an extended period between 222 million years ago and 212 million years ago. Some species of the armored archosaurs Typothorax and Paratypothorax, for instance, went extinct about 6 million years before the impact and 10 million years before the climate change, while those of Acaenasuchus, Trilophosaurus and Calyptosuchus went extinct 2 to 3 million years before the impact. Desmatosuchus and Smilosuchus species, on the other hand, went extinct 2 to 3 million years after the impact and during the very early stages of the climate change.

“It was a long-lasting suite of extinctions that didn’t really occur at the same time as the impact or the climate change or anything else,” Fastovsky said. “No known instantaneous event occurred at the same time as the extinctions and thus might have caused them.”

The URI professor believes it will be difficult to apply these quantitative methods to other mass extinctions because equally rich fossil data and precise radiometric dates aren’t available at other sites and for other time periods.

“This was like a test case, a perfect system for applying these techniques because you had to have enough fossils and sufficiently numerous and precise dates for them,” he said. “Other extinctions could potentially be studied in a similar way, but logistically it’s a tall mountain to climb. It’s possible there could be other ways to get at it, but it’s very time consuming and difficult.”

Journal information: Geology

Source: Asteroid, climate change not responsible for mass extinction 215 million years ago

Photostopped: Adobe Cloud evaporates in mass outage. Hope none of you are on a deadline, eh? – yay cloud!

Adobe technicians scrambled on Wednesday to restore multiple cloud services after a severe outage left customers stranded.

Starting around 0600 PDT (1300 UTC), Adobe’s status board began lighting up with red outage notifications. At the time this article was written, 13 major issues were ongoing and five had been resolved. By issues, Adobe means people can’t use some of its stuff in the cloud or access their documents.

Creative Cloud reported eight major issues in progress, Experience Cloud had two, and Adobe Services had four.

Adobe Stock, Cloud Documents, Team Projects, Premiere Rush, Creative Cloud Assets, Collaboration, Publish Services, Adobe Admin Console, Spark, Lightroom, Account Management, and Sign In were all having trouble in the Americas region.

So too were Adobe Analytics, Experience Manager, Social, Target, Audience Manager, Cross-Cloud Capabilities, Campaign, Platform Core Services, Data Science Workspace, Experience Cloud Home, Data Foundation, Query Service, and Journey Orchestration.

Adobe didn’t respond to a request for more information about the problems. We note its status board says not all customers are necessarily affected by the IT breakdown.

Via Twitter, the Photoshop giant’s support account said an inquiry into the outage is underway. “Our teams are investigating the issue and working to get this resolved ASAP,” the company said.

The Adobe status board at the time of writing

Predictably, customers who recall when Adobe software ran locally lamented their dependence on Adobe’s cloud.

“Adobe’s servers are currently down,” wrote Element Animation on Twitter. “If you pay for any of their software, you can’t use it right now. Remember when we used to own our own software?”

Source: Photostopped: Adobe Cloud evaporates in mass outage. Hope none of you are on a deadline, eh? • The Register

Live analytics without vendor lock-in? It’s more likely than you think, says Redis Labs

In February, Oracle slung out a data science platform that integrated real-time analytics with its databases. That’s all well and good if developers are OK with the stack having a distinctly Big Red hue, but maybe they want choice.

This week, Redis Labs came up with something for users looking for help with the performance of real-time analytics – of the kind used for fraud detection or stopping IoT-monitored engineering going kaput – without necessarily locking them into a single database, cloud platform or application vendor.

Redis Labs, which backs the open-source in-memory Redis database, has built what it calls an “AI serving platform” in collaboration with AI specialist Tensorwerk.

RedisAI handles model deployment, inferencing, and performance monitoring within the database itself, bringing analytics closer to the data and improving performance, according to Redis Labs.
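
In practice that pattern looks something like the sketch below: transaction features are written into Redis as a tensor, a model already loaded into the database is run against them, and the score is read back, without shipping the features to a separate inference service. The command names follow the RedisAI module as of 2020 (AI.TENSORSET, AI.MODELRUN, AI.TENSORGET); the key names and features are invented, and the exact syntax should be checked against current documentation.

import redis

r = redis.Redis(host="localhost", port=6379)

# A fraud-scoring model is assumed to have been loaded once via AI.MODELSET;
# here we only score fresh transaction features against it.
features = [103.5, 1.0, 0.0, 42.0]  # hypothetical transaction features
r.execute_command("AI.TENSORSET", "tx:123:in", "FLOAT", 1, 4, "VALUES", *features)
r.execute_command("AI.MODELRUN", "fraud_model", "INPUTS", "tx:123:in", "OUTPUTS", "tx:123:out")
score = r.execute_command("AI.TENSORGET", "tx:123:out", "VALUES")
print("fraud score:", score)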

Bryan Betts, principal analyst with Freeform Dynamics, told us the product was aimed at a class of AI apps where you need to constantly monitor and retrain the AI engine as it works.

“Normally you have both a compute server and a database at the back end, with training data moving to and fro between them,” he said. “What Redis and Tensorwerk have done is to build the AI computation ability that you need to do the retraining right into the database. This should cut out a stack of latency – at least for those applications that fit its profile, which won’t be all of them.”

Betts said other databases might do the same, but developers would have to commit to specific AI technology. To accept that lock-in, they would need to be convinced the performance advantages outweigh the loss of the flexibility to choose the “best” AI engine and database separately.

IDC senior research analyst Jack Vernon told us the Redis approach was similar to that of Oracle’s data science platform, where the models sit and run in the database.

“On Oracle’s side, though, that seems to be tied to their cloud,” he said. “That could be the real differentiating thing here: it seems like you can run Redis however you like. You’re not going to be tied to a particular cloud infrastructure provider, unlike a lot of the other AI data science platforms out there.”

SAP, too, offers real-time analytics on its in-memory HANA database, but users can expect to be wedded to its technologies, which include the Leonardo analytics platform.

Redis Labs said the AI serving platform would give developers the freedom to choose their own AI back end, including PyTorch and TensorFlow. It works in combination with RedisGears, a serverless programmable engine that supports transaction, batch, and event-driven operations as a single data service and integrates with application databases such as Oracle, MySQL, SQLServer, Snowflake or Cassandra.

Yiftach Shoolman, founder and CTO at Redis Labs, said that while researchers worked on improving the chipset to boost AI performance, this was not necessarily the source of the bottleneck.

“We found that in many cases, it takes longer to collect the data and process it before you feed it to your AI engine than the inferencing itself takes. Even if you improve your inferencing engine by an order of magnitude because there is a new chipset on the market, it doesn’t really affect the end-to-end inferencing time.”

Analyst firm Gartner sees increasing interest in AI ops environments over the next four years to improve the production phase of the process. In the paper “Predicts 2020: Artificial Intelligence Core Technologies”, it says: “Getting AI into production requires IT leaders to complement DataOps and ModelOps with infrastructures that enable end-users to embed trained models into streaming-data infrastructures to deliver continuous near-real-time predictions.”

Vendors across the board are in an arms race to help users “industrialise” AI and machine learning – that is, to take it from a predictive model that tells you something really “cool” to something that is reliable, quick, cheap and easy to deploy. Google, AWS and Azure are all in the race, along with smaller vendors such as H2O.ai and established behemoths like IBM.

While big banks like Citi are already some way down the road, vendors are gearing up to support the rest of the pack. Users should question who they want to be wedded to, and what the alternatives are.

Source: Live analytics without vendor lock-in? It’s more likely than you think, says Redis Labs • The Register

Qatar’s contact tracing app put over one million people’s info at risk

Contact tracing apps have the potential to slow the spread of COVID-19. But without proper security safeguards, some fear they could put users’ data and sensitive info at risk. Until now, that threat has been theoretical. Today, Amnesty International reports that a flaw in Qatar’s contact tracing app put the personal information of more than one million people at risk.

The flaw, now fixed, made info like names, national IDs, health status and location data vulnerable to cyberattacks. Amnesty’s Security Lab discovered the flaw on May 21st and says authorities fixed it on May 22nd. The vulnerability had to do with QR codes that included sensitive info. The update stripped some of that data from the QR codes and added a new layer of authentication to prevent foul play.

Qatar’s app, called EHTERAZ, uses GPS and Bluetooth to track COVID-19 cases, and last week, authorities made it mandatory. According to Amnesty, people who don’t use the app could face up to three years in prison and a fine of QR 200,000 (about $55,000).

“This incident should act as a warning to governments around the world rushing out contact tracing apps that are too often poorly designed and lack privacy safeguards. If technology is to play an effective role in tackling the virus, people need to have confidence that contact tracing apps will protect their privacy and other human rights,” said Claudio Guarnieri, head of Amnesty International’s Security Lab.

Source: Qatar’s contact tracing app put over one million people’s info at risk | Engadget

Samsung launches standalone mobile security chip

Samsung will launch a new standalone turnkey security chip to protect mobile devices, the company announced today.

The chip, which has the said-once-never-forgotten name “S3FV9RR” – aka the Mobile SE Guardian 4 – is a follow-up to the dedicated security silicon baked into the Galaxy S20 smartphone series launched in February 2020.

The new chip is Common Criteria Assurance Level 6+ certified, the highest certification that a mobile component has received, according to Samsung. CC EAL 6+ is used in e-passports and hardware wallets for cryptocurrency.

It has twice the storage capacity of the first-gen chip and supports device authorisation, hardware-based root of trust, and secure boot features. When the bootloader starts, the chip initiates a chain-of-trust sequence to validate each component’s firmware. The chip can also work independently of the device’s main processor to ensure tighter security.
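
Samsung hasn’t published the chip’s internals, but the chain-of-trust idea itself is simple: each boot stage checks a trusted fingerprint of the next stage’s firmware and refuses to hand over control on a mismatch. The sketch below is a deliberately simplified, hypothetical illustration using plain hashes; a real secure element verifies cryptographic signatures against keys fused into the hardware.

import hashlib

def fingerprint(blob):
    return hashlib.sha256(blob).hexdigest()

# Known-good firmware images whose fingerprints are assumed to be provisioned
# into the secure element at manufacture (contents here are placeholders).
known_good = {
    "bootloader": b"bootloader firmware bytes",
    "kernel": b"kernel firmware bytes",
}
trusted = {name: fingerprint(blob) for name, blob in known_good.items()}

def verify_chain(stages):
    """Return True only if every stage matches its trusted fingerprint, in order."""
    for name, blob in stages.items():
        if fingerprint(blob) != trusted.get(name):
            print(f"chain of trust broken at {name}; refusing to boot")
            return False
    return True

# A tampered kernel image fails verification and the boot halts.
print(verify_chain({"bootloader": known_good["bootloader"], "kernel": b"tampered"}))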

“In this era of mobility and contactless interactions, we expect our connected devices, such as smart phones or tablets, to be highly secure so as to protect personal data and enable fintech activities such as mobile banking, stock trading and cryptocurrency transactions,” said Dongho Shin, senior vice president of marketing at Samsung System LSI, which makes logic chips for the South Korean conglomerate.

“With the new standalone security element solution (S3FV9RR), Samsung is mounting a powerful deadbolt on smart devices to safeguard private information.” Which should be handy for all manner of devices – perhaps even Internet of things devices.

Source: Galaxy S20 security is already old hat as Samsung launches new safety silicon • The Register

Ex-Green Beret arrested in Carlos Ghosn case is no stranger to danger

This Dec. 30, 2019, image from security camera video shows Michael L. Taylor, center, and George-Antoine Zayek at passport control at Istanbul Airport in Turkey. Taylor, a former Green Beret, and his son, Peter Taylor, 27, were arrested Wednesday in Massachusetts on charges they smuggled Nissan ex-Chairman Carlos Ghosn out of Japan in a box in December 2019, while he awaited trial there on financial misconduct charges. / AP

Decades before a security camera caught Michael Taylor coming off a jet that was carrying one of the world’s most-wanted fugitives, the former Green Beret had a hard-earned reputation for taking on dicey assignments.

Over the years, Taylor had been hired by parents to rescue abducted children. He went undercover for the FBI to sting a Massachusetts drug gang. And he worked as a military contractor in Iraq and Afghanistan, an assignment that landed him in a Utah jail in a federal fraud case.

So when Taylor was linked to the December escape of former Nissan CEO Carlos Ghosn from Japan, where the executive awaited trial on financial misconduct charges, some in U.S. military and legal circles immediately recognized the name.

Taylor has “gotten himself involved in situations that most people would never even think of, dangerous situations, but for all the right reasons,” Paul Kelly, a former federal prosecutor in Boston who has known the security consultant since the early 1990s, said earlier this year.

“Was I surprised when I read the story that he may have been involved in what took place in Japan? No, not at all.”

On Wednesday, after months as fugitives, Taylor, 59, and his son, Peter, 27, were arrested in Massachusetts on charges accusing them of hiding Ghosn in a shipping case drilled with air holes and smuggling him out of Japan on a chartered jet. Investigators were still seeking George-Antoine Zayek, a Lebanese-born colleague of Taylor.

Kelly, now serving as the attorney for the Taylors, said they plan to challenge Japan’s extradition request “on several legal and factual grounds.”

“Michael Taylor is a distinguished veteran and patriot, and both he and his son deserve a full and fair hearing regarding these issues,” Kelly said in an email.

Some of those who know Taylor say he is a character of questionable judgment, with a history of legal troubles dating back well before the Utah case. But others praise him as a patriot, mentor and devoted family man, who regularly put himself at risk for his clients, including some with little ability to pay.

“He is the most all-American man I know,” Taylor’s assistant, Barbara Auterio, wrote to a federal judge before his sentencing in 2015. “His favorite song is the national anthem.”

In 1993, a Massachusetts state trooper investigated Taylor for drug running and sued his supervisor after being told to stop scrutinizing the prized FBI informant. In 1998, Taylor was granted immunity in exchange for testifying against a Teamsters official accused of extortion. In 1999, he pleaded guilty to planting marijuana in the car of a client’s estranged wife, leading to her arrest, according to a 2001 report in the Boston Herald.

Taylor also made headlines in 2011 when he resigned as football coach at a Massachusetts prep school, Lawrence Academy, which was stripped of two titles. Taylor was accused of inappropriate donations, including covering tuition for members of a team that included seven Division I recruits.

“It wasn’t pleasant what he was yelling at us across the field. He was calling us out for not being man enough to kick the ball,” said John Mackay, who opposed Taylor as coach of St. George’s School in Rhode Island. “His zeal, probably like he does everything in life, is to the Nth degree.”

The security business that Taylor and a partner set up decades ago was initially focused on private investigations but their caseload grew through corporate work and unofficial referrals from the State Department and FBI, including parents whose children had been taken overseas by former spouses.

“Michael Taylor was the only person in this great country that was able to help me, and he did,” a California woman whose son was taken to Beirut, wrote to the sentencing judge in the Utah military contracting case. “Michael Taylor brought my son back.”

In 2012, federal prosecutors alleged that Taylor won a U.S. military contract to train Afghan soldiers by using secret information passed along from an American officer. The prosecutors said that when Taylor learned the contract was being investigated, he asked an FBI agent and friend to intervene.

The government seized $5 million from the bank account of Taylor’s company and he spent 14 months in jail before agreeing to plead guilty to two counts. The government agreed to return $2 million to the company as well as confiscated vehicles.

The plot to free Ghosn apparently began last fall, when operatives began scouting Japanese terminals reserved for private jets. Tokyo has two airports within easy reach of Ghosn’s home. But the group settled on the private terminal at Osaka’s Kansai International Airport, where machines used to X-ray baggage could not accommodate large boxes.

On the day of the escape, Michael Taylor and Zayek flew into Japan on a chartered jet with two large black boxes, claiming to be musicians carrying audio equipment, according to court papers.

Around 2:30 that afternoon, Ghosn, free on hefty bail, left his house on a leafy street in Tokyo’s Roppongi neighborhood and walked to the nearby Grand Hyatt Hotel, going to a room there and departing two hours later to board a bullet train for Osaka.

That evening, his rescuers wheeled shipping boxes through the Osaka private jet terminal known as Premium Gate Tamayura — “fleeting moment” in Japanese. Terminal employees let the men pass without inspecting their cargo.

At 11:10 p.m., the chartered Bombardier, its windows fitted with pleated shades, lifted off. The flight went first to Turkey, then to Lebanon, where Ghosn has citizenship, but which has no extradition treaty with Japan.

“I didn’t run from justice,” Ghosn told reporters after he resurfaced. “I left Japan because I wanted justice.”

Source: Ex-Green Beret arrested in Carlos Ghosn case has done dangerous work | Autoblog

Sir Richard Branson: Virgin Orbit rocket launch from 747 fails on debut flight

The booster was released from under the wing of one of the UK entrepreneur’s old jumbo jets, which had been specially converted for the task.

The rocket ignited its engine seconds later but an anomaly meant the flight was terminated early.

Virgin Orbit’s goal is to try to capture a share of the emerging market for the launch of small satellites.

It’s not clear at this stage precisely what went wrong but the firm had warned beforehand that the chances of success might be only 50:50.

The history of rocketry shows that maiden outings very often encounter technical problems.

“Test flights are instrumented to yield data and we now have a treasure trove of that. We accomplished many of the goals we set for ourselves, though not as many as we would have liked,” said Virgin Orbit CEO Dan Hart.

“Nevertheless, we took a big step forward today. Our engineers are already poring through the data. Our next rocket is waiting. We will learn, adjust, and begin preparing for our next test, which is coming up soon.”

Source: Sir Richard Branson: Virgin Orbit rocket fails on debut flight – BBC News

Create Deepfakes in 5 Minutes with First Order Model Method

Let’s explore a bit how this method works. The whole process is separated into two parts: motion extraction and generation. The source image and the driving video are used as inputs. The motion extractor uses an autoencoder to detect keypoints and extracts a first-order motion representation consisting of sparse keypoints and local affine transformations. These, along with the driving video, are used by the dense motion network to generate a dense optical flow and an occlusion map. The outputs of the dense motion network and the source image are then used by the generator to render the target image.

First Order Model Approach

This work outperforms the state of the art on all benchmarks. Apart from that, it has features that other models simply don’t have. The really cool thing is that it works on different categories of images, meaning you can apply it to a face, a body, a cartoon, and so on. This opens up a lot of possibilities. Another revolutionary thing about this approach is that you can now create good-quality deepfakes from a single image of the target object, just as we use YOLO for object detection.

Keypoints Detection

If you want to find out more about this method, check out the paper and the code. Also, you can watch the following video:

Building your own Deepfake

As mentioned, we can use already-trained models with our own source image and driving video to generate deepfakes. You can do so by following this Colab notebook.

In essence, what you need to do is clone the repository and mount your Google Drive. Once that is done, upload your image and driving video to the Drive. For best results, make sure the image and video are cropped so they contain only the face. Use ffmpeg to crop the video if you need to. Then all you need to do is run this piece of code:

import imageio
from skimage.transform import resize
from IPython.display import HTML
# make_animation and load_checkpoints live in the repository's demo.py
from demo import load_checkpoints, make_animation

# Load the pretrained generator and keypoint detector (paths assume the
# checkpoint was copied to your Drive as described in the notebook)
generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
                                          checkpoint_path='/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar')

source_image = imageio.imread('/content/gdrive/My Drive/first-order-motion-model/source_image.png')
driving_video = imageio.mimread('driving_video.mp4', memtest=False)

# Resize image and video frames to 256x256 and drop any alpha channel
source_image = resize(source_image, (256, 256))[..., :3]
driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]

predictions = make_animation(source_image, driving_video, generator, kp_detector,
                             relative=True, adapt_movement_scale=True)

# display() is the small helper defined in the notebook that stitches the
# source, driving and generated frames into a single clip
HTML(display(source_image, driving_video, predictions).to_html5_video())

Here is my experiment with an image of Nikola Tesla and a video of myself:

Conclusion

We are living in a weird age in a weird world. It is easier than ever to create and distribute fake videos and news. It is getting harder and harder to tell what is true and what is not. It seems that nowadays we cannot trust our own senses anymore. Even though fake-video detectors are being built, it is only a matter of time before the gap closes and even the best detectors cannot tell whether a video is real. So, in the end, one piece of advice – be skeptical. Take every piece of information you get with a bit of suspicion, because things might not be quite as they seem.

Thank you for reading!

Source: Create Deepfakes in 5 Minutes with First Order Model Method

Software Development Environments Move to the Cloud

If you’re a newly hired software engineer, setting up your development environment can be tedious. If you’re lucky, your company will have a documented, step-by-step process to follow. But this still doesn’t guarantee you’ll be up and running in no time. When you’re tasked with updating your environment, you’ll go through the same time-consuming process. With different platforms, tools, versions, and dependencies to grapple with, you’ll likely encounter bumps along the way.

Austin-based startup Coder aims to ease this process by bringing development environments to the cloud. “We grew up in a time where [Microsoft] Word documents changed to Google Docs. We were curious why this wasn’t happening for software engineers,” says John A. Entwistle, who founded Coder along with Ammar Bandukwala and Kyle Carberry in 2017. “We thought that if you could move the development environment to the cloud, there would be all sorts of cool workflow benefits.”

With Coder, software engineers access a preconfigured development environment on a browser using any device, instead of launching an integrated development environment installed on their computers. This convenience allows developers to learn a new code base more quickly and start writing code right away.

[…]

Yet cloud-based platforms have their limitations, the most crucial of which is they require reliable Internet service. “We have support for intermittent connections, so if you lose connection for a few seconds, you don’t lose everything. But you do need access to the Internet,” says Entwistle. There’s also the task of setting up and configuring your team’s development environment before getting started on Coder, but once that’s done, you can share your predefined environment with the team.

To ensure security, all source code and related development activities are hosted on a company’s infrastructure—Coder doesn’t host any data. Organizations can deploy Coder on their private servers or on cloud computing platforms such as Amazon Web Services or Google Cloud Platform. This option could be advantageous for banks, defense organizations, and other companies handling sensitive data. In fact, one of Coder’s customers is the U.S. Air Force, and the startup closed a US $30 million Series B funding round last month (bringing its total funding to $43 million), with In-Q-Tel, a venture capital firm with ties to the U.S. Central Intelligence Agency, as one of its backers.

Source: Software Development Environments Move to the Cloud – IEEE Spectrum

Lockdown-Ignoring Sweden Now Has Nordic Europe’s Highest Per-Capita Death Rate and only 7.3% antibodies

Sweden’s death rate per million (376) “is far in advance of Norway’s (44), Denmark’s (96) and Finland’s (55) — countries with similar welfare systems and demographics, but which imposed strict lockdowns…” reports the Guardian, “raising concerns that the country’s light-touch approach to the coronavirus may not be helping it build up broad immunity.”

“According to the scientific online publication Ourworldindata.com, Covid-19 deaths in Sweden were the highest in Europe per capita in a rolling seven-day average between 12 and 19 May. The country’s 6.25 deaths per million inhabitants a day was just above the UK’s 5.75.”

Slashdot reader AleRunner writes: Immunity levels in Sweden, which were expected to reach 33% by the start of May, have been measured at only 7.3%, suggesting that Sweden’s lighter lockdown may continue indefinitely whilst other countries begin to revive their economies. Writing about the new Swedish antibody results in the Guardian, Jon Henley goes on to report that other European countries like Finland are already considering blocking travel from Sweden, which may increase Sweden’s long-term isolation.

We have discussed before whether Sweden, which locked down earlier than most but with fewer restrictions, could be a model for other countries.

As it is, the country now looks more like a warning to the rest of the world.

The Guardian concludes that the Swedish government’s decision to avoid a strict lockdown “is thought unlikely to spare the Swedish economy. Although retail and entertainment spending has not collapsed quite as dramatically as elsewhere, analysts say the country will probably not reap any long-term economic benefit.”

Source: Lockdown-Ignoring Sweden Now Has Europe’s Highest Per-Capita Death Rate – Slashdot

A drastic reduction in hardware overhead for quantum computing with new error correcting techniques

A scientist at the University of Sydney has achieved what one quantum industry insider has described as “something that many researchers thought was impossible”.

Dr. Benjamin Brown from the School of Physics has developed a type of error-correcting code for quantum computers that will free up more hardware to do useful calculations. It also provides an approach that will allow companies like Google and IBM to design better quantum microchips.

He did this by applying already-known code that operates in three dimensions to a two-dimensional framework.

“The trick is to use time as the third dimension. I’m using two physical dimensions and adding in time as the third dimension,” Dr. Brown said. “This opens up possibilities we didn’t have before.”

His research is published today in Science Advances.

“It’s a bit like knitting,” he said. “Each row is like a one-dimensional line. You knit row after row of wool and, over time, this produces a two-dimensional panel of material.”

Fault-tolerant quantum computers

Reducing errors in quantum computing is one of the biggest challenges facing scientists before they can build machines large enough to solve useful problems.

“Because quantum information is so fragile, it produces a lot of errors,” said Dr. Brown, a research fellow at the University of Sydney Nano Institute.

Completely eradicating these errors is impossible, so the goal is to develop a “fault-tolerant” architecture where useful processing operations far outweigh error-correcting operations.

“Your mobile phone or laptop will perform billions of operations over many years before a single error triggers a blank screen or some other malfunction. Current quantum operations are lucky to have fewer than one error for every 20 operations—and that means millions of errors an hour,” said Dr. Brown who also holds a position with the ARC Centre of Excellence for Engineered Quantum Systems.

“That’s a lot of dropped stitches.”

Most of the building blocks in today’s experimental quantum computers—quantum bits or qubits—are taken up by the “overhead” of error correction.

“My approach to suppressing errors is to use a code that operates across the surface of the architecture in two dimensions. The effect of this is to free up a lot of the hardware from error correction and allow it to get on with the useful stuff,” Dr. Brown said.

Dr. Naomi Nickerson is Director of Quantum Architecture at PsiQuantum in Palo Alto, California, and unconnected to the research. She said: “This result establishes a new option for performing fault-tolerant gates, which has the potential to greatly reduce overhead and bring practical quantum computing closer.”

Source: A stitch in time: How a quantum physicist invented new code from old tricks

More information: Science Advances (2020). DOI: 10.1126/sciadv.eaay4929 , advances.sciencemag.org/content/6/21/eaay4929

Breathing Habits Are Related To Physical and Mental Health

Breathing is a missing pillar of health, and our attention to it is long overdue. Most of us misunderstand breathing. We see it as passive, something that we just do. Breathe, live; stop breathing, die. But breathing is not that simple and binary. How we breathe matters, too. Inside the breath you just took, there are more molecules of air than there are grains of sand on all the world’s beaches. We each inhale and exhale some 30 pounds of these molecules every day — far more than we eat or drink. The way that we take in that air and expel it is as important as what we eat, how much we exercise and the genes we’ve inherited. This idea may sound nuts, I realize. It certainly sounded that way to me when I first heard it several years ago while interviewing neurologists, rhinologists and pulmonologists at Stanford, Harvard and other institutions. What they’d found is that breathing habits were directly related to physical and mental health.

Today, doctors who study breathing say that the vast majority of Americans do it inadequately. […] But it’s not all bad news. Unlike problems with other parts of the body, such as the liver or kidneys, we can improve the airways in our too-small mouths and reverse the entropy in our lungs at any age. We can do this by breathing properly. […] [T]he first step in healthy breathing: extending breaths to make them a little deeper, a little longer. Try it. For the next several minutes, inhale gently through your nose to a count of about five and then exhale, again through your nose, at the same rate or a little more slowly if you can. This works out to about six breaths a minute. When we breathe like this we can better protect the lungs from irritation and infection while boosting circulation to the brain and body. Stress on the heart relaxes; the respiratory and nervous systems enter a state of coherence where everything functions at peak efficiency. Just a few minutes of inhaling and exhaling at this pace can drop blood pressure by 10, even 15 points. […] [T]he second step in healthy breathing: Breathe through your nose. Nasal breathing not only helps with snoring and some mild cases of sleep apnea, it also can allow us to absorb around 18% more oxygen than breathing through our mouths. It reduces the risk of dental cavities and respiratory problems and likely boosts sexual performance. The list goes on.

Source: Breathing Habits Are Related To Physical and Mental Health – Slashdot

Linux not Windows: Why Munich is shifting back from Microsoft to open source – again

In a notable U-turn for the city, newly elected politicians in Munich have decided that its administration needs to use open-source software, instead of proprietary products like Microsoft Office.

“Where it is technologically and financially possible, the city will put emphasis on open standards and free open-source licensed software,” a new coalition agreement negotiated between the recently elected Green party and the Social Democrats says.

The agreement was finalized Sunday and the parties will be in power until 2026. “We will adhere to the principle of ‘public money, public code’. That means that as long as there is no confidential or personal data involved, the source code of the city’s software will also be made public,” the agreement states.

The decision is being hailed as a victory by advocates of free software, who see this as a better option economically, politically, and in terms of administrative transparency.

However, the decision by the new coalition administration in Germany’s third largest and one of its wealthiest cities is just the latest twist in a saga that began over 15 years ago in 2003, spurred by Microsoft’s plans to end support for Windows NT 4.0.

Because the city needed to find a replacement for aging Microsoft Windows workstations, Munich eventually began the move away from proprietary software at the end of 2006.

At the time, the migration was seen as an ambitious, pioneering project for open software in Europe. It involved open-standard formats, vendor-neutral software and the creation of a unique desktop infrastructure based on Linux code named ‘LiMux’ – a combination of Linux and Munich.

By 2013, 80% of desktops in the city’s administration were meant to be running LiMux software. In reality, the council continued to run the two systems – Microsoft and LiMux – side by side for several years to deal with compatibility issues.

As a result of a change in the city’s government, a controversial decision was made in 2017 to leave LiMux and move back to Microsoft by 2020. At the time, critics of the decision blamed the mayor and deputy mayor and cast a suspicious eye on the US software giant’s decision to move its headquarters to Munich.

In interviews, a former Munich mayor, under whose administration the LiMux program began, has been candid about the lengths Microsoft went to in order to retain its contract with the city.

The migration back to Microsoft and to other proprietary software makers like Oracle and SAP, costing an estimated €86.1m ($93.1m), is still in progress today.

“We’re very happy that they’re taking on the points in the ‘Public Money, Public Code’ campaign we started two and a half years ago,” Alex Sander, EU public policy manager at the Berlin-based Free Software Foundation Europe, tells ZDNet. But it’s also important to note that this is just a statement in a coalition agreement outlining future plans, he says.

“Nothing will change from one day to the next, and we wouldn’t expect it to,” Sander continued, noting that the city would also be waiting for ongoing software contracts to expire. “But the next time there is a new contract, we believe it should involve free software.”

Any such step-by-step transition can be expected to take years. But it is also possible that Munich will be able to move faster than most because they are not starting from zero, Sander noted. It can be assumed that some LiMux software is still in use and that some of the staff there would have used it before.

[…]

Source: Linux not Windows: Why Munich is shifting back from Microsoft to open source – again | ZDNet

Libraries Have Never Needed Permission To Lend Books, And The Move To Change That Is A Big Problem

There are a variety of opinions concerning the Internet Archive’s National Emergency Library in response to the pandemic. I’ve made it clear in multiple posts why I believe the freakout from some publishers and authors is misguided, and that the details of the program are very different than those crying about it have led you to believe. If you don’t trust my analysis and want to whine about how I’m biased, I’d at least suggest reading a fairly balanced review of the issues by the Congressional Research Service.

However, Kyle Courtney, the Copyright Advisor for Harvard University, has a truly masterful post highlighting not just why the NEL makes sense, but just how problematic it is that many — including the US Copyright Office — seem to want to move to a world of permission and licensing for culture that has never required such things in the past.

Licensing culture is out of control. This has never been clearer than during this time when hundreds of millions of books and media that were purchased by libraries, archives, and other cultural institutions have become inaccessible due to COVID-19 closures or, worse, are closed off further by restrictive licensing.

What’s really set Courtney off is that the Copyright Office has come out, in response to the NEL, to suggest that the solution to any such concerns raised by books being locked up by the pandemic must be more licensing:

The ultimate example of this licensing culture gone wild is captured in a recent U.S. Copyright Office letter. Note that this letter is not a legally binding document. It is the opinion of an office under the control of the Library of Congress, that is tasked among other missions, with advising Congress when they ask copyright questions, as in this case.

Senator Tom Udall asked the Copyright Office to give its legal analysis of the NEL and similar library efforts, and it did so… badly.

The Office responded with a letter revealing their recommendation was not going to be the guidance document to “help libraries, authors, and online outlets,” but, ultimately, called for more licensing. It also continued a common misunderstanding of an important case, Capitol Records, LLC v. ReDigi Inc., 910 F. 3d 649 (2d Cir 2018).

We’ve written about the Redigi case a few times, but as Courtney details, the anti-internet, pro-extreme copyright folks have embraced it to mean much more than it actually means (we’ll get back that shortly). Courtney points out that the Copyright Office seems to view everything through a single lens: “licensing” (i.e., permission). So while the letter applauds more licensing, that’s really just a celebration of greater permission when none is necessary. And through that lens the Copyright Office seems to think that the NEL isn’t really necessary because publishers have been choosing to make some of their books more widely available (via still restrictive licensing). But, as Courtney explains, libraries aren’t supposed to need permission:

Here’s the problem though: these vendors and publishers are not libraries. The law does not treat them the same. Vendors must ask permission, they must license, this is their business model. Libraries are special creatures of copyright. Libraries have a legally authorized mandate granted by Congress to complete their mission to provide access to materials. They put many of these in copyright exemptions for libraries in the Copyright Act itself.

The Copyright Office missed this critical difference completely when it said digital, temporary, or emergency libraries should “seek permission from authors or publishers prior” to the use. I think this is flat-out wrong. And I have heard this in multiple settings over the last few months: somehow it has crept into our dialog that libraries should have always sought a license to lend books, even digital books, exactly like the vendors and publishers who sought permission first. Again, this is fundamentally wrong.

Let me make this clear: Libraries do not need a license to loan books. What libraries do (give access to their acquired collections of acquired books) is not illegal. And libraries generally do not need to license or contract before sharing these legally acquired works, digital or not. Additionally, libraries, and their users, can make (and do make) many uses of these works under the law including interlibrary loan, reserves, preservation, fair use, and more!

[…]

Source: Libraries Have Never Needed Permission To Lend Books, And The Move To Change That Is A Big Problem | Techdirt

Expedia Group CEO Peter Kern: ‘Google’s a problem for everyone who sells something online’ – yup, monopolies are bad

Expedia Group’s new CEO isn’t mincing words about one of the company’s biggest challenges: Google’s dual role as a rival in online travel, and a key source of customers through search traffic and paid advertising.

“I think Google’s a problem — it’s a problem for everyone who sells something online, and we all have to struggle with that,” Peter Kern said during an appearance on CNBC on Friday morning, following his first earnings report as the CEO of the Seattle-based online travel giant.

His comments come amid reports that U.S. antitrust regulators are preparing a case against the search giant, focusing on its dominance of digital ads.

This Google conundrum is a recurring topic for Expedia Group, but Kern appears to be taking a different approach than his predecessor Mark Okerstrom did before he was ousted from the role last fall. Appearing on CNBC this morning, Kern says Expedia needs to learn to rely less on performance marketing, a form of advertising in which the cost is based on a specific outcome such as a click or sales lead.

“We just haven’t been as good on some of the basic blocking and tackling things that allow you to rely less on Google. And so we’ve used Google and performance marketing as our primary lever of whether we could grow at a certain rate or not, but we haven’t been great at merchandising, we haven’t been great at understanding the customer. We never had data across all our brands to understand if a customer had been at another of our brands and moved to a different one. We often competed in performance marketing auctions, our own brands against themselves. So we have a lot of our own work to do and to my eye, that means we have a lot of upside that is fully not reliant on Google or performance marketing.”

Expedia also addressed the Google issue in its quarterly regulatory filing: “In addition to the growth of online travel agencies, we see increased interest in the online travel industry from search engine companies such as Google, evidenced by continued product enhancements, including new trip planning features for users and the integration of its various travel products into the Google Travel offering, as well as further prioritizing its own products in search results.”

Source: Expedia Group CEO Peter Kern: ‘Google’s a problem for everyone who sells something online’ – GeekWire

Here is my May 2019 video about why the tech giants need breaking up

It wasn’t just a few credit cards: Entire travel itineraries were stolen by hackers, Easyjet now tells victims

Victims of the Easyjet hack are now being told their entire travel itineraries were accessed by hackers who helped themselves to nine million people’s personal details stored by the budget airline.

As reported earlier this week, the data was stolen from the airline between October 2019 and January this year. Easyjet kept quiet about the hack until mid-May, though around 2,200 people whose credit card details were stolen during the cyber-raid were told of this in early April, months after the attack.

Today, emails from the company began arriving in customers’ inboxes. One seen by The Register read:

Our investigation found that your name, email address, and travel details were accessed for the easyJet flights or easyJet holidays you booked between 17th October 2019 and 4th March 2020. Your passport and credit card details were not accessed, however information including where you were travelling from and to, your departure date, booking reference number, the booking date and the value of the booking were accessed.

We are very sorry this has happened.

It also warned victims to be on their guard against phishing attacks by miscreants using the stolen records, especially if any “unsolicited communications” arrived appearing to be from Easyjet or its package holidays arm.

Perhaps to avoid spam filters triggered by too many links, the message mentioned, but did not link to, a blog post from the Information Commissioner’s Office titled, “Stay one step ahead of the scammers,” as well as one from the National Cyber Security Centre, published last year, headed: “Phishing attacks: dealing with suspicious emails and messages.”

There was no mention in the message to customers of compensation being paid as a result of the hack. Neither, when El Reg asked earlier this week, did Easyjet address the question of compo or credit monitoring services.

Source: It wasn’t just a few credit cards: Entire travel itineraries were stolen by hackers, Easyjet now tells victims • The Register

‘Taste Display’ Brings Fake Flavors to Your Tongue

A researcher from Meiji University in Japan has invented what’s being described as a taste display that can artificially recreate any flavor by triggering the five different tastes on a user’s tongue.

Years ago it was thought that the tongue had different regions for tasting sweet, sour, salty, and bitter flavors, where higher concentrations of taste buds tuned to specific flavors were found. We now know that the distribution is more evenly spread out across the tongue, and that a fifth flavor, umami, plays a big part in our enjoyment of food. Our better understanding of how the tongue works is crucial to a new prototype device that its creator, Homei Miyashita, calls the Norimaki Synthesizer.

It was inspired by how easily our eyes can be tricked into seeing something that technically doesn’t exist. The screen you’re looking at uses microscopic pixels made up of red, green, and blue elements that combine in varying intensities to create full-color images. Miyashita wondered if a similar approach could be used to trick the tongue, which is why their Norimaki Synthesizer is also referred to as a taste display.

There have been many attempts to artificially simulate tastes on the tongue with and without the presence of food, but they tend to focus on a specific taste, or enhancing a single flavor, such as boosting how salty something tastes without actually having to add more salt. The Norimaki Synthesizer takes a more aggressive approach through the use of five gels that trigger the five different tastes when they make contact with the human tongue.

The color-coded gels, made from agar formed in the shape of long tubes, use glycine to create the taste of sweet, citric acid for acidic, sodium chloride for salty, magnesium chloride for bitter, and glutamic sodium for savory umami. When the device is pressed against the tongue, the user experiences all five tastes at the same time, but specific flavors are created by mixing those tastes in specific amounts and intensities, like the RGB pixels on a screen. To accomplish this, the prototype is wrapped in copper foil so that when it’s held in hand and touched to the surface of the tongue, it forms an electrical circuit through the human body, facilitating a technique known as electrophoresis.

Electrophoresis is a process that moves molecules in a gel when an electrical current is applied, allowing them to be sorted by size based on the size of pores in the gel. But here the process simply causes the ingredients in the agar tubes to move away from the end touching the tongue, which reduces the tongue’s ability to taste them. It’s a subtractive process that selectively removes tastes to create a specific flavor profile. In testing, the Norimaki Synthesizer has allowed users to experience the flavor of everything from gummy candy to sushi without having to place a single item of food in their mouths.
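
To make the RGB analogy a little more concrete, here’s a minimal Python sketch of that subtractive mixing idea. It’s purely illustrative – the channel values and the example profile are invented for this post, not taken from Miyashita’s prototype – and it simply assumes each gel channel starts at full strength, with the applied current deciding how much of that taste to pull away from the tongue.

```python
# Toy model of the Norimaki Synthesizer's subtractive mixing idea.
# Illustrative only: the numbers are made up, not measured from the device.
# Assumption: each of the five agar channels starts at full strength (1.0),
# and driving a channel with current pulls its tastant away from the tongue,
# weakening that taste. So the "drive" is whatever must be suppressed.

CHANNELS = ("sweet", "sour", "salty", "bitter", "umami")

def drive_levels(target: dict) -> dict:
    """Map a desired flavour profile (taste -> intensity in [0, 1]) to
    per-channel suppression levels, RGB-pixel style but subtractive."""
    return {taste: round(1.0 - target.get(taste, 0.0), 2) for taste in CHANNELS}

# Hypothetical profile loosely resembling gummy candy: mostly sweet, a touch sour.
print(drive_levels({"sweet": 0.9, "sour": 0.3}))
# {'sweet': 0.1, 'sour': 0.7, 'salty': 1.0, 'bitter': 1.0, 'umami': 1.0}
```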

In its current form the prototype is a bit bulky, but it could be easily miniaturized to a device as compact as the vapes everyone is already carrying around and regularly using. But instead of simulating the experience and flavors of smoking, it could recreate the satisfying feeling of eating a piece of chocolate, or drinking a milkshake, without having to ingest a single calorie.

Source: ‘Taste Display’ Brings Fake Flavors to Your Tongue

NASA launches guide to Lunar etiquette now that private operators will share the Moon with governments after US power grab

NASA has laid out a new set of principles that it hopes will inform how states and private companies will interact on the Moon.

The new guidelines, called the Artemis Accords, seek “to create a safe and transparent environment which facilitates exploration, science, and commercial activities for the benefit of humanity”.

The purpose of the Accords appears to be establishing a rough agreement on how space agencies and private companies conduct themselves in space without having to make a formal treaty, which can take decades to come into effect.

“With numerous countries and private sector players conducting missions and operations in cislunar space, it’s critical to establish a common set of principles to govern the civil exploration and use of outer space,” the space agency said.

Some of what the US space agency proposes in the Accords is already covered in previously established frameworks. For example, the Outer Space Treaty of 1967 mandates that space be used for peaceful purposes and prohibits testing and placing weapons of mass destruction on the Moon and other celestial bodies.

The new agreement reiterates this: “International cooperation on Artemis is intended not only to bolster space exploration but to enhance peaceful relationships between nations. Therefore, at the core of the Artemis Accords is the requirement that all activities will be conducted for peaceful purposes, per the tenets of the Outer Space Treaty.”

But although the Outer Space Treaty says nations cannot claim or own property in space, it does not directly address newer space activities such as lunar and asteroid mining. Many states see the Moon as a key strategic asset in outer space, and several companies and agencies, including NASA, have proposed mining rocket fuel from planets and asteroids.

The Accords therefore clarify that space agencies can extract and use resources they find in space. “The ability to utilise resources on the Moon, Mars, and asteroids will be critical to support safe and sustainable space exploration and development,” the guidelines read.

The principle is consistent with an executive order that President Trump signed in April, signalling that the US would pursue a policy to “encourage international support for the public and private recovery and use of resources in outer space.”

The Accords also seek to establish so-called safety zones that would surround future moon bases and prevent “harmful interference” from rival countries or companies operating in close proximity. How the size of these safety zones will be determined was not explained.

Agencies that sign the agreement will be required to publicly share their scientific data and be transparent about their operations, “to ensure that the entire world can benefit from the Artemis journey of exploration and discovery.” They’ll also be required to manage their own orbital rubbish, such as end-of-life spacecraft. Historic sites, such as the Apollo landing sites, would also be protected under the agreement.

But not everybody is happy about the new provisions. Dmitry Rogozin, the head of Russia’s space agency, has criticised Washington for excluding Russia from early discussions about the Accords. “The principle of invasion is the same, whether it be the Moon or Iraq,” he tweeted.

China, which is pursuing its own space program, told Reuters it was willing to cooperate with all parties on lunar exploration “to make a greater contribution in building a community with [a] shared future for mankind”.

The Artemis program aims to put the first woman and the next man on the Moon by 2024. NASA is collaborating with several space agencies and private companies on the effort, including Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin.

Source: NASA launches guide to Lunar etiquette now that private operators will share the Moon with governments • The Register

New Spectra attack breaks the separation between Wi-Fi and Bluetooth

Called Spectra, this attack works against “combo chips,” specialized chips that handle multiple types of radio wave-based wireless communications, such as Wi-Fi, Bluetooth, LTE, and others.

“Spectra, a new vulnerability class, relies on the fact that transmissions happen in the same spectrum, and wireless chips need to arbitrate the channel access,” the research team said today in a short abstract detailing an upcoming Black Hat talk.

More particularly, the Spectra attack takes advantage of the coexistence mechanisms that chipset vendors include with their devices. Combo chips use these mechanisms to switch between wireless technologies at a rapid pace.

Researchers say that while these coexistence mechanisms increase performance, they also provide the opportunity to carry out side-channel attacks and allow an attacker to infer details from other wireless technologies the combo chip supports.

[…]

“We specifically analyze Broadcom and Cypress combo chips, which are in hundreds of millions of devices, such as all iPhones, MacBooks, and the Samsung Galaxy S series,” the two said.

[…]

“In general, denial-of-service on spectrum access is possible. The associated packet meta information allows information disclosure, such as extracting Bluetooth keyboard press timings within the Wi-Fi D11 core,” Classen and Gringoli say.

“Moreover, we identify a shared RAM region, which allows code execution via Bluetooth in Wi-Fi. This makes Bluetooth remote code execution attacks equivalent to Wi-Fi remote code execution, thus, tremendously increasing the attack surface.”
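
To get a feel for how a coexistence side channel like this can leak information, here’s a toy Python simulation. It is emphatically not the researchers’ exploit and doesn’t touch any real Broadcom or Cypress firmware; it just assumes, hypothetically, that Wi-Fi channel-access latency spikes whenever the Bluetooth side of a combo chip transmits, and shows how an observer on the Wi-Fi side could recover Bluetooth keystroke timings from those spikes alone.

```python
# Toy illustration only: a simulated coexistence side channel, not the
# researchers' actual attack against combo-chip firmware.
# Assumption: whenever the Bluetooth side transmits (e.g. a keystroke
# report), the Wi-Fi side briefly loses channel access, so an observer
# measuring Wi-Fi access latency sees spikes aligned with keystrokes.

import random

def simulate_wifi_latency(keystroke_times, duration_ms=2000, step_ms=10):
    """Return (timestamp, latency) samples; latency spikes near BT keystrokes."""
    samples = []
    for t in range(0, duration_ms, step_ms):
        base = random.uniform(0.2, 0.5)               # normal Wi-Fi access delay (ms)
        if any(abs(t - k) < step_ms for k in keystroke_times):
            base += random.uniform(2.0, 3.0)           # arbitration stall during BT TX
        samples.append((t, base))
    return samples

def infer_keystrokes(samples, threshold_ms=1.5):
    """Recover keystroke timings purely from Wi-Fi-side latency spikes."""
    return [t for t, latency in samples if latency > threshold_ms]

random.seed(1)
true_keystrokes = [300, 740, 1210, 1650]               # hypothetical BT keyboard events
trace = simulate_wifi_latency(true_keystrokes)
print(infer_keystrokes(trace))                         # timings close to the real ones
```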

Source: New Spectra attack breaks the separation between Wi-Fi and Bluetooth | ZDNet

Nextdoor Building Relationships With Law Enforcement whilst racially profiling

Community platform Nextdoor is courting police across the country, creating concerns among civil rights and privacy advocates who worry about possible conflicts of interest, over-reporting of crime, and the platform’s record of racial profiling, per a Thursday report by CityLab.

That effort included an all-expenses-paid meeting in San Francisco with members of Nextdoor’s Public Agencies Advisory Council, which includes community engagement staffers from eight police departments and mayor’s offices, according to CityLab. Other outreach has included enlisting current and former law enforcement officers to promote the app, as well as partnerships with local authorities that enable them to post geo-targeted messages to neighborhoods and receive unofficial reports of suspicious activity through the app. According to CityLab, attendees of the meeting in San Francisco had to sign nondisclosure agreements that could shield information on the partnerships from the public.

[…]

Nextdoor has “crime and safety” functions that allow locals to post unverified information about suspicious activity and suspected crimes, acting as a sort of loosely organized neighborhood self-surveillance system for users. That raises the possibility Nextdoor is facilitating racial profiling and over-policing, especially given its efforts to build relationships with authorities and its booming user base (reportedly past 10 million). During the ongoing coronavirus pandemic, Nextdoor has seen skyrocketing user engagement—an 80 percent increase, founder Prakash Janakiraman told Vanity Fair earlier this month.

“There are compelling reasons for transparency around the activities of public employees in general, but the need for transparency is at its height when it comes to law enforcement agencies,” ACLU Speech, Privacy, and Technology Project staff attorney Nathan Freed Wessler told CityLab. “It would be quite troubling to learn that police officers were investigating and arresting people using data from private companies with which they have signed an NDA.”

Nextdoor and its fellow security and safety apps, including Amazon’s Ring doorbell camera platform and the crime-reporting app Citizen, are also implicitly raising fears of widespread crime at a time when national statistics show crime rates have plummeted across the country, Secure Justice executive director and chair of Oakland’s Privacy Advisory Commission Brian Hofer told CityLab. Nextdoor marketing materials, for example, assert that Nextdoor played a role in crime reduction in Sacramento.

Source: Report: Nextdoor Building Relationships With Law Enforcement

Researchers Control Monkeys’ Decisions With Bursts of Ultrasonic Waves

New research published today in Science Advances suggests pulses of ultrasonic waves can be used to partially control decision-making in rhesus macaque monkeys. Specifically, the ultrasound treatments were shown to influence their decision to look either left or right at a target presented on a screen, despite prior training to prefer one target over the other.

The new study, co-authored by neuroscientist Jan Kubanek from the University of Utah, highlights the potential use of this non-invasive technique for treating certain disorders in humans, like addictions, without the need for surgery or medication. The procedure is also completely painless.

Scientists had previously shown that ultrasound can stimulate neurons in the brains of mice, including tightly packed neurons deep in the brain. By modulating neuronal activity in mice, researchers could trigger various muscle movements across their bodies. That said, other research has been less conclusive about whether high-frequency sound waves can trigger similar neuromodulatory effects in larger animals.

The new research suggests they can, at least in a pair of macaque monkeys.

Source: Researchers Control Monkeys’ Decisions With Bursts of Ultrasonic Waves

UK takes a step closer to domestic launches as Skyrora fires up Skylark-L

Blighty is preparing for take-off as Edinburgh-based rocket-botherer Skyrora test-fired its Skylark-L rocket from a location in the heart of the Scottish Highlands.

Those hoping to send a satellite to orbit from UK soil might have a while to wait, however. The Skylark-L is only capable of flinging a 60kg payload 100km up. The beefier Skyrora XL will be capable of carrying far greater payloads into Low Earth Orbit (LEO) by 2023.

The test, which occurred earlier this month at the Kildermorie Estate in North Scotland, saw the Skylark-L vehicle erected, fuelled and ignited. The rocket was held down while engineers checked systems were behaving as they should.

The team made much of the fact that it had built a mobile launch complex and tested a rocket within five days.

[… snarky bit …].

Skylark-L on its mobile launch pad – light the blue touchpaper, then stand well back (pic: Skyrora)

A company representative told The Register that the five days also included digging the flame trench visible above.

[… more snarky stuff…]

The endeavour still represents the first complete ground rocket test in the UK since the glory days of the Black Arrow, some 50 years ago.

Prior to the static firing, the 30kN engine had been through three hot fires before integration. It was fuelled by a combination of hydrogen peroxide and kerosene (to be replaced by the company’s own Ecosene, made from plastic waste). The Skylark-L itself was then mounted on a transporter-erector that was fixed to a trailer.

“It is very hard to oversell what we have achieved here,” said operations leader Dr Jack-James Marlow, before trying his hardest to do so: “The whole team has pulled through again to deliver another UK first. We have successfully static tested a fully integrated, sub-orbital Skylark L launch vehicle in flight configuration. This means we performed all actions of a launch but did not release the vehicle.”

While the test was indeed a complete success, and validated both the vehicle and its ground systems, there is still a while to wait before a Skylark-L is launched. The company put that first flight from a British spaceport as being “as early as spring 2021”. CEO Volodymyr Levykin added: “We are now in a full state of readiness for launch.”

Source: UK takes a step closer to domestic launches as Skyrora fires up Skylark-L • The Register

Good luck to them!