Diamond’s all-electric eDA40 completes maiden flight

Diamond Aircraft announced that it has successfully completed the first flight of its eDA40 electric aircraft.

The eDA40 is an all-electric, battery-powered version of the popular DA40, one of the world's best-selling piston light aircraft, flown by flight schools and private owners worldwide.

The eDA40’s maiden flight took place on July 26, 2023, in the skies over the company’s headquarters in Wiener Neustadt, Austria. During the flight, Diamond’s Head of Flight Test, Sören Pedersen, performed several system checks and undertook basic maneuvers.

“The aircraft performed outstandingly well during its maiden flight and not only met but exceeded all our expectations,” said Diamond CEO, Liqun (Frank) Zhang.

The eDA40 is powered by a Safran ENGINeUS electric motor and a battery module made by Electric Power Systems (EPS).

Endurance is expected to be up to 90 minutes, with a charging time from empty to full of around 20 minutes using a DC fast-charging system. Diamond Aircraft claims the eDA40’s operating costs will be 40% lower than those of a traditional piston-engine aircraft of similar size.

The Austrian manufacturer is seeking Part 23 certification from the European Union Aviation Safety Agency (EASA) and the US Federal Aviation Administration (FAA) for the eDA40.

The aircraft will be publicly presented at the AERO Friedrichshafen 2024 air show, which will take place in the southern German city in April 2024.

Source: Diamond’s all-electric eDA40 completes maiden flight – AeroTime

AI Creation and Copyright: Unraveling the debate on originality, ownership

Whether AI systems can create original work sparks intense discussions among philosophers, jurists, and computer scientists and touches on issues of ownership, copyright, and economic competition, writes Stefano Quintarelli.

Stefano Quintarelli is an information technology specialist, a former member of the Italian Parliament and a former member of the European Commission’s High-Level Expert Group on Artificial Intelligence.

Did Jesus Christ own the clothing he wore? In Umberto Eco’s landmark book “The Name of the Rose,” this is the central issue that is hotly debated by senior clergy, leading to internecine warfare. What is at stake is nothing less than the legitimacy of the church to own private property — and for clergy to grow rich in the process — since if Jesus did it then it would be permitted for his faithful servants.

Although it may not affect the future of an entity quite as dominant as the Catholic church, a similarly vexatious question is sparking heated debate today: Namely, can artificial intelligence systems create original work, or are they just parrots who repeat what they are told? Is what they do similar to human intelligence, or are they just echoes of things that have already been created by others?


In this case, the debate is not among senior clergy, but philosophers, jurists and computer scientists (those who specialise in the workings of the human brain seem to be virtually absent from the discussion). Instead of threatening the wealth of the church, the answer to the question of machine intelligence raises issues that affect the ownership and wealth that flows from all human works.

Large language models (LLMs) such as Bard and ChatGPT are built by ingesting huge quantities of written material from the internet and learning to make connections and correlations between the words and phrases in those texts. The question is: When an AI engine produces something, is it generating a new creative work, as a human would, or is it merely generating a derivative work?
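As a toy illustration of what "learning correlations between words" means at its very simplest, consider a bigram model that just counts which word tends to follow which. This is an assumption-laden sketch for intuition only; production LLMs use neural networks over tokens, not raw counts.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for each word, which words follow it across the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the model reads the text",
    "the model learns correlations",
    "the model predicts the next word",
]
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # "model" follows "the" most often here
```

Everything such a model can emit is, by construction, a recombination of its training data, which is precisely why the "derivative work" question in the following paragraphs is contested: far larger models interpolate and generalize in ways a simple counter cannot.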

If the answer is that a machine does not ‘learn’, and therefore only synthesises or parrots existing work rather than creating it, then for legal and copyright purposes its output could be considered a work derived from existing texts, and therefore not its own creative work with all the rights that would entail.

In the early years of the commercial web, there was a similar debate over whether hyperlinks and short excerpts from articles or web pages should be considered derivative works. Those who believed they were, argued that Google should have to pay royalties on those links and excerpts when it included them in its search results.

My position at the time was that links with short excerpts should not be considered a derivative work, but rather a new kind of service that helped bring those works to a different audience, and therefore didn’t compete with the economic interests of the authors of those works or owners of those sites. Not only did the links or excerpts not cause them harm, they did the exact opposite.


This argument and extensions of it formed the basis for the birth of economic giants such as Google and Facebook, which could not have existed if they had to pay a ‘link tax’ for the content they indexed and linked to (although recent laws in countries such as Australia and Canada have changed that to some extent, and have forced Google and Facebook to pay newspapers for linking to their content).

But large language models don’t just produce links or excerpts. The responses they provide don’t lead the user to the original texts or sites, but instead become a substitute for them. The audience is arguably the same, and therefore there is undoubtedly economic competition. A large language model could become the only interface for access to and economic exploitation of that information.

It seems obvious that this will become a political issue in both the US and Europe, although the question of its legality could result in different answers, since the United States has a legal tradition of ‘fair use’, which allows companies such as Google to use work in various ways without having to license it, and without infringing on an owner’s copyright.

In Europe, no such tradition exists (British Commonwealth countries have a similar concept called ‘fair dealing,’ but it is much weaker).

It’s probably not a coincidence that the companies that created these AI engines are reluctant to say what texts or content their models were built on, since transparency could facilitate findings of copyright infringement (a number of prominent authors are currently suing OpenAI, the maker of ChatGPT, because they believe their work was ingested by its large language model without permission).


A problem within the problem is that the players who promote these systems typically enjoy dominant positions in their respective markets, and are therefore not subject to special obligations to open up or become transparent. This is what happened with the ‘right to link’ that led the web giants to become gatekeepers — the freedom to link or excerpt created huge value that caused them to become dominant.

It’s not clear that the solution to these problems is to further restrict copyright so as to limit the creation of new large language models. In thinking about what measures to apply and how to evolve copyright in the age of artificial intelligence, we ought to think about rules that will also help open up downstream markets, not cement the market power that existing gatekeepers already have.

When the creators of AI talk about building systems that are smarter than humans, and defend their models as more than just ‘stochastic parrots’ that repeat whatever they are told, we need to keep in mind that these are more than purely philosophical statements. There are significant economic interests involved, including the future exploitation of the wealth of information produced to date. And that rivals anything except possibly the wealth of the Catholic Church in the 1300s, when Eco’s hero, William of Baskerville, was asking questions about private property.

Source: AI Creation and Copyright: Unraveling the debate on originality, ownership – EURACTIV.com

US-EU AI Code of Conduct: First Step Towards Transatlantic Pillar of Global AI Governance?

The European Union, G7, United States, and United Kingdom have announced initiatives aiming to establish governance regimes and guidelines around the technology’s use.

Amidst these efforts, an announcement made in late May by EU Executive Vice-President Margrethe Vestager at the close of the Fourth Trade and Technology Council (TTC) Ministerial in Sweden revealed an upcoming U.S.-EU “AI Code of Conduct.”

This measure represents a first step in laying the transatlantic foundations for global AI governance.

The AI Code of Conduct was presented as a joint U.S.-EU initiative to produce a draft set of voluntary commitments for businesses to adopt. It aims to bridge the gap between different jurisdictions by developing a set of non-binding international standards for companies developing AI systems ahead of legislation being passed in any country.

The initiative aims to go beyond the EU and U.S. to eventually involve other countries, including Indonesia and India, and ultimately be presented before the G7.

At present, questions remain surrounding the scope of the Code of Conduct and whether it will contain monitoring or enforcement mechanisms.

The AI Code of Conduct – coupled with other TTC deliverables emerging from the U.S.-EU Joint AI Roadmap – signals a path forward for the emergence of a transatlantic pillar of global AI governance.

Importantly, this approach circumvents questions of regulatory alignment and creates room for a broader set of multilateral actors, as well as the private sector.

[…]

Source: US-EU AI Code of Conduct: First Step Towards Transatlantic Pillar of Global AI Governance? – EURACTIV.com

Android phones can now tell you if there’s an AirTag following you

When Google announced at its Google I/O 2023 conference that trackers would be able to tie in to its Bluetooth tracking network of more than 3 billion devices, it also said it would make it easier for people to detect trackers they don’t know about, such as Apple AirTags.

Android users will now begin receiving these “Unknown Tracker Alerts.” Based on the joint specification developed by Google and Apple, and incorporating feedback from tracker makers such as Tile and Chipolo, the alerts currently work only with AirTags, but Google says it will work with other tag manufacturers to expand coverage.

Android’s unknown tracker alerts, illustrated in moving Corporate Memphis style.

For now, if an AirTag you don’t own “is separated from its owner and determined to be traveling with you,” a notification will tell you this and that “the owner of the tracker can see its location.” Tapping the notification brings up a map tracing back to where it was first seen traveling with you. Google notes that this location data “is always encrypted and never shared with Google.”
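The heuristic described above — a tag that is separated from its owner and keeps turning up near you — can be sketched in simplified form. The class, function names, and thresholds below are illustrative assumptions for this sketch, not the actual Android implementation or the joint specification's values.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    tracker_id: str   # rotating Bluetooth identifier of the tag
    near_owner: bool  # whether the tag reported being in range of its owner
    minutes: int      # when it was seen, in minutes since the first scan

def should_alert(sightings, min_duration=30, min_sightings=3):
    """Alert if an unfamiliar tag has been seen repeatedly, away from its
    owner, over at least `min_duration` minutes (illustrative thresholds;
    the real system's values differ and adapt to context)."""
    away = [s for s in sightings if not s.near_owner]
    if len(away) < min_sightings:
        return False
    return away[-1].minutes - away[0].minutes >= min_duration

# A tag seen three times over 35 minutes, never near its owner, triggers an alert.
trail = [Sighting("tag-1", False, t) for t in (0, 15, 35)]
print(should_alert(trail))  # True
```

The duration and sighting-count checks matter because a tag briefly passing you on the street should not fire an alert; only sustained co-travel away from the owner does.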

Finally, Google offers a manual scan feature if you’re suspicious that your Android phone isn’t catching a tracker or want to see what’s nearby. The alerts are rolling out through a Google Play services update to devices on Android 6.0 and above over the coming weeks.

[…]

Source: Android phones can now tell you if there’s an AirTag following you