Paralyzed man moves robotic arm with his thoughts

[…] He was able to grasp, move and drop objects just by imagining himself performing the actions.

The device, known as a brain-computer interface (BCI), worked for a record 7 months without needing to be adjusted. Until now, such devices have only worked for a day or two.

The BCI relies on an AI model that can adjust to the small changes that take place in the brain as a person repeats a movement — or in this case, an imagined movement — and learns to do it in a more refined way.

[…]

The study, which was funded by the National Institutes of Health, appears March 6 in Cell.

The key was the discovery of how activity shifts in the brain day to day as a study participant repeatedly imagined making specific movements. Once the AI was programmed to account for those shifts, it worked for months at a time.

Location, location, location

Ganguly studied how patterns of brain activity in animals represent specific movements and saw that these representations changed from day to day as the animal learned. He suspected the same thing was happening in humans, and that this was why their BCIs so quickly lost the ability to recognize these patterns.

[…]

The participant’s brain could still produce the signals for a movement when he imagined himself doing it. The BCI recorded the brain’s representations of these movements through the sensors on his brain.

Ganguly’s team found that the shape of representations in the brain stayed the same, but their locations shifted slightly from day to day.
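The study’s actual decoder is an AI model trained on the participant’s recordings, but the core idea — a movement’s neural “shape” stays stable while its location drifts slightly each day — can be illustrated with a deliberately simple toy sketch. Everything here (the data, the brute-force alignment, the function names) is invented for illustration and is not the study’s method:

```python
# Toy sketch: a stored template for an imagined movement keeps its shape
# from day to day, but shows up slightly shifted. A decoder that first
# realigns today's pattern to the template can keep working despite drift.

def best_shift(template, pattern, max_shift=3):
    """Return the circular shift that best aligns `pattern` to `template`."""
    n = len(template)
    def score(shift):
        return sum(template[i] * pattern[(i + shift) % n] for i in range(n))
    return max(range(-max_shift, max_shift + 1), key=score)

def realign(template, pattern, max_shift=3):
    """Shift `pattern` back onto `template` before decoding."""
    s = best_shift(template, pattern, max_shift)
    n = len(pattern)
    return [pattern[(i + s) % n] for i in range(n)]

template = [0, 1, 3, 1, 0, 0, 0, 0]   # stored "grasp" representation
today    = [0, 0, 0, 1, 3, 1, 0, 0]   # same shape, drifted by two positions

print(realign(template, today))        # prints [0, 1, 3, 1, 0, 0, 0, 0]
```

The realigned pattern matches the stored template again, which is the intuition behind a decoder that “accounts for those shifts” rather than being retrained from scratch each day.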

From virtual to reality

Ganguly then asked the participant to imagine himself making simple movements with his fingers, hands or thumbs over the course of two weeks, while the sensors recorded his brain activity to train the AI.

Then, the participant tried to control a robotic arm and hand. But the movements still weren’t very precise.

So, Ganguly had the participant practice on a virtual robot arm that gave him feedback on the accuracy of his visualizations. Eventually, he got the virtual arm to do what he wanted it to do.

Once the participant began practicing with the real robot arm, it only took a few practice sessions for him to transfer his skills to the real world.

He could make the robotic arm pick up blocks, turn them and move them to new locations. He was even able to open a cabinet, take out a cup and hold it up to a water dispenser.

[…]

Source: Paralyzed man moves robotic arm with his thoughts | ScienceDaily

Still can’t access your Outlook mailbox? You aren’t alone

Problems with Outlook.com are continuing, with users reporting being unable to access their emails or authenticate themselves.

Part of the issue appears to be related to the initial wobble over the weekend. Some of the users affected by that outage were locked out of their accounts after repeated login failures, and Microsoft’s status center for Microsoft 365 continues to report that users might be unable to access their email using the native mail app on iOS devices.

As of today, issues persist, and Microsoft has promised another update by 2300 UTC. On the plus side, the current status has changed from “We’re analyzing available data and attempting to determine the underlying source for users’ problems” to “Our analysis of available data is ongoing as we attempt to determine the underlying source of users’ problems.”

[…]

Source: Still can’t access your Outlook mailbox? You aren’t alone • The Register

Mistral adds a new API that turns any PDF document into an AI-ready Markdown file with pictures

Unlike most OCR APIs, Mistral OCR is a multimodal API, meaning that it can detect when there are illustrations and photos intertwined with blocks of text. The OCR API creates bounding boxes around these graphical elements and includes them in the output.

Mistral OCR also doesn’t just output a big wall of text; the output is formatted in Markdown, a formatting syntax that developers use to add links, headers, and other formatting elements to a plain text file.

LLMs rely heavily on Markdown for their training datasets. Similarly, when you use an AI assistant, such as Mistral’s Le Chat or OpenAI’s ChatGPT, they often generate Markdown to create bullet lists, add links, or put some elements in bold.

[…]

Mistral OCR is available on Mistral’s own API platform or through its cloud partners (AWS, Azure, Google Cloud Vertex, etc.). And for companies working with classified or sensitive data, Mistral offers on-premise deployment.

[…]

Companies and developers will most likely pair Mistral OCR with a RAG (Retrieval-Augmented Generation) system to feed multimodal documents into an LLM. And there are many potential use cases. For instance, we could envisage law firms using it to plough swiftly through huge volumes of documents.

RAG is a technique that’s used to retrieve data and use it as context with a generative AI model.
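A minimal sketch of that idea, with everything invented for illustration (the word-overlap scoring, the sample clauses, the function names — none of this is Mistral’s API): retrieve the chunks most relevant to a query, then paste them into the prompt as context for a generative model.

```python
# Toy RAG sketch: rank text chunks (e.g. Markdown from an OCR step) by
# word overlap with the query, then build a prompt that supplies the best
# matches as context for an LLM. Real systems use embeddings, not overlap.

def retrieve(chunks, query, k=2):
    """Return the k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(chunks, query):
    """Assemble retrieved context plus the question into one prompt."""
    context = "\n".join(retrieve(chunks, query))
    return f"Context:\n{context}\n\nQuestion: {query}"

chunks = [
    "Clause 4.2: the lessee must give 60 days notice before termination.",
    "Appendix B lists the schedule of payments.",
    "Clause 7.1: disputes are settled by arbitration in Geneva.",
]
print(build_prompt(chunks, "How much notice must the lessee give?"))
```

The prompt then goes to the generative model, which answers from the supplied context instead of from memory alone — which is why OCR output that preserves document structure makes the retrieved chunks more useful.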

Source: Mistral adds a new API that turns any PDF document into an AI-ready Markdown file | TechCrunch