Google Brain, Intel, and UC Berkeley train robotic surgery AI with videos

Researchers from Google Brain, Intel AI Lab, and UC Berkeley have created Motion2Vec, an AI model that learns tasks associated with robotic surgery, such as suturing, needle passing, needle insertion, and knot tying, from surgical video. To test the results, the researchers applied the model to a two-armed da Vinci robot, which passed a needle through cloth in a lab.

Motion2Vec is a representation learning algorithm trained with semi-supervised learning. It follows in the tradition of similarly named models like Word2Vec and Grasp2Vec, which represent knowledge in an embedding space. UC Berkeley researchers previously used YouTube videos to train agents to dance, do backflips, and perform a range of acrobatics, and Google has used video to train algorithms to generate realistic video and to predict depth from mannequin challenge videos on YouTube.
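The paper details Motion2Vec's exact architecture, but the general idea of representation learning over an embedding space can be sketched briefly. The snippet below is a hypothetical, minimal PyTorch example of metric learning with a triplet loss, where observations from the same motion segment are pulled together in embedding space and observations from different segments are pushed apart; the network, feature dimensions, and data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: metric learning of motion embeddings with a triplet loss.
# Architecture, dimensions, and data pipeline are illustrative assumptions,
# not the Motion2Vec implementation described in the paper.
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Maps a pre-extracted video-frame feature vector to a unit-norm embedding."""
    def __init__(self, in_dim=2048, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

encoder = MotionEncoder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# anchor/positive: frames from the same motion segment; negative: a different segment.
anchor, positive, negative = (torch.randn(64, 2048) for _ in range(3))  # placeholder features
loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```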

The researchers say their work shows that robots used in surgery can learn new manipulation skills from expert demonstration videos. “Results suggest performance improvement in segmentation over state-of-the-art baselines, while introducing pose imitation on this dataset with 0.94 centimeter error in position per observation respectively,” the paper reads.
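The reported figure is a per-observation position error. As a rough illustration of how such a number can be computed (an assumption for clarity, not the paper's evaluation code), one can average the Euclidean distance between imitated and expert positions at each observation and convert to centimeters:

```python
# Illustrative sketch of a per-observation position error in centimeters.
# The paper's exact evaluation may differ; this is an assumption for illustration.
import numpy as np

def mean_position_error_cm(imitated_xyz, expert_xyz):
    """Mean Euclidean distance per observation, with positions given in meters."""
    dists_m = np.linalg.norm(imitated_xyz - expert_xyz, axis=-1)  # one distance per observation
    return 100.0 * dists_m.mean()  # meters -> centimeters

# Example with synthetic trajectories of 200 observations in 3-D space.
expert = np.random.rand(200, 3)
imitated = expert + 0.005 * np.random.randn(200, 3)  # roughly 0.5 cm of noise
print(f"error: {mean_position_error_cm(imitated, expert):.2f} cm")
```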

Details about Motion2Vec were published last week on the preprint repository arXiv and presented at the IEEE International Conference on Robotics and Automation (ICRA). Videos of just eight human surgeons controlling da Vinci robots, drawn from the JIGSAWS data set, taught the algorithm motion-centric representations of manipulation skills via imitation learning. JIGSAWS, the JHU-ISI Gesture and Skill Assessment Working Set, brings together video from Johns Hopkins University (JHU) and Intuitive Surgical, Inc. (ISI).

“We use a total of 78 demonstrations from the suturing dataset,” the paper reads. “The suturing style, however, is significantly different across each surgeon.”
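With only a handful of demonstrations and noticeable variation across surgeons, a semi-supervised setup propagates the few available gesture annotations to the rest of the observations. A minimal, hypothetical way to do that in a learned embedding space is a nearest-neighbor lookup, shown below; the gesture counts, array shapes, and classifier choice are illustrative assumptions rather than the segmentation network the paper describes.

```python
# Hypothetical sketch: labeling unannotated observations by nearest neighbor
# in a learned embedding space (a simple stand-in for semi-supervised segmentation;
# shapes and gesture counts are illustrative assumptions, not the paper's method).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
labeled_embeddings = rng.normal(size=(500, 32))     # embeddings with known gesture labels
gesture_labels = rng.integers(0, 10, size=500)      # e.g., 10 suturing sub-gestures
unlabeled_embeddings = rng.normal(size=(2000, 32))  # embeddings from unannotated video

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(labeled_embeddings, gesture_labels)
predicted_gestures = knn.predict(unlabeled_embeddings)
print(predicted_gestures[:10])
```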

Other notable work from ICRA, which took place online instead of in person in Paris, includes gait optimization for lower-body exoskeletons and a Stanford lab project that envisions using AI to leverage public transportation to extend delivery routes for hundreds of drones.

Source: http://feedproxy.google.com/~r/venturebeat/SZYF/~3/PFCVeRou_PY/
