The Cambrian Explosion of Medical Robotics


About 500 million years ago, all major types of animals on Earth appeared over a period of only 25 million years or so. This event, known as the Cambrian explosion, occurred because the preconditions for animals to diversify were finally in place. Simple organisms had billions of years to develop before the event. The explosion happened because multicellular structures, circulation systems, sensing, motor control and more were available for the first time.

A similar thing is going on in medical robotics today. The pioneering teleoperated robotic surgery system, Intuitive Surgical’s da Vinci, has been operating for over 20 years. It works with end effectors that look like sticks, and it is used in only a few types of procedures, mostly in urology and gynecology. But now, computing, sensing, motor control, data flow architecture, and more are finally capable of powering a new generation of medical robots. Operating rooms are transforming into digital surgery platforms. These systems will take on nearly every type of surgical procedure. This robotic explosion will change surgery more than anything in the last hundred years.

For instance, imagine you are an orthopedic surgeon specializing in knee replacements. The preconditions are there for robotic assistance. Prosthetic knees, uncommon only a few years ago, are commonplace today. In fact, knee replacement has become so common that efficiency is critical; surgeons perform almost 1 million of these operations in the U.S. each year. Robotic technology, including sensing, control, data flow, and intelligence, is ready for application. Specialized robotic systems will soon change both the process and the economics of these procedures.

Your manual process looks like this:

  • Prepare
    • Get X-ray or CT images of the knee
    • Design how the new joint will work. This is complex, balancing positioning, alignment, laxity (play) and more. The biomechanics have to be right. You can use computer-aided design (CAD) tools to plan exactly how it will work.
    • Choose a prosthesis: metal and plastic joint components designed to work together as a “new knee”. Most today are standard parts that come in various sizes. There’s one for the end of each bone in the joint.
    • Plan where you will cut and install the new joint.  
  • Operate
    • While the patient is anesthetized, you need to cut away the damaged bone, shape the remaining bone to fit the prosthetic, and install the prosthetic.
    • But it’s not that simple. It’s hard to tell exactly where you are cutting and how to make it fit the prosthetic. Sometimes, you can use a real-time imaging system to track your cuts, but mostly this is a trial-and-error process where you cut, temporarily install the device, test motion and more. You may have guides, alignment jigs, and test instruments. But there’s a lot of experience required.  
    • Despite all that, you can’t cut perfectly, so in the end you cement the prosthetic to close the gaps.  
    • Then you restore the soft tissues (ligaments, tendons, etc.)
  • Recovery
    • Take more images to ensure it works well. Put the patient into physical therapy.

Of all these steps, making accurate cuts is by far the hardest and most important. Setting up the table and patient to ensure cut alignment is the most expensive part. It can take 2-3 hours to correctly position everything required for the surgery, and much of that has to be done with the patient already on the table. Together, setup, cut planning, and evaluation drive much of the cost and risk. Still, in the end, humans aren’t good at precision angle cuts. Even trained surgeons can’t make perfect cuts. And adapting the variety of human bones to standard parts means most patients don’t get an optimal result.

Robotics can help. Robots can follow extremely precise cut paths generated from the imaging and CAD plans. The resulting precision angles and ultra-smooth surfaces are so clean that many operations don’t require cement; the part is press-fit and the bone grows into it like a natural joint. With robotic accuracy and flexibility, custom joints become realistic to install. Instead of adapting the patient to a standard part, a custom part can be 3D printed from the imaging information. Manually adapting procedures to install these unique parts is impractical for humans. It’s entirely possible for a robot.

Of course, it’s not quite that simple. Importantly, the robot has to know exactly where the bone is, a process called “registration.” That’s done with an imaging system that tracks a device that looks like an antenna with reflective targets on it. This is attached to (screwed into) the bone, and then another probe with targets on it is used to locate bone points exactly. After that, the vision system knows exactly where the bone is while the robot is cutting.
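Under the hood, registration boils down to finding the rigid transform that maps planning-image coordinates onto the tracker’s coordinates. Below is a minimal, hypothetical sketch of that math using the classic SVD (Kabsch) approach; the function name, landmark values, and use of NumPy are illustrative assumptions, not details of any particular surgical system.

```python
import numpy as np

def register_rigid(image_pts, tracked_pts):
    """Return rotation R and translation t mapping image_pts onto tracked_pts.

    Both arrays are N x 3; row i of image_pts is the same anatomical landmark
    as row i of tracked_pts (e.g., a point touched with the tracked probe).
    """
    # Center both point sets on their centroids
    mu_img = image_pts.mean(axis=0)
    mu_trk = tracked_pts.mean(axis=0)
    A = image_pts - mu_img
    B = tracked_pts - mu_trk

    # SVD of the covariance gives the least-squares rotation (Kabsch algorithm)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_trk - R @ mu_img
    return R, t

# Hypothetical landmarks: four points from the CT plan and the same four
# points located in the operating room with the tracked probe.
ct_landmarks = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                         [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
probe_landmarks = np.array([[5.0, 2.0, 1.0], [5.0, 12.0, 1.0],
                            [-5.0, 2.0, 1.0], [5.0, 2.0, 11.0]])
R, t = register_rigid(ct_landmarks, probe_landmarks)
# Any planned cut point p in image coordinates maps to R @ p + t in tracker space.
```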

Then, to make it all work, the right data has to get to the right place at the right time. The vision system communicates where the bone is, even if it’s moving. The surgeon controls when and how fast to cut. The robot combines the two to execute a perfect cut along the pre-planned angle. This is all automated, so it can even be done remotely, with the surgeon thousands of miles away. It’s a bold new intelligent world.
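To make that data flow concrete, here is a toy Python sketch of the pattern: the tracker publishes bone poses, the surgeon’s console publishes cut commands, and the robot controller subscribes to both. The topic names, data types, and the tiny in-process bus are illustrative stand-ins; a real system would use a data-centric middleware such as DDS rather than this toy bus.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class BonePose:              # published continuously by the tracking camera
    timestamp_ms: int
    position_mm: tuple       # (x, y, z) of the tracked bone reference
    orientation_quat: tuple  # (w, x, y, z)

@dataclass
class CutCommand:            # published by the surgeon's console
    timestamp_ms: int
    feed_rate_mm_s: float
    enabled: bool

class ToyBus:
    """Toy topic-based publish/subscribe bus; a stand-in for real middleware."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, sample):
        for handler in self._subscribers[topic]:
            handler(sample)

bus = ToyBus()
latest = {}

# Robot controller: combine the latest bone pose with the surgeon's command.
bus.subscribe("BonePose", lambda pose: latest.update(pose=pose))

def on_cut_command(cmd):
    pose = latest.get("pose")
    if cmd.enabled and pose is not None:
        print(f"Cutting at {cmd.feed_rate_mm_s} mm/s relative to bone at {pose.position_mm}")

bus.subscribe("CutCommand", on_cut_command)

# The tracker and console only publish data; neither addresses the robot directly.
bus.publish("BonePose", BonePose(0, (120.0, 45.0, 310.0), (1.0, 0.0, 0.0, 0.0)))
bus.publish("CutCommand", CutCommand(5, feed_rate_mm_s=2.0, enabled=True))
```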

The knee-replacement robot essentially copies how surgeons work, but with better accuracy and precision. Increasing evidence shows that the results of robotic joint replacement are better: studies report improved implant positioning, alignment, and ligament balance.

General-purpose robots can extend human capabilities in other ways. Robotic technology can enhance a surgeon’s environment with high-resolution 3D monitors, four robotic arms, and automatically changeable tools. Connecting those through intelligent computing lets the surgeon measure anatomical structures to millimeter precision, generate “tags” for landmarks and training, and even monitor for potential accidental injuries. The robot + surgeon system improves operating-room capability, communication, and outcomes.

Some robots can perform operations that a human cannot do. For instance, consider the problem of a lung biopsy. The current way to do a biopsy is to take a long needle, poke it through the chest into the lung, and suck out a sample for analysis. But this has all sorts of problems. The needle goes through the skin into the lung, so bacteria can get in, risking infection. You have to puncture the lung, risking a pneumothorax (collapsed lung). It’s very hard to know where the needle actually is, so you need to run continuous X-rays (fluoroscopy) and “poke around” a bit to find the right spot. It can work, but it’s messy, slow, risky, and expensive.

Now imagine instead that you have a robot that looks like a long, thin tube about the thickness of a USB cable. It’s steerable with a simple controller. You can see through a fiber-optic video feed. When you reach the suspected tumor, a quick suction tube takes a sample. This tube robot makes no punctures, risks no infection or lung damage, and makes the procedure safer, faster, and more accurate. No human can match that.

These are only a few of the hundreds of new-generation medical robots. The variety is incredible. A laparoscopic robot folds up to go through a hole the size of a dime, then unfolds into a praying-mantis-like device with eyes and hands. Another looks like two very dexterous snakes with gripper heads. Jointed arms, tiny grippers on sticks, human-like dual-armed systems, under-the-skin grippers, and more will soon populate operating rooms. Fletcher Spaght Inc. (FSI), an industry analyst firm, tracks over 200 robotic-assisted surgery products in various stages of development. Each is designed to perform precision motions in ways that humans cannot match. Medical robots will soon change nearly every procedure in the operating room.

Equally important, many of these systems will soon leverage AI. Applications span the periods before (pre-op), during (intra-op), and after (post-op) surgery. Pre-op, AI can help surgeons train for the specific challenge, model the patient, and develop custom processes and implants. Closer to the surgery, AI can speed setup, making sure the patient and equipment are properly placed and calibrated. It can also help align images such as CT scans to the actual patient, ensuring accurate operation.

During the procedure, AI can increase accuracy, enforce safety, and improve efficiency. For instance, smart robots can verify measurements, guide the robotic motion, and flag potentially unsafe steps. Smart tools can automate some of the routine-but-slow steps like staple placement and suturing. Increasingly, the robot can also use AI to coordinate the operation with other devices in the operating theater, like patient monitoring and anesthesiology.

After the operation, AI will improve both clinical operations and patient outcomes. Automated systems can more closely monitor patients, analyze operational effectiveness, and even predict outcomes for early intervention.

In summary, the surgical systems of tomorrow will teem with hundreds of types of robots. They won’t be just mechanical motion replicators; they will use AI to assist care teams in performing more accurate, faster, less invasive operations. They will fill every niche of surgery, adapting exactly to each procedure’s challenges and needs. Every operation can be customized to fit each patient’s unique condition and needs. The environment is incredibly fertile: imaging, compute power, sensing, intelligence, software architecture, and mechanical design are all at inflection points, forming a truly unique junction.

These new applications are enabled by the availability of data, or more accurately, by data flow. An approach to real-world software architecture called “data centricity” delivers the right data to the right place at the right time. Data centricity makes it easy to feed sensor data to intelligent algorithms and from there to the motors that perform the actions. Data is the key to intelligence in all types of AI. But data flow is the key to that intelligence in the real world of sensors, motors, robots, instruments, and people. Thus, data flow is the key to the future of patient care. 

The moment is right for medical robotics to expand in many directions. Advanced computing, intelligence, imaging, mechanical design, advanced motors, sensing, and software architecture are all combining into fertile ground for innovation. These systems are smarter and better. And they are almost all the first of their kind.

The Robotic Cambrian Explosion is here.

This blog is courtesy of RTI, the largest provider of infrastructure software for smart-world systems. We provide the distributed data flow capabilities that connect sensors, motors, people, and intelligent algorithms together. It’s an inspiring time to be a key driver of the smart world.

 

 

About the author:

Stan Schneider is CEO of Real-Time Innovations (RTI), the largest software framework provider for smart machine and real-world systems.

Stan also serves on the advisory board for IoT Solutions World Congress and the boards of the Teleoperations Consortium and the Autonomous Vehicle Computing Consortium (AVCC). Stan holds a PhD in EE/CS from Stanford University.