EP 3 470 006 B1

Automated Segmentation of Three Dimensional Bony Structure Images

A computer-implemented machine learning system comprising at least one processor communicably coupled to at least one nontransitory processor-readable storage medium storing processor-executable instructions or data. The at least one processor: receives segmentation learning data comprising a plurality of batches of labeled anatomical image sets, each image set comprising image data representative of a series of slices of a three-dimensional bony structure, and each image set including at least one label which identifies the region of a particular part of the bony structure depicted in each image of the image set, wherein the label indicates one of a plurality of classes indicating parts of the bone anatomy; trains a segmentation CNN, a fully convolutional neural network model with layer skip connections, to semantically segment at least one part of the bony structure utilizing the received segmentation learning data; and stores the trained segmentation CNN in at least one nontransitory processor-readable storage medium of the system.
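
For illustration only, the following is a minimal PyTorch sketch of the kind of fully convolutional segmentation model with a layer skip connection, and the per-pixel multi-class training step, that the claim describes. The class name, layer sizes, and loss choice are assumptions for the sketch, not the patented architecture.

```python
# Minimal sketch (assumptions throughout): a fully convolutional encoder-decoder
# whose encoder features are concatenated onto the decoder path (a layer skip
# connection), trained to assign one of several anatomy classes to each pixel
# of a slice. Requires PyTorch.
import torch
import torch.nn as nn

class SkipSegNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # 16 decoder channels + 16 skipped encoder channels -> class scores
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)                      # encoder features, kept for the skip
        d = self.up(self.down(e))            # downsample, then upsample back
        return self.head(torch.cat([e, d], dim=1))  # skip connection

model = SkipSegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()              # one anatomy class per pixel

# One training step on a dummy batch of slices (B, 1, H, W) with per-pixel
# labels in {0..3}, standing in for a batch of labeled anatomical image sets.
slices = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 4, (8, 64, 64))
opt.zero_grad()
loss = loss_fn(model(slices), labels)
loss.backward()
opt.step()
```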

US 10,646,285 B2

Graphical User Interface for a Surgical Navigation System and Method for Providing an Augmented Reality Image During Operation

A surgical navigation system includes: a 3D display system with a see-through visor; a tracking system for real-time tracking of the surgeon's head, the see-through visor, the patient anatomy, and the surgical instrument to provide current position and orientation data; a source of an operative plan, patient anatomy data, and a virtual surgical instrument model; a surgical navigation image generator to generate a surgical navigation image comprising a three-dimensional image that simultaneously represents a virtual image of the surgical instrument, corresponding to the instrument's current position and orientation, and a virtual image of the operative plan, in accordance with the current relative position and orientation of the surgeon's head, the see-through visor, the patient anatomy, and the surgical instrument; and the 3D display system configured to show the surgical navigation image at the see-through visor, such that an augmented reality image collocated with the patient anatomy in the surgical field underneath the see-through visor is visible to a viewer looking from above the see-through visor towards the surgical field.

US 2020/0151507 A1

Autonomous Segmentation of Three-Dimensional Nervous System Structures from Medical Images

A method for autonomous segmentation of three-dimensional nervous system structures from raw medical images, the method including: receiving a 3D scan volume with a set of medical scan images of a region of the anatomy; autonomously processing the set of medical scan images to perform segmentation of a bony structure of the anatomy to obtain bony structure segmentation data; autonomously processing a subsection of the 3D scan volume as a 3D region of interest by combining the raw medical scan images and the bony structure segmentation data, wherein the 3D ROI contains a subvolume of the bony structure with a portion of surrounding tissues, including the nervous system structure; autonomously processing the ROI to determine the 3D shape, location, and size of the nervous system structures by means of a pre-trained convolutional neural network (CNN).
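
A schematic NumPy sketch of the claimed pipeline follows: segment the bony structure, crop a 3D region of interest around it (bone plus a margin of surrounding tissue), then pass the combined raw intensities and bone mask to a nerve-structure model. The two `*_cnn` callables are stand-ins for pre-trained networks; all names and the margin value are illustrative assumptions, not the patented code.

```python
import numpy as np

def crop_roi(volume, bone_mask, margin=8):
    """Bounding box of the bony structure, dilated by `margin` voxels."""
    idx = np.argwhere(bone_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[sl], bone_mask[sl]

def segment_nerves(volume, bone_cnn, nerve_cnn):
    bone_mask = bone_cnn(volume)                  # step 1: bony structure
    roi, roi_bone = crop_roi(volume, bone_mask)   # step 2: 3D ROI around it
    # step 3: stack raw intensities with the bone mask as a second channel,
    # so the nerve model sees both, and infer the nervous system structures
    roi_input = np.stack([roi, roi_bone.astype(roi.dtype)])
    return nerve_cnn(roi_input)

# Toy run with dummy "networks" (thresholding stand-ins):
vol = np.random.rand(32, 32, 32).astype(np.float32)
dummy_bone = lambda v: v > 0.95
dummy_nerve = lambda x: x[0] > 0.9
nerve_mask = segment_nerves(vol, dummy_bone, dummy_nerve)
```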

US 2020/0051274 A1

Computer Assisted Identification of Appropriate Anatomical Structure for Medical Device Placement During a Surgical Procedure

A method for computer assisted identification of appropriate anatomical structure for placement of a medical device, comprising: receiving a 3D scan volume comprising a set of medical scan images of a region of an anatomical structure where the medical device is to be placed; automatically processing the set of medical scan images to perform automatic segmentation of the anatomical structure; automatically determining a subsection of the 3D scan volume as a 3D ROI by combining the raw medical scan images and the obtained segmentation data; automatically processing the ROI to determine the preferred 3D position and orientation of the medical device to be placed with respect to the anatomical structure by identifying landmarks within the anatomical structure with a pre-trained prediction neural network; and automatically determining the preferred 3D position and orientation of the medical device to be placed with respect to the 3D scan volume of the anatomical structure.
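
As a hedged sketch of the last two steps: assume a hypothetical pre-trained network returns an entry point and a target point inside the anatomy, and take the device axis as the line between them. The landmark names and the pose convention are assumptions for illustration only.

```python
import numpy as np

def device_pose_from_landmarks(entry, target):
    """Return (position, 3x3 rotation) with the device z-axis along
    entry -> target, expressed in scan-volume coordinates."""
    z = target - entry
    z = z / np.linalg.norm(z)
    # build any orthonormal frame around z (rotation about the axis is free)
    helper = (np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9
              else np.array([0.0, 1.0, 0.0]))
    x = np.cross(helper, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return entry, np.column_stack([x, y, z])

entry = np.array([40.0, 25.0, 10.0])   # e.g. a predicted entry-point landmark
target = np.array([40.0, 55.0, 18.0])  # e.g. a predicted target landmark
pos, rot = device_pose_from_landmarks(entry, target)
```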

US 2019/0201106 A1

Identification and Tracking of a Predefined Object in a Set of Images from a Medical Image Scanner During a Surgical Procedure

A computer-implemented system with at least one processor that reads a set of 2D slices of an intraoperative 3D volume, each of the 2D slices comprising an image of an anatomical structure and of a registration grid containing an array of markers; detects the markers of the registration grid on each of the 2D slices by using a marker detection convolutional neural network (CNN); filters the marker detection results for the 2D slices to remove false positives by processing the whole set of the 2D slices of the intraoperative 3D volume; and determines the 3D location and 3D orientation of the registration grid with respect to the intraoperative 3D volume by finding a homogeneous transformation between the filtered marker detection results for the intraoperative 3D volume and a reference 3D volume of the registration grid.
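
The final step amounts to a rigid point-set registration. Below is a sketch using the standard Kabsch/Procrustes method, under the assumption that the filtered marker detections and the reference marker layout are given in corresponding order; the array names and that correspondence assumption are illustrative, not from the patent.

```python
import numpy as np

def rigid_transform(ref_pts, det_pts):
    """4x4 homogeneous transform mapping reference points onto detections."""
    ref_c, det_c = ref_pts.mean(axis=0), det_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (det_pts - det_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = det_c - R @ ref_c
    return T

# Toy check: rotate/translate a reference marker grid and recover the transform.
ref = np.array([[0., 0, 0], [30, 0, 0], [0, 30, 0], [30, 30, 0], [15, 15, 5]])
angle = np.deg2rad(20)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
det = ref @ Rz.T + np.array([5.0, -2.0, 40.0])
T = rigid_transform(ref, det)   # T[:3,:3] ~ Rz, T[:3,3] ~ [5, -2, 40]
```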

US 2019/0192230 A1

Method for Patient Registration, Calibration, and Real-Time Augmented Reality Image Display During Surgery

Method for registering patient anatomical data in surgical navigation system: placing a registration grid over the patient at a first position, the grid having a plurality of fiducial markers; using a medical scanner, scanning both a patient anatomy of interest and the registration grid to obtain patient anatomical data; providing a pre-attached tracking array having a plurality of fiducial markers pre-attached to the patient at a second position; using a fiducial marker tracker, capturing the 3D position and/or orientation of the pre-attached tracking array and the registration grid; and registering the patient anatomical data with respect to the 3D position and/or orientation of the pre-attached tracking array as a function of the relative position and/or orientation of the registration grid and the pre-attached tracking array.
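
The registration arithmetic can be sketched as a chain of poses: the scan locates the anatomy in the registration grid's frame (the grid is visible in the scan), and the tracker reports both the grid's and the pre-attached array's poses in tracker coordinates, so chaining these expresses the anatomy relative to the array. The 4x4 matrix names and frame conventions below are illustrative assumptions, not from the patent.

```python
import numpy as np

def pose(R=np.eye(3), t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous pose from a rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed inputs:
T_grid_in_tracker = pose(t=(100.0, 0.0, 50.0))   # from the fiducial marker tracker
T_array_in_tracker = pose(t=(80.0, 20.0, 50.0))  # from the fiducial marker tracker
T_anatomy_in_grid = pose(t=(0.0, -30.0, -10.0))  # from the medical scan

# Anatomy expressed in the pre-attached tracking array's frame; tracking the
# array thereafter keeps the anatomy registered even after the grid is removed.
T_anatomy_in_array = (np.linalg.inv(T_array_in_tracker)
                      @ T_grid_in_tracker
                      @ T_anatomy_in_grid)
```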

US 2019/0175285 A1

Graphical User Interface for Use in a Surgical Navigation System with a Robot Arm

A surgical navigation system includes: a tracker for real-time tracking of the position and orientation of a robot arm; a source of patient anatomical data and a robot arm virtual image; a surgical navigation image generator generating a surgical navigation image including the patient anatomy and the robot arm virtual image in accordance with the current position and/or orientation data provided by the tracker; and a 3D display system showing the surgical navigation image.
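
A small sketch of how the image generator might place the virtual arm: apply the tracker's current pose to the virtual model's vertices so the rendered arm follows the real one. The pose, vertex array, and names are illustrative stand-ins.

```python
import numpy as np

def place_virtual_model(vertices, pose_4x4):
    """Transform Nx3 model vertices by a homogeneous pose (rotation + translation)."""
    homog = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homog @ pose_4x4.T)[:, :3]

arm_vertices = np.array([[0.0, 0, 0], [0, 0, 100], [20, 0, 100]])  # toy model
pose = np.eye(4)
pose[:3, 3] = [250.0, 40.0, 0.0]                  # current pose from the tracker
placed = place_virtual_model(arm_vertices, pose)  # ready to render with anatomy
```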

US 2019/0142519 A1

Graphical User Interface for Displaying Automatically Segmented Individual Parts of Anatomy in a Surgical Navigation System

A surgical navigation system includes a source of patient anatomy data, wherein the patient anatomy data comprises a three-dimensional reconstruction of a segmented model comprising at least two sections representing parts of the anatomy. A surgical navigation image generator is configured to generate a surgical navigation image comprising the patient anatomy. A 3D display system is configured to show the surgical navigation image, wherein the display of the patient anatomy is selectively configurable such that at least one section of the anatomy is displayed and at least one other section of the anatomy is not displayed.
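
One plausible shape for the selectively configurable display is a per-section visibility flag consulted when composing the navigation image; the section names and data layout below are assumptions for illustration.

```python
# Each segmented section of the model carries a visibility flag; only sections
# toggled on are composed into the surgical navigation image.
visible = {"vertebral_body": True, "pedicles": True, "spinous_process": False}

def sections_to_render(model_sections, visible):
    """Keep only the anatomy sections the user has toggled on."""
    return [s for s in model_sections if visible.get(s["name"], True)]

model = [{"name": "vertebral_body", "mesh": ...},
         {"name": "pedicles", "mesh": ...},
         {"name": "spinous_process", "mesh": ...}]
to_draw = sections_to_render(model, visible)   # spinous_process is omitted
```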

US 2019/0053851 A1

Surgical Navigation System and Method for Providing an Augmented Reality Image During Operation

A surgical navigation system includes: a 3D display with a see-through mirror that is partially transparent and partially reflective; a tracking system comprising means for real-time tracking of the surgeon's head, the see-through mirror, and the patient anatomy to provide current position and orientation data; a source of patient anatomy data; a surgical navigation image generator for generating a surgical navigation image comprising at least the patient anatomy data, in accordance with the current position and orientation data provided by the tracking system and the current relative position and orientation of the surgeon's head, the see-through mirror, and the patient anatomy; and the 3D display configured to emit the surgical navigation image towards the see-through mirror, such that an augmented reality image collocated with the patient anatomy in the surgical field underneath the see-through mirror is visible to a viewer looking from above the see-through mirror towards the surgical field.
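
A minimal geometric sketch of the collocation condition: a virtual point drawn on the see-through mirror overlays the real anatomical point when it lies on the line of sight from the tracked head position to that point, so intersecting that line with the mirror plane gives the drawing position. Recomputing this from live tracking data keeps the augmented image collocated as the head, mirror, or patient moves. The vector names and numbers are illustrative, not from the patent.

```python
import numpy as np

def point_on_mirror(head, anatomy_pt, mirror_origin, mirror_normal):
    """Intersect the head -> anatomy line of sight with the mirror plane."""
    d = anatomy_pt - head                          # viewing direction
    s = ((mirror_origin - head) @ mirror_normal) / (d @ mirror_normal)
    return head + s * d                            # where to render the point

head = np.array([0.0, 40.0, 60.0])        # tracked surgeon's head
anatomy = np.array([0.0, 0.0, 0.0])       # tracked point on the patient anatomy
mirror_o = np.array([0.0, 20.0, 30.0])    # a point on the see-through mirror
mirror_n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
draw_at = point_on_mirror(head, anatomy, mirror_o, mirror_n)  # -> (0, 20, 30)
```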

This technology has not been cleared or approved by the FDA and is not for sale in the United States. Copyright 2019 Holo Surgical Inc. All Rights Reserved.