MICCAI 2017 Endoscopic Vision Challenge

In September 2017, Intuitive Surgical Inc. ran four challenges at the Medical Image Computing and Computer Assisted Intervention Conference 2017 (MICCAI 2017), held in Quebec City, Canada. My labmate Yun-Hsuan Su and I participated in the Robotic Instrument Segmentation Sub-Challenge, part of the larger Endoscopic Vision Challenge.

We placed 8th among all participating teams. We approached the problem with traditional computer vision, combining features such as color (Oppo1, Oppo2, Hue, and Saturation), GrabCut, blur level, edges, and depth. Drawing on prior knowledge about where instruments appear in the frame, we added a border constraint on contours to eliminate noise. Our segmentation achieved an average Dice coefficient of 0.716 on the test dataset. A sample output is shown below:

[Sample segmentation output: Data1, Image 056]

Download Presentation from MICCAI2017
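Two of the ingredients above can be sketched in a few lines of NumPy. This is a minimal illustration, not our actual pipeline code: the opponent-color formulas shown are the standard transform (assumed to match the Oppo1/Oppo2 features we used), and the Dice coefficient matches the challenge's evaluation metric.

```python
import numpy as np

def opponent_channels(rgb):
    """Opponent color channels O1, O2 from an RGB image.

    Standard opponent-color transform (assumed equivalent to the
    Oppo1/Oppo2 features); returns two float arrays of shape (H, W).
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    o1 = (r - g) / np.sqrt(2.0)            # red-green opponency
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)  # yellow-blue opponency
    return o1, o2

def dice_coefficient(pred, gt):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

For example, a predicted mask `[1, 1, 0, 0]` against ground truth `[1, 0, 0, 0]` has one overlapping pixel and three foreground pixels in total, giving a Dice score of 2/3.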

Building on this exploration, we are now working on combining Deep Learning with traditional computer vision to improve performance on a limited dataset.

