Cephalometric Landmark Detection in Lateral X-ray Images 2023


We invite you to participate in the CL-Detection 2023 cephalometric landmark detection challenge, which is held in conjunction with the 2023 MICCAI conference.

Aim:

CL-Detection 2023 aims to provide a comprehensive benchmark for cephalometric landmark detection methods.

Motivation: 

  • Cephalometric analysis is a fundamental examination routinely used in the fields of orthodontics and orthognathic surgery.
  • The key operation during the analysis is marking craniofacial landmarks on lateral cephalograms; these landmarks provide diagnostic information about a patient's craniofacial condition and inform treatment planning decisions.
  • Due to the X-ray imaging quality of the skull and individual anatomical variations, it is difficult to locate the landmarks in lateral cephalograms reliably and with high precision.
  • Reliable landmark annotations often require experienced doctors, and even for seasoned orthodontists, manually identifying these landmarks can be a time-consuming process.

Scope: 

  • Localize the cephalometric landmarks in lateral X-ray images.
  • You will be provided with the most diverse cephalometric landmark detection dataset to date, which extends the existing benchmark datasets with additional landmark annotations and includes 600 X-ray images from 3 medical centers, with 38 craniofacial landmarks annotated for every case.
  • Evaluation measures include: 1) the Mean Radial Error (MRE) between the prediction and the ground truth; and 2) the Success Detection Rate (SDR) within 2.0 mm. Both metrics will be used to compute the ranking (a minimal computation sketch is given after this list).
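
For clarity, a minimal sketch of how these two metrics can be computed is given below. It assumes predicted and ground-truth landmarks are supplied as (x, y) pixel coordinates together with a per-image pixel spacing in millimetres; the function name, array layout, and spacing argument are illustrative assumptions rather than the challenge's official evaluation code.

    import numpy as np

    def mre_and_sdr(pred, gt, spacing_mm, threshold_mm=2.0):
        # pred, gt:   arrays of shape (num_landmarks, 2) holding (x, y) in pixels
        # spacing_mm: physical size of one pixel in millimetres
        pred = np.asarray(pred, dtype=float)
        gt = np.asarray(gt, dtype=float)
        # Euclidean (radial) distance per landmark, converted to millimetres.
        radial_err = np.linalg.norm(pred - gt, axis=1) * spacing_mm
        mre = radial_err.mean()                    # Mean Radial Error in mm
        sdr = (radial_err <= threshold_mm).mean()  # fraction detected within 2.0 mm
        return mre, sdr

In practice, such per-image values would be aggregated over all images in the test set.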

Notice

Every participant must complete ID verification for their Grand Challenge account. According to Grand Challenge policy, unverified users will not be able to submit algorithms.

How to participate:

  1. Submit the User Agreement Form to both email addresses: Prof. Wang (cweiwang@mail.ntust.edu.tw) and Mr. Hikam Muzakky (m11123801@mail.ntust.edu.tw).
  2. Sign up for a Grand Challenge account.
  3. Join the CL-Detection 2023 Challenge. (Please submit the User Agreement Form first.)
  4. Download the datasets from the link in the confirmation email using the provided access credentials.
  5. Submit your results to the online submission system.

If you have any questions, please contact us at cweiwang@mail.ntust.edu.tw, cwwang1979@gmail.com, or m11123801@mail.ntust.edu.tw.

Important Dates:

  • Open for registration: April 1, 2023
  • Training data release: May 1, 2023
  • Validation starts (on grand-challenge platform): June 1, 2023 (00:00 TWT)
  • Deadline for registration form submission: August 1, 2023 (00:00 TWT)
  • Deadline for validation submission: August 1, 2023 (00:00 TWT)
  • Testing starts (on grand-challenge platform): August 1, 2023 (00:00 TWT)
  • Deadline for testing submission: August 16, 2023 (00:00 TWT)
  • Deadline for paper submission* in the LNCS Springer format (4-8 pages): September 15, 2023
  • Challenge workshop: October 8, 2023
*Each participating team that presents its method at the challenge session is allowed two co-authorships.

Algorithm Reference Docker Container:

  • We provide a RetinaNet (Lin et al., 2017) baseline framework for cephalometric landmark detection in the CL-Detection 2023 challenge, trained for 100 epochs. RetinaNet is a one-stage object detection model that uses a focal loss to address class imbalance during training: the focal loss applies a modulating term to the cross-entropy loss in order to focus learning on hard negative examples (a minimal sketch of the focal loss is given after this list). The baseline is implemented with the MMDetection library, an open-source object detection toolbox based on PyTorch and part of the OpenMMLab project. All code related to this first baseline, RetinaNet with a ResNet-101 backbone, can be found on [github].
  • We also provide a U-Net baseline framework. All code related to this second baseline can be found at the following link [github]. The solution is built solely on the PyTorch framework without any additional framework dependencies (e.g., MMDetection).
  • Moreover, we offer a baseline model based on the MMPose framework. It allows seamless switching between different landmark detection modules, providing great flexibility for your experiments. The code for this baseline model is available at the following GitHub link [github].
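
For reference, a minimal sketch of the focal loss idea used by RetinaNet is given below. It assumes sigmoid classification logits and one-hot float targets, and the default alpha and gamma values follow Lin et al. (2017); this is an illustrative sketch, not the baseline's actual MMDetection implementation.

    import torch
    import torch.nn.functional as F

    def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # logits:  raw classification scores, shape (N, num_classes)
        # targets: one-hot ground-truth labels (float) with the same shape
        probs = torch.sigmoid(logits)
        # Per-element binary cross-entropy (no reduction yet).
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        # p_t is the probability the model assigns to the true class.
        p_t = probs * targets + (1.0 - probs) * (1.0 - targets)
        # (1 - p_t)^gamma down-weights easy examples so learning focuses
        # on hard (mostly negative) anchors.
        modulating = (1.0 - p_t) ** gamma
        # alpha_t balances the contribution of positive and negative examples.
        alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
        return (alpha_t * modulating * ce).mean()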

To ensure a fair comparison, the data used to train algorithms is restricted to the data provided by THIS challenge. Pre-trained models from the IEEE ISBI 2015 and 2014 challenges or the PKU cephalogram dataset are also not allowed in the challenge.


Prizes and Publication:

  • The top 3 performing methods will be awarded a certificate and 500 euros.
  • A challenge overview paper will be written with the organizing team's members and submitted to a journal. Top-performing teams will be invited to contribute to this overview paper, which will be submitted to a high-impact journal (MedIA/TMI).
Prof. Wang is also hosting another challenge at MICCAI 2023. If you are seeking more publication opportunities, feel free to check its challenge website (Automated prediction of treatment effectiveness in ovarian cancer using histopathological images).

Organizers: 

Technical Group:
  • Prof. Ching-Wei Wang, Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taiwan.
  • Prof. Bingsheng Huang, Medical AI Lab, Health Science Center, School of Biomedical Engineering, Shenzhen University, China.
  • Mr. Hongyuan Zhang, Medical AI Lab, Health Science Center, School of Biomedical Engineering, Shenzhen University, China.
  • Mr. Hikam Muzakky, Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taiwan.

Medical Group:

  • Prof. Jun Cao, Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, China.
  • Dr. Juan Dai, Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, China.
  • Dr. Xuguang Li, Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, China.


Related Publications and Challenges (IEEE ISBI 2015 and IEEE ISBI 2014 Challenges):

1. Wang C* et al. (2016) A benchmark for comparison of dental radiography analysis algorithms, Medical Image Analysis 31, 63-76 (IF=13.828, 2/113 COMPUTER SCIENCE, INTER. APPLICATIONS)

2. Wang C* et al. (2015) Evaluation and Comparison of Anatomical Landmark Detection Methods for Cephalometric X-Ray Images: A Grand Challenge, IEEE Transactions on Medical Imaging 34(9), 1-11 (IF=11.037, 5/136 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)

3. Lindner C., Wang C* et al. (2016) Fully Automatic System for Accurate Localisation and Analysis of Cephalometric Landmarks in Lateral Cephalograms, Scientific Reports 6: 33581 (IF=4.996, 19/134 MULTIDISCIPLINARY SCIENCES)


Supported by: