Frequently Asked Questions: CL-Detection 2023


Q: How long does it take for a participation request to be approved after sending the signed registration form to Prof. Wang (cweiwang@mail.ntust.edu.tw; cwwang1979@gmail.com) and Mr. Hikam Muzakky (m11123801@mail.ntust.edu.tw)?

A: The request will be processed within 1-2 working days if the registration form is filled out correctly.


Q: How many people can form a team?

A: We do not limit the number of team members. However, please be reminded that we will invite the top 10 teams to contribute to the challenge paper, which will be submitted to MIA or IEEE TMI, and two members from each team will be listed as co-authors. Each team will need to choose its two members for the challenge paper.


Q: My team has multiple members. Do they all need to join the grand-challenge platform?

A: No. We only require one representative to join the grand-challenge platform. The team leader can share the competition data with the other team members. Please note that all team members must sign their names on the registration form.


Q: Can we use other datasets or pre-trained models to develop the landmark detection algorithms?

A: To ensure a fair comparison, the data used to train algorithms is restricted to the data provided by THIS challenge. Pre-trained models from the IEEE ISBI 2015 and 2014 challenges or the PKU cephalogram dataset are also not allowed in this challenge.

In addition, please be reminded that you must describe and explain your training strategy and your landmark detection algorithm by sending a first draft of your paper submission* in the LNCS Springer format (4-8 pages) along with your algorithm container submission for the testing phase.

Q: 1. Are there any limitations on the computational resources (e.g., GPU memory) for the training phase? 2. If I submit a Docker image, my understanding is that it will be used for inference in a cloud environment. What is the maximum GPU capacity I can use?

A: 1. No, there are no limitations on the computational resources for the training phase. 2. According to the support officer of the grand-challenge platform: "We run the algorithms with either 16 GB or 32 GB memory, depending on what the user specifies in their algorithm settings. The AWS instance types we use are then either ml.g4dn.xlarge or ml.g4dn.2xlarge."