Is the data of this competition exactly the same as in 2019?
Yes, it is. Posted by: bbearce @ July 21, 2020, 4:52 p.m.
Thanks. Here are some questions:
1. Is the test set the same as in 2019?
2. Can we use open data?
3. Is there a running-time limit for the final Docker submission?
It is OK for participants to use additional datasets as long as they are willing to share them with the research community. Also, a limit of 1.5-2 days is reasonable and will be the default setting for evaluations. Posted by: bbearce @ July 22, 2020, 1:55 p.m.
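For readers wondering how such a wall-clock limit is typically enforced, here is a minimal sketch using GNU coreutils `timeout`, which kills a command after a given budget and exits with status 124. The 48h budget and the `sleep` stand-in for the team's Docker container are illustrative assumptions, not the challenge's actual evaluation setup.

```shell
# Hypothetical sketch: enforce a wall-clock budget on an evaluation run.
# In a real setup the budget would be ~36-48h and the command would be
# something like `docker run --rm <team-image>`; here a 2s limit and a
# long `sleep` stand in so the sketch runs quickly.
timeout 2s sleep 10
if [ "$?" -eq 124 ]; then
    echo "run exceeded the time limit"
fi
```

When the command finishes within the budget, `timeout` passes through its exit status instead of 124, so the same check distinguishes a timed-out run from a normal failure.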
Also, yes, the same test set. Posted by: bbearce @ July 22, 2020, 1:56 p.m.
OK, thanks. Posted by: tabulo @ July 23, 2020, 1:17 a.m.
Isn't this unfair? Last year, the participating teams could get the test set for targeted optimization. How do we avoid this? Posted by: tabulo @ July 23, 2020, 2:28 a.m.
I also think it is unfair. Last year, the participating teams could get the test set for targeted optimization; they could spend extra time optimizing and already knew the results. I think the sponsor should publish the test set now and allow only a single submission at the end. Posted by: Sen @ July 24, 2020, 2:57 a.m.
Otherwise the test set is meaningless: a doctor has enough time to obtain the real labels, and the model can be overfitted to the test set.
The best way to avoid this is to update the test set.
The test dataset was not available to participants last year; only the Docker program was run on the unseen test dataset, and the participants received only their score. Posted by: jiafc @ July 24, 2020, 8:02 a.m.
The test score is displayed only once. Posted by: jiafc @ July 24, 2020, 8:03 a.m.
This is last year's introduction:
Test Data availability & Performance Evaluation (September 3-18). The test data are made available to each participating team that submitted a short paper, for a limited controlled time window (48h) between 3 and 18 September. The participants will analyze the images using their local computing infrastructure and will have to submit their classification results 48h later to the online evaluation portal.
This is the 2020 introduction:
A test dataset of paired radiology-pathology images will be made available to each participating team that submitted a short paper. The participants will analyze the images using their local computing infrastructure and will have to submit their classification results to the online evaluation portal. Each participating team will be allowed to submit their results to the platform only once.
My understanding is that it works like the validation set: first we get the test data and analyze the images on our own computers, and finally the top-ranked teams submit a Docker. Posted by: tabulo @ July 24, 2020, 9:04 a.m.
The difference from the validation set is that we can submit only once. Posted by: tabulo @ July 24, 2020, 9:07 a.m.
In that case, last year the participants were indeed given the test data. Posted by: Sen @ July 24, 2020, 9:13 a.m.
There is going to be an extension generally speaking. We are finalizing the details and will make edits to the website under "Terms and Conditions" and under "Participate/Get Data" for the test phase. We will address the confusion surrounding the test data set as well. Posted by: bbearce @ July 27, 2020, 3:20 p.m.