Submissions

Change Log

  • 04/02/2018 - Provided links for Algorithm Submission and Dry Run
  • 04/02/2018 - Updates to FAQ page
  • 03/27/2018 - Updated DevKit (fixed typo in ground truth files of the evaluation code)
  • 03/24/2018 - Uploaded evaluation code for the 2nd Challenge to Github (previously it was only available through the Docker container for the Dry Run)
  • 03/24/2018 - Updates to cropping code provided with the Dataset
  • 03/22/2018 - Updates to allowed algorithm inputs and expected outputs: FAQ page
  • 03/21/2018 - Updates to Development Kit inputs
  • 03/19/2018 - New FAQ page
  • 03/17/2018 - Updates to Development Kit readme
  • 03/17/2018 - Pushed update to dockerhub's evaluation container (fixed bug in Inception preprocessing step)
  • 03/16/2018 - Pushed update to cropping module in dataset folder (different image types now accepted as input frames)
  • 03/13/2018 - Provided cropping code to extract annotated objects using the VATIC annotations format
  • 03/13/2018 - Fixed discrepancies in the annotation labels (we provide a spreadsheet that provides the equivalencies of the old labels to their updated versions)

The Development Kit consists of a Docker file containing the basic structure for the algorithm submission, as well as instructions on how to pull and run a supplemental quantitative classification module for the second challenge. The images, annotations, and lists specifying the training/validation sets for the challenge are provided separately and can be obtained via Google Drive.

Each team must submit one algorithm for each challenge they wish to participate in. Participants who have investigated several algorithms may submit up to 3 algorithms per challenge. All submissions for a given challenge must be packaged within a single Docker container uploaded to Docker Hub. The container must include all dependencies and code required to perform the model's operation, and must execute the contained model(s) upon run.
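As an illustration only (the exact layout is up to each team, and every path, base image, and script name below is hypothetical), a self-running submission container could be described by a Dockerfile along these lines:

```dockerfile
# Hypothetical sketch of a submission container; not the required layout.
FROM python:3.6-slim

# Install the model's dependencies inside the container.
COPY requirements.txt /opt/submission/requirements.txt
RUN pip install --no-cache-dir -r /opt/submission/requirements.txt

# Copy the model code and weights into the image.
COPY run_models.py /opt/submission/run_models.py
COPY weights/ /opt/submission/weights/

# Execute the model(s) when the container runs; /input and /output
# are placeholders for folders mounted at run time.
ENTRYPOINT ["python", "/opt/submission/run_models.py", \
            "--input", "/input", "--output", "/output"]
```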

The input images will be provided to the container at run time through Docker's mounting option, as will the output folders in which the model(s) save their results. Each model must be run on all images contained in the input folder and must save the resulting images to its respective output folder, without any name changes or missing images.
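The input/output contract above can be sketched as a minimal processing loop. This is only an illustration: `process_all` and the `enhance` callback (a stand-in for a team's actual model) are hypothetical names, not part of the challenge kit.

```python
from pathlib import Path

def process_all(input_dir, output_dir, enhance=lambda data: data):
    """Apply `enhance` (a placeholder for the team's model) to every
    image in input_dir, saving each result to output_dir under the
    exact same filename: no renames, no missing images."""
    in_dir, out_dir = Path(input_dir), Path(output_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for image_file in sorted(p for p in in_dir.iterdir() if p.is_file()):
        result = enhance(image_file.read_bytes())
        # Preserve the original filename in the output folder.
        (out_dir / image_file.name).write_bytes(result)
```

Inside the container, the mounted folders (e.g. `/input` and `/output`) would simply be passed as `input_dir` and `output_dir`.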

Requirements

Hardware

The proposed algorithms should be able to run on systems with:

  • Up to and including an NVIDIA Titan Xp (12 GB)
  • Up to and including 12 CPU cores
  • Up to and including 32 GB of memory

If you have any questions, please feel free to email ug2challenge@gmail.com.

About the Challenge

Support for this challenge workshop is provided under IARPA contract #2016-16070500002. This workshop is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA). The views and conclusions contained herein are those of the organizers and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.