  • 149 Participants
  • 2269 Submissions
  • $33,500 in Prizes
  • Competition Ends: June 9, 2020, 3:59 p.m.

Overview

KDD 2020 will be held in San Diego, CA, USA from August 23 to 27, 2020. The Automatic Graph Representation Learning challenge (AutoGraph), the first ever AutoML challenge applied to graph-structured data, is the AutoML track challenge of KDD Cup 2020, provided by 4Paradigm, ChaLearn, Stanford and Google.

Machine learning on graph-structured data. Graph-structured data are ubiquitous in the real world: social networks, scholar networks, knowledge graphs, etc. Graph representation learning is a very active research topic; its goal is to learn a low-dimensional representation of each node in the graph, which is then used for downstream tasks such as friend recommendation in a social network, or classifying academic papers into different subjects in a citation network. Traditionally, heuristics were exploited to extract features for each node from the graph, e.g., degree statistics or random-walk-based similarities. In recent years, however, sophisticated models such as graph neural networks (GNNs) have been proposed for graph representation learning, leading to state-of-the-art results on many tasks, such as node classification and link prediction.

Challenges in developing versatile models. Nevertheless, for both traditional heuristic methods and recent GNN-based methods, considerable computational resources and expertise must be invested to achieve satisfying performance on a given task. For example, in DeepWalk and node2vec, two well-known random-walk-based methods, various hyper-parameters, such as the length and number of walks per node and the window size, have to be fine-tuned to obtain good performance. Likewise, when using GNN models such as GraphSAGE or GAT, considerable time goes into choosing the optimal aggregation function in GraphSAGE, or the number of self-attention heads in GAT. This heavy demand for human experts in the fine-tuning process limits the application of existing graph representation models.

AutoGraph Challenge. AutoML/AutoDL (https://autodl.chalearn.org) is a promising approach to lowering the manpower cost of machine learning applications, and has achieved encouraging successes in hyper-parameter tuning, model selection, neural architecture search, and feature engineering. To enable more people and organizations to fully exploit their graph-structured data, we organize the AutoGraph challenge, dedicated to such data.

In this challenge, participants should design a computer program capable of providing solutions to graph representation learning problems autonomously (without any human intervention). Compared to previous AutoML competitions we organized, our new focus is on graph-structured data, where nodes with features and edges (connections among nodes) are available.

To prevail in the proposed challenge, participants should propose automatic solutions that can effectively and efficiently learn a high-quality representation for each node, based on the given features and the neighborhood and structural information underlying the graph. The solutions should be designed to automatically extract and utilize any useful signal in the graph, whether through heuristics or learned models.
 
Here, we list some specific questions that the participants should consider and answer:
  • How to automatically design heuristics to extract features for a node in a graph?
  • How to automatically exploit the neighborhood information in a graph?
  • How to automatically tune an optimal set of hyper-parameters for random-walk-based graph embedding methods?
  • How to automatically choose the aggregation function when using the GNN-based models?
  • How to automatically design an optimal GNN architecture given different datasets?
  • How to automatically and efficiently select appropriate hyper-parameters for different models?
  • How to make the solution more generic, i.e., how to make it applicable for unseen tasks?
  • How to keep the computational and memory cost acceptable?
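Several of these questions boil down to searching a configuration space under a time budget. As a purely illustrative sketch (the search space, scoring function, and budget below are hypothetical and not part of the challenge kit), a budget-aware random search might look like:

```python
import random
import time

def random_search(train_eval, space, time_budget):
    """Sample random configurations until the time budget is spent,
    keeping the best-scoring one."""
    start = time.time()
    best_cfg, best_score = None, float("-inf")
    while time.time() - start < time_budget:
        cfg = {name: random.choice(choices) for name, choices in space.items()}
        score = train_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical search space for a GNN-style model.
space = {
    "hidden_dim": [16, 32, 64],
    "num_layers": [1, 2, 3],
    "learning_rate": [1e-3, 5e-3, 1e-2],
}

# Stand-in objective; a real solution would train a model on the graph
# and return its validation accuracy instead.
def dummy_eval(cfg):
    return cfg["hidden_dim"] / 64 - cfg["num_layers"] * 0.01

best_cfg, best_score = random_search(dummy_eval, space, time_budget=0.1)
print(best_cfg, best_score)
```

A real solution would replace `dummy_eval` with actual model training and would also reserve part of the budget for the final fit on the chosen configuration.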

Tentative Timeline 

  • March 25th, 2020: Beginning of Feedback Phase, release of public datasets. Participants can start submitting codes and obtaining immediate feedback in the leaderboard.
  • May 25th, 2020: End of Feedback Phase
  • May 26th, 2020: Beginning of Check Phase
  • June 1st, 2020: End of Check Phase, Organizer notifying results of Check Phase
  • June 2nd, 2020: Beginning of Final Phase
  • June 4th, 2020: Deadline for re-submitting to Final Phase
  • June 5th, 2020: Deadline for submitting the fact sheets
  • June 7th, 2020: End of Final Phase, beginning of post competition process
  • June 9th, 2020: Announcement of the KDD Cup 2020 Winners
  • August 22nd, 2020: Beginning of KDD 2020

Prizes

1st Prize: 15000 USD

2nd Prize: 10000 USD

3rd Prize: 5000 USD

4th - 10th prize: 500 USD each

 

About

Please contact the organizers if you have any problem concerning this challenge.

 

Advisors

- Wei-Wei Tu, 4Paradigm Inc., China and ChaLearn, USA

- Jure Leskovec, Stanford University, USA

- Hugo Jair Escalante, INAOE, Mexico and ChaLearn, USA

- Isabelle Guyon, Université Paris-Saclay, France, ChaLearn, USA

- Qiang Yang, Hong Kong University of Science and Technology, Hong Kong, China

 

Committee (alphabetical order)

- Xiawei Guo, 4Paradigm Inc., China

- Shouxiang Liu, 4Paradigm Inc., China

- Zhen Xu, 4Paradigm Inc., China

- Rex Ying, Stanford University, USA

- Huan Zhao, 4Paradigm Inc., China

  

Organizing Institutes

4Paradigm, ChaLearn, Stanford, Google

About AutoML 

Previous AutoML Challenges:

- First AutoML Challenge

- AutoML@PAKDD2018

- AutoML@NeurIPS2018

- AutoML@PAKDD2019

- AutoML@KDDCUP2019

- AutoCV@IJCNN2019

- AutoCV2@ECML PKDD2019

- AutoNLP@WAIC2019

- AutoWSL@ACML2019

- AutoDL@NeurIPS2019

- AutoSpeech@ACML2019

- AutoSeries@WSDM2020

 

About 4Paradigm Inc.

Founded in early 2015, 4Paradigm is one of the world’s leading AI technology and service providers for industrial applications. 4Paradigm’s flagship product, the AI Prophet, is an AI development platform that enables enterprises to effortlessly build their own AI applications and thereby significantly increase their operational efficiency. Using the AI Prophet, a company can develop a data-driven “AI Core System”, which can be regarded as a second core system next to the traditional transaction-oriented Core Banking System (IBM mainframe) often found in banks. Beyond this, 4Paradigm has successfully developed more than 100 AI solutions for use in settings such as finance, telecommunications and internet applications, including, but not limited to, smart pricing, real-time anti-fraud systems, precision marketing and personalized recommendation. And while 4Paradigm can completely transform the way an organization uses its data, its scope of services does not stop there. 4Paradigm combines state-of-the-art machine learning technologies with practical experience in a team of experts ranging from scientists to architects. This team has successfully built China’s largest machine learning system and the world’s first commercial deep learning system. With its core team pioneering research on transfer learning, 4Paradigm takes the lead in this area and, as a result, has drawn great attention from tech giants worldwide.

About ChaLearn

ChaLearn is a non-profit organization with vast experience in the organization of academic challenges. ChaLearn is interested in all aspects of challenge organization, including data gathering procedures, evaluation protocols, novel challenge scenarios (e.g., competitions), training for challenge organizers, challenge analytics, result dissemination and, ultimately, advancing the state-of-the-art through challenges.

 

Quick start

The baseline, starting kit and public datasets can be downloaded here:

 

Baseline

This is a challenge with code submission. We provide one baseline above for test purposes.

To make a test submission, download the starting kit and follow the instructions in its readme.md file, then click the blue "Upload a Submission" button in the upper right corner of the page and upload your bundle. You must first click the orange "Feedback Phase" tab if you want to make a submission on all datasets simultaneously and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). To check progress on your submissions, go to the "My Submissions" tab. Your best submission is shown on the leaderboard visible under the "Results" tab.

Starting kit

The starting kit contains everything you need to create your own code submission (just by modifying the file model.py) and to test it on your local computer, with the same handling programs and Docker image as those of the CodaLab platform (though the hardware environment is in general different).

The starting kit contains sample data. Besides that, 5 public datasets are also provided so that you can develop your solutions offline. These 5 public datasets can be downloaded from the link at the beginning.

Note that the CUDA version in this Docker image is 10. If the CUDA version on your own machine is lower than 10, you may be unable to use the GPU inside the container.

Local development and testing:

You can test your code in the exact same environment as the Codalab environment using docker. You are able to run the ingestion program (to produce predictions) and the scoring program (to evaluate your predictions) on toy sample data.

1. If you are new to Docker, install Docker (version > 19) from https://docs.docker.com/get-started/.

2. At the shell, change to the starting-kit directory and run

  docker run --gpus=0 -it --rm -v "$(pwd):/app/autograph" -w /app/autograph nehzux/kddcup2020:v2

3. Now you are in a bash shell inside the Docker container; run the local test program

  python run_local_test.py --dataset_dir=path_to_dataset --code_dir=path_to_model_file

This runs the ingestion and scoring programs simultaneously; the predictions and scoring results are written to the sample_result_submissions and scoring_output directories.

Submission

Interface

The interface is simple and generic: you must supply a Python file model.py, in which a Model class is defined following the API described on the "Evaluation" page.

To make submissions, zip model.py and its dependency files (without the enclosing directory), then use the "Upload a Submission" button. Please note that you must first click the orange "Feedback Phase" tab if you want to make a submission on all datasets simultaneously and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). Note also that the ranking on the public leaderboard is determined by each participant's LAST code submission.

Please note that for this challenge, the "Detailed Results" button on the submission page is not used and provides no information.

Computational limitations

  • A submission on one dataset is limited to the time budget associated with that dataset.
  • Participants are limited to 3 submissions per day per dataset. However, we do not encourage submitting on a single dataset at a time: a submission on all datasets will fail if the daily quota for any one dataset is exhausted.
  • The "Execution Time Used" and "Execution Time Left" fields on the submission page are of no importance and are set to an extremely large number. We only limit the time budget per dataset, as provided in the meta-information.

Running Environment

In the starting kit, we provide a Docker image that simulates the running environment of our challenge platform. Participants can check the Python version and installed Python packages with the following commands:

 python --version

 pip list

For other packages/libraries that are not installed in the Docker image, participants can install them outside of the train_predict method, e.g. using os.system("pip install xxx") at the beginning of model.py. Please note that the time spent installing libraries is not counted against the time budget.
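As an illustration of this pattern, a small guarded install helper could sit at the top of model.py (the helper name is ours; os.system with pip is the mechanism described above). The sketch below checks importability first, and for demonstration requests a standard-library module so it runs anywhere:

```python
import importlib
import os

def ensure_package(pkg):
    """Install pkg via pip if it is not already importable.
    Meant to be called at module import time, outside train_predict,
    so the install does not count against the time budget."""
    try:
        return importlib.import_module(pkg)
    except ImportError:
        os.system(f"pip install {pkg}")
        return importlib.import_module(pkg)

# "json" is in the standard library, so this returns immediately;
# in a real submission you would request a third-party package instead.
mod = ensure_package("json")
```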

On our platform, for each submission, the allocated computational resources are:

  • CPU: 4 Cores
  • GPU: an NVIDIA Tesla P100 (CUDA 10, cuDNN 7.5)
  • Memory: 30 GB
  • Disk: 200 GB

Three graph-related libraries are installed:

 

Challenge Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth below.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Conditions of participation: Participation requires complying with the rules of the challenge. Prize eligibility is restricted by Chinese government export regulations, see the General ChaLearn Contest Rule Terms. The organizers, sponsors, their students, close family members (parents, sibling, spouse or children) and household members, as well as any person having had access to the truth values or to any information about the data or the challenge design giving him (or her) an unfair advantage, are excluded from participation. A disqualified person may submit one or several entries in the challenge and request to have them evaluated, provided that they notify the organizers of their conflict of interest. If a disqualified person submits an entry, this entry will not be part of the final ranking and does not qualify for prizes. The participants should be aware that ChaLearn and the organizers reserve the right to evaluate for scientific purposes any entry made in the challenge, whether or not it qualifies for prizes.
  • Dissemination: The challenge is organized in conjunction with the KDD 2020 conference.
  • Registration: The participants must register on Codalab and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members. Teams or solo participants registering multiple times to gain an advantage in the competition may be disqualified. One participant can only be registered in one team. Note that you can join the challenge until one week before the end of the feedback phase. Real personal identification will be required (as notified by the organizers) at the end of the feedback phase, to avoid duplicate accounts and for award claims.
  • Anonymity: The participants who do not present their results at the challenge session can elect to remain anonymous by using a pseudonym. Their results will be published on the leaderboard under that pseudonym, and their real name will remain confidential. However, the participants must disclose their real identity to the organizers to join check phase and final phase and to claim any prize they might win. See our privacy policy for details.
  • Submission method: The results must be submitted through this CodaLab competition site. The number of submissions per day is 3. Using multiple accounts to increase the number of submissions is NOT permitted. In case of problems, send email to autograph2020@4paradigm.com. The entries must be formatted as specified on the Instructions page.
  • Prizes: The top 10 ranking participants in the Final Phase may qualify for prizes. To compete for prizes, the participants must make a valid submission on the Final Phase website (TBA), and fill out a fact sheet briefly describing their methods before the announcement of the final winners. There is no other publication requirement. The winners will be required to make their code publicly available under an OSI-approved license such as, for instance, Apache 2.0, MIT or BSD-like license, if they accept their prize, within a week of the deadline for submitting the final results. Entries exceeding the time budget will not qualify for prizes. In case of a tie, the prize will go to the participant who submitted his/her entry first. Non winners or entrants who decline their prize retain all their rights on their entries and are not obliged to publicly release their code.
  • Cheating: We forbid people during the development phase to attempt to get a hold of the solution labels on the server (though this may be technically feasible). For the final phase, the evaluation method will make it impossible to cheat in this way. Generally, participants caught cheating will be disqualified.

Dataset

This page describes the datasets used in the AutoGraph challenge. 15 graph datasets are prepared for this competition: 5 public datasets, which can be downloaded so that participants can develop their solutions offline; 5 feedback datasets, used to evaluate the public leaderboard scores of their AutoGraph solutions; and 5 final datasets, on which their solutions will ultimately be evaluated without human intervention.

This challenge focuses on the problem of graph representation learning, where node classification is chosen as the task to evaluate the quality of learned representations. 

Note that you can use more datasets to debug your solutions, e.g. from the Open Graph Benchmark and the SNAP project at Stanford University.

Components

The datasets are collected from real-world business scenarios, and are shuffled and split into training and testing parts. Each dataset contains two node files (training and testing), an edge file, a feature file, two label files (training and testing) and a metadata file.
Please note that the data files are read by our program and passed to the participant's program. For details, please see the Evaluation page.

  • The training node file (train_node_id.txt) and testing node file (test_node_id.txt) list all node indices used for training and testing respectively. The node indices are of type int.

    Example:

    node_index
    0
    1
    2
    3
    4
    5
    6
    7
    8
  • The edge file (edge.tsv) contains a set of triplets. A triplet of the form (src_idx, dst_idx, edge_weight) describes a connection from node src_idx to node dst_idx with edge weight edge_weight. The type of edge_weight is numerical (float or int).

    Example:

    src_idx	dst_idx	edge_weight
    0	62	1
    0	40	1
    0	127	1
    0	178	1
    0	53	1
    0	67	1
    0	189	1
    0	135	1
    0	48	1
  • The feature file (feature.tsv) is in tsv format. Each line has the format (node_index f0 f1 ...), where node_index is the index of a node and f0, f1, ... are its features.

    All features are numerical.

    Example:

    node_index	f0	f1	f2	f3	f4
    0	0.47775876104073356	0.05387578793865644	0.729954200019264	0.6908184238803438	0.9235037015600726
    1	0.34224099072954905	0.6693042243297719	0.08736572053032532	0.07358721227831977	0.27398819586899037
    2	0.8259856025619777	0.4421366756096389	0.9872258141866499	0.4865590790508849	0.12633483872234397
    3	0.11177231902956064	0.40446709473609854	0.2293892960354328	0.4021930454713125	0.40698138834963693
    4	0.34427740190016	0.26622372452918375	0.8042497280547812	0.0022605424347530434	0.8903425653304337
    5	0.08640169107378592	0.43038539444039425	0.6635778390235518	0.9229371884297638	0.8912709075205572
    6	0.6765202023072282	0.9039673560303431	0.986304900152288	0.23661480664770496	0.7140162062880935
    7	0.043651531427249424	0.010090830922163785	0.758404203984433	0.05315076246728134	0.8017402643849966
    8	0.49802375200717	0.6735698429117265	0.04292694482433346	0.3033723691640159	0.43132281219124635
  • The training label file (train_label.tsv) and the testing label file (test_label.tsv) are also in tsv format and contain label information for the training and testing nodes respectively. Each line has the format (node_index class), where node_index is the index of a node and class is its label.

    Example:

    node_index  class
    0	1
    1	3
    2	1
    3	1
    4	3
    5	1
    6	1
    7	3
    8	1
  • The metadata file (config.yml) is in yaml format. It provides meta-information about the dataset, including:

    • schema: DEPRECATED
    • n_class: the number of label classes in the dataset
    • time_budget: the time budget of the dataset

    Example:

    time_budget: 5000
    n_class: 7
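To get a feel for these formats, here is a minimal sketch that parses tiny in-memory stand-ins for edge.tsv, feature.tsv and train_label.tsv with pandas. On the platform this parsing is done for you and your code receives ready-made DataFrames, so this is only for offline exploration:

```python
import io
import pandas as pd

# Tiny in-memory stand-ins for the dataset files described above.
edge_tsv = "src_idx\tdst_idx\tedge_weight\n0\t1\t1\n1\t2\t1\n"
feature_tsv = "node_index\tf0\tf1\n0\t0.1\t0.2\n1\t0.3\t0.4\n2\t0.5\t0.6\n"
label_tsv = "node_index\tclass\n0\t1\n1\t0\n"

edges = pd.read_csv(io.StringIO(edge_tsv), sep="\t")
features = pd.read_csv(io.StringIO(feature_tsv), sep="\t")
labels = pd.read_csv(io.StringIO(label_tsv), sep="\t")

# Join labels onto features; only the labelled (training) nodes survive.
train = features.merge(labels, on="node_index")
print(train)
```

With real files you would pass the file path to pd.read_csv instead of an io.StringIO buffer.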

Evaluation

This challenge has three phases. The participants are provided with 5 public datasets, which can be downloaded so that they can develop their solutions offline. The code is then uploaded to the platform, and participants receive immediate feedback on the performance of their method on another 5 feedback datasets. After the Feedback Phase terminates, a Check Phase follows, in which participants may submit their code only once on the final datasets in order to debug. Participants cannot read detailed logs, but they can see whether their code reports errors. Last, in the Final Phase, participants' solutions are evaluated on 5 final datasets. The ranking in the Final Phase counts towards determining the winners.

Submitted code is trained and tested automatically, without any human intervention. Code submitted in the Feedback (resp. Final) Phase is run on all 5 feedback (resp. final) datasets in parallel on separate compute workers, each with its own time budget.

The identities of the datasets used for testing on the platform are concealed. The data are provided in raw form (no feature extraction) to encourage researchers to use deep learning methods that perform automatic feature learning, although this is NOT a requirement. All problems are node classification problems, and the tasks are constrained by the time budget.

Here is some pseudo-code of the evaluation protocol:

# For each dataset, our evaluation program calls the model constructor:
# load the dataset
dataset = Dataset(args.dataset_dir)

# get information about the dataset
time_budget = dataset.get_metadata().get("time_budget")
n_class = dataset.get_metadata().get("n_class")
schema = dataset.get_metadata().get("schema")

# import and initialize the participant's Model class
umodel = init_usermodel()

# initialize the timer
timer = _init_timer(time_budget)

# train the model and predict the labels of testing data
predictions = _train_predict(umodel, dataset, timer, n_class, schema)

Metrics

For both the Feedback Phase and the Final Phase, accuracy is evaluated on each dataset. Submissions are ranked by their average rank over all datasets of a phase.

Note that if a submission fails on a certain dataset, a default score (-1 in this challenge) will be recorded for that dataset on the leaderboard.
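The average-rank aggregation can be illustrated with a toy example (the team names and accuracy scores below are invented):

```python
# Accuracy scores (higher is better) of three hypothetical teams
# on two hypothetical datasets.
scores = {
    "team_a": [0.90, 0.70],
    "team_b": [0.85, 0.80],
    "team_c": [0.80, 0.75],
}

n_datasets = 2
avg_rank = {}
for team in scores:
    ranks = []
    for d in range(n_datasets):
        # rank = 1 + number of teams with a strictly better score
        better = sum(scores[t][d] > scores[team][d] for t in scores)
        ranks.append(1 + better)
    avg_rank[team] = sum(ranks) / n_datasets

print(avg_rank)  # team_b wins with the lowest average rank
```

Note that a team can win overall without being first on any single dataset, which rewards solutions that generalize across all tasks.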

API

The participants should implement a class Model with a method train_predict, described as follows:

class Model:
    """User model."""

    def __init__(self):
        # initialization
        pass

    def train_predict(self, data, time_budget, n_class, schema):
        """Train and predict.

        This method is called by the competition platform and is
        constrained by time_budget.

        Parameters
        ----------
        data: dict, stores all input data. Keys and values are:
            'fea_table': pandas.DataFrame, features of the training and testing nodes
            'edge_file': pandas.DataFrame, edge information of the graph; dtypes of all columns are int
            'train_indices': list of int, indices of all training nodes
            'test_indices': list of int, indices of all testing nodes
            'train_label': pandas.DataFrame, labels of the training nodes
            For details, please check the format of the data files.
        time_budget: float, the time budget of this dataset
        n_class: int, the number of classes in this task
        schema: deprecated

        Returns
        -------
        pred: list (or pandas.Series / 1D numpy.ndarray)
            Predictions for all testing samples, in the same order
            as test_indices.
        """
        return pred

It is the responsibility of the participants to make sure that the "train_predict" method does not exceed the time budget.
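To make the API concrete, here is a deliberately naive sketch of a Model: it ignores the graph entirely and predicts the majority training label for every test node, while showing where a time-budget guard would go. On the platform, data['train_label'] is a pandas.DataFrame; for brevity this sketch accepts any iterable of (node_index, class) pairs.

```python
import collections
import time

class Model:
    """Naive baseline: predict the most common training label for
    every test node. Illustrates the API shape and a time-budget
    guard; a real solution would train a graph model instead."""

    def __init__(self):
        pass

    def train_predict(self, data, time_budget, n_class, schema):
        # Keep a safety margin so we return before the budget expires.
        deadline = time.time() + 0.9 * time_budget

        # Count training labels and pick the majority class.
        labels = [c for _, c in data["train_label"]]
        majority = collections.Counter(labels).most_common(1)[0][0]

        # A real model would loop over training epochs here, checking
        # time.time() < deadline before starting each one.
        return [majority for _ in data["test_indices"]]
```

For example, with training labels [(0, 1), (1, 1), (2, 0)] and test_indices [3, 4], this returns [1, 1].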

FAQs

Can organizers compete in the challenge?

No, they can make entries that show on the leaderboard for test purposes and to stimulate participation, but they are excluded from winning prizes.

Are there prerequisites to enter the challenge?

No, except accepting the TERMS AND CONDITIONS.

Can I enter any time?

No, you can join the challenge until one week before the end of feedback phase. After that, we will require real personal identification (notified by organizers) to avoid duplicate accounts.

Where can I download the data?

You can download the "practice datasets" only from the Instructions page. The data on which your code is evaluated cannot be downloaded; it will be visible only to your code, on the Codalab platform.

How do I make submissions?

To make a valid challenge entry, click the blue button on the upper right side "Upload a Submission". This will ensure that you submit on all 5 datasets of the challenge simultaneously. You may also make a submission on a single dataset for debug purposes, but it will not count towards the final ranking.

Do you provide tips on how to get started?

We provide a Starting Kit in Python with step-by-step instructions in "README.md".

Are there prizes?

Yes.

    1st place   2nd place   3rd place
Prize   $15000   $10000   $5000

4th - 10th place: 500 USD each

Do I need to submit code to participate?

Yes, participation is by code submission.

When I submit code, do I surrender all rights to that code to the SPONSORS or ORGANIZERS?

No. You just grant to the ORGANIZERS a license to use your code for evaluation purposes during the challenge. You retain all other rights.

If I win, I must submit a fact sheet, do you have a template?

Yes, we will provide the fact sheet template in due time.

What is your CPU/GPU computational configuration?

We are running your submissions on Google Cloud workers, each of which has one NVIDIA Tesla P100 GPU (CUDA 10, cuDNN 7.5) and 4 vCPUs, with 30 GB of memory and 200 GB of disk.

The PARTICIPANTS will be informed if the computational resources increase. They will NOT decrease.

Can I pre-train a model on my local machine and submit it?

This is not explicitly forbidden, but it is discouraged. We prefer if all calculations are performed on the server. If you submit a pre-trained model, you will have to disclose it in the fact sheets. 

Will there be a final test round on separate datasets?

YES. The ranking of participants will be made from a final blind test made by evaluating a SINGLE SUBMISSION made on the final test submission site. The submission will be evaluated on five new test datasets in a completely "blind testing" manner. The final test ranking will determine the winners.

What is my time budget?

Each dataset has a predefined time budget associated in the meta information.

Does the time budget correspond to wall time or CPU/GPU time?

Wall time.

My submission seems stuck, how long will it run?

In principle, no more than its time budget. We kill the process if the time budget is exceeded. Submissions are queued and run on a first-come, first-served basis. We are using several identical servers. Contact us if your submission is stuck for more than 24 hours. Check the execution time on the leaderboard.

How many submissions can I make?

3 submissions per day. This may be subject to change, according to the number of participants. Please respect other users. It is forbidden to register under multiple user IDs to gain an advantage and make more submissions. Violators will be DISQUALIFIED FROM THE CONTEST.

What if my submission fails? Do my failed submissions count towards my number of submissions per day?

Failed submissions will be counted. Please contact us if you think the failure is due to the platform rather than to your code and we will try to resolve the problem promptly. If a submission fails, a default score (-1 in this challenge) will be marked in the leaderboard.

What happens if I exceed my time budget?

This should be avoided. In the case where a submission exceeds the time budget for a particular task (dataset), the submission handling process (ingestion program in particular) will be killed when time budget is used up and predictions made so far (with their corresponding timestamps) will be used for evaluation. In the other case where a submission exceeds the total compute time per day, all running tasks will be killed by CodaLab and the status will be marked 'Failed' and a default score will be produced. See previous question for more details.

The time budget is too small, can you increase it?

No, sorry, not for this challenge.

What metric are you using?

Please go to 'Get Started' -> 'Evaluation' -> 'Metrics' section.

Which version of Python are you using?

The code was tested under Python 3.6.8. We run Python 3.6.8 on the server, where the same libraries are available.

Can I use something else than Python code?

Yes. Any Linux executable can run on the system, provided that it fulfills our Python interface and you bundle all necessary libraries with your submission.

Do I have to use TensorFlow?

No. 

Which docker are you running on Codalab?

nehzux/kddcup2020:v2, see some instructions on dockerhub.

How do I test my code in the same environment that you are using before submitting?

When you submit code to Codalab, your code is executed inside a Docker container. This environment can be exactly reproduced on your local machine by downloading the corresponding docker image. The docker environment of the challenge contains common Machine Learning libraries, TensorFlow, and PyTorch (among other things).  

What is meant by "Leaderboard modifying disallowed"?

Your last submission is shown automatically on the leaderboard. You cannot choose which submission to select. If you want another submission than the last one you submitted to "count" and be displayed on the leaderboard, you need to re-submit it.

Can I register multiple times?

No. If you accidentally register multiple times or have multiple accounts from members of the same team, please notify the ORGANIZERS. Teams or solo PARTICIPANTS with multiple accounts will be disqualified.

How can I create a team?

We have disabled Codalab team registration. To join as a team, just share one account with your team. The team leader is responsible for making submissions and observing the results.

Can I join or leave a team?

It is up to you and the team leader to make arrangements. However, you cannot participate in multiple teams.

Can I give an arbitrary hard time to the ORGANIZERS?

ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". UPSUD, CHALEARN, IDF, AND/OR OTHER ORGANIZERS AND SPONSORS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE, AND THE WARRANTY OF NON-INFRINGEMENT OF ANY THIRD PARTY'S INTELLECTUAL PROPERTY RIGHTS. IN NO EVENT SHALL ISABELLE GUYON AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE FOR THE CHALLENGE. In case of dispute or possible exclusion/disqualification from the competition, the PARTICIPANTS agree not to take immediate legal action against the ORGANIZERS or SPONSORS. Decisions can be appealed by submitting a letter to the CHALEARN president, and disputes will be resolved by the CHALEARN board of directors. See contact information.

Where can I get additional help?

For questions of general interest, THE PARTICIPANTS should post their questions to the forum.

Other questions should be directed to the organizers.

Feedback Phase

Start: March 26, 2020, midnight

Description: Please make submissions by clicking the 'Submit' button below. You can then view the submission results of your algorithm on each dataset in the corresponding tab (Dataset 1, Dataset 2, etc.).

Datasets:

Label       Description   Start
Dataset 1   None          March 26, 2020, midnight
Dataset 2   None          March 26, 2020, midnight
Dataset 3   None          March 26, 2020, midnight
Dataset 4   None          March 26, 2020, midnight
Dataset 5   None          March 26, 2020, midnight

Competition Ends

June 9, 2020, 3:59 p.m.
