Frequently Asked Questions
Welcome to our FAQ section!
We've compiled answers to common questions about the Zindi competition. If you have a question that's not covered here, feel free to reach out to us for assistance.
The fastest way to get your support query resolved is to engage with our team directly; see "Who can I contact for support or questions?" below for the relevant channels.
What is the current Zindi MLOPs competition about?
Overview: This competition focuses on applying MLOps capabilities to develop a machine translation model optimized for practical deployment considerations, including throughput, latency, accuracy, and cost. The goal is to create a model capable of translating content from French into Dyula across various domains. Your focus will be on a unique use case: the AI Student Learning Assistant (AISLA), a free learning tool designed to assist students in conversing and learning in their native language.
How do I register for the Zindi competition?
Please refer to the Register section, which guides you through the steps of registering on both Zindi and Highwind.
When does registration for this competition close?
Registration for this competition closes on 19 August 2024 at 23:59 GMT.
When does the competition end?
The competition closes on 1 September at 23:59 GMT. Final submissions must be submitted to Zindi by 1 September at 23:58 GMT to be considered for scoring.
How will my solution be evaluated?
In this MLOps challenge, evaluation differs from typical Zindi competitions. Please refer to the Submit section for detailed guidance. Your solution will be assessed across eight specific areas outlined on the Zindi competition card: melio-mlops-competition
Where can I find the dataset for the competition?
The Dataset can be found either on the Melio MLOps Machine Translation Challenge page, specifically within the Data tab, or on the Highwind Marketplace. Please ensure you are logged in to Highwind.
What file formats are allowed when uploading a Dataset Asset onto Highwind?
The Dataset format needs to be CSV to avoid any issues when downloading the Dataset Asset.
What is the maximum number of submissions allowed?
Submissions are limited to a maximum of 50 attempts for testing purposes.
As this is an MLOps competition, we expect you to follow good experiment-tracking methodology. Your final submission determines your score, so make sure you save your best work and resubmit it as your final submission to put your best foot forward.
What distinguishes the private leaderboard from the public leaderboard?
Zindi hosts both public and private leaderboards for each competition. These leaderboards evaluate solutions based on the specified Evaluation Criteria, which include various metrics and weighting percentages. The public leaderboard utilizes a validation set, while the private leaderboard employs a test set.
What is the maximum number of people allowed in a group?
In this competition, participants are not allowed to form groups; each participant must compete individually, so group participation is not permitted.
Who can I contact for support or questions?
If your inquiry pertains to Highwind, please contact the Highwind team through the appropriate support channels via the Highwind Discord Server. If you need assistance related to Zindi, please reach out directly to the Zindi team via the Zindi discussion page or via email at zindi@zindi.africa.
Where can I log a Highwind issue or feature request?
To report an issue or make a feature request for Highwind, please use our GitHub Issue Tracker.
I did not receive a welcome email or password reset email. What should I do?
Ensure you have registered for the Zindi competition by following the registration instructions. When resetting your password, use your Zindi username (not your email address); this username is what links your deployment with your Zindi submission.
Why am I encountering issues logging into AWS with Docker on Windows?
When using Docker on Windows, AWS login credentials are stored in Windows Credential Manager, which has a character limit per credential. AWS login details frequently exceed this limit, requiring users to implement workarounds. Unfortunately, this has been an ongoing issue with Docker since 2020. To resolve this issue, you can explore the following resources for potential workarounds: Docker Login Issue On Windows When Deploying Model, Docker Login AWS Error and Best Practices When Working With Docker for Machine Learning.
I get an "unknown scoring failure" when I submit my solution onto Highwind, how can I resolve the issue?
Please see the following three potential causes for this error, along with their resolution options:

Incorrect folder structure
- Make sure your submission follows the correct .zip folder structure as specified in the competition guidelines.
- Detailed structure guidance can be found on the Zindi competition page under the Submission section.
- Do not submit the CSV file.

Missing image_name.txt
- Add the image_name.txt file to your submission.
- Check out the following page for instructions on how to add an image: Get the Image URI and view push commands.

Image not deployable
- Reason 1: Incorrect main.py structure. Ensure your main.py file accepts a --model-name argument correctly, without manual intervention (a sketch is given after this list). Follow the guidelines and examples provided in Highwind's deployment tutorials, and check discussions on Zindi for specific tips: Zindi discussions.
- Reason 2: Dockerfile ENTRYPOINT not set. Verify that your Dockerfile has the ENTRYPOINT correctly configured. Example configurations can be found in repositories like Highwind's GitHub examples.
- Reason 3: Container start failure due to resource constraints. Ensure your container is not failing to start because of insufficient resources, and review the submission limitations to meet the required specifications.
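For Reason 1, the sketch below shows one minimal way a main.py could accept --model-name without manual intervention. It is an illustration based on assumptions, not Highwind's reference implementation: the model loading and serving logic are placeholders, and the actual setup should follow Highwind's deployment tutorials.

```python
# Minimal sketch of a main.py that accepts --model-name without manual intervention.
# The serving logic is a placeholder (assumption); follow Highwind's deployment
# tutorials for the actual model loading and serving setup.
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(description="Serve the translation model")
    # The argument is passed on container start, so it must be parsed here
    # rather than hard-coded or requested interactively.
    parser.add_argument("--model-name", required=True,
                        help="Name the model is served under")
    args = parser.parse_args()

    # Placeholder: load and serve the model under args.model_name,
    # e.g. by starting your model server at this point.
    print(f"Starting model server for: {args.model_name}")


if __name__ == "__main__":
    main()
```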
The scoring takes too long and times out. What should I do?
Reduce the latency of your model. You can check the model's response time by sending a request to its KServe endpoint, which will show the latency; a rough local check is sketched below.
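As a rough local check, you can time a request against your running container's KServe v1 inference endpoint. The sketch below is assumption-based: the port (8080), model name (my-model), and request payload are placeholders and should match your own deployment.

```python
# Rough latency check against a locally running KServe container.
# Port, model name, and payload are assumptions; adjust them to your deployment.
import time

import requests

MODEL_NAME = "my-model"  # placeholder: the name your model is served under
URL = f"http://localhost:8080/v1/models/{MODEL_NAME}:predict"
payload = {"instances": [{"text": "Bonjour, comment allez-vous ?"}]}  # example French input

start = time.perf_counter()
response = requests.post(URL, json=payload, timeout=60)
latency = time.perf_counter() - start

print(f"Status: {response.status_code}, latency: {latency:.2f}s")
```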
How can I view my score without making a submission?
Please see the following resource videos: Highwind Deployment Walk-through and Model Grading and Submitting to Zindi
How can I self-assess the resource limitations?
Running your inference locally will use your machine's resources. You can use Docker Compose to set resource limits that simulate the evaluation limits; a sketch is given below. Check out the provided examples with Docker Compose files, which can help.
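As an illustration, the Docker Compose sketch below shows one way to cap CPU and memory when running your container locally. The service name, image tag, and the specific limit values are assumptions; use the limits stated in the competition's submission specifications.

```yaml
# docker-compose.yaml sketch for simulating evaluation resource limits locally.
# Service name, image tag, port, and limit values are placeholders; set them to
# match your own image and the limits specified in the competition guidelines.
services:
  translation-model:
    image: my-translation-model:latest  # placeholder image tag
    ports:
      - "8080:8080"
    deploy:
      resources:
        limits:
          cpus: "2.0"     # example CPU cap
          memory: 4096M   # example memory cap
```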
What format does my model need to be in?
Your model can be saved in any format, as long as you can call it within your Docker image. Refer to this example to learn how to package and serve your models effectively.