    March 20, 2026 · AICodingGym Team

    New Challenge Types: Code Review and ML Competitions


    When we launched AI Coding Gym, we started with LeetCode prompts and Human-SWE-Bench bug fixes — two solid ways to practice AI-assisted coding. But real engineering work involves a lot more than writing code and fixing bugs.

    We've added two new challenge types: Code Review and ML Competitions.

    Code Review

    Most coding platforms train you to write code. Almost none train you to review it.

    Code review is one of the most impactful daily activities for a professional engineer. It requires reading unfamiliar code, understanding intent, spotting subtle bugs, and making judgment calls about design tradeoffs. These are the same skills you need when evaluating AI-generated code — and they're becoming more important, not less.

    Our code review challenges present you with real pull request diffs from open-source repositories. Your job is to identify issues — bugs, performance problems, security concerns — and write review comments explaining what's wrong and why. Each challenge has a set of expected review points, and you're scored on how many you catch.
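The scoring described above — matching your review comments against a set of expected review points — can be sketched roughly as follows. This is an illustrative sketch only: the `score_review` function, the keyword-overlap matching, and the `threshold` parameter are assumptions for demonstration, not the platform's actual algorithm.

```python
# Hypothetical sketch: score a set of review comments against the
# challenge's expected review points. Matching here is naive keyword
# overlap, purely for illustration.

def score_review(comments, expected_points, threshold=0.5):
    """Return the fraction of expected points matched by any comment."""
    def overlap(comment, point):
        c = set(comment.lower().split())
        p = set(point.lower().split())
        return len(c & p) / len(p)

    caught = sum(
        1 for point in expected_points
        if any(overlap(c, point) >= threshold for c in comments)
    )
    return caught / len(expected_points)

comments = [
    "This loop reads the file on every iteration, causing a performance problem",
    "User input is passed to the SQL query without sanitization",
]
expected = [
    "performance problem in loop file read",
    "SQL query without input sanitization",
    "off-by-one error in pagination",
]
print(score_review(comments, expected))  # 2 of 3 points caught -> 0.666...
```

A real matcher would need semantic matching rather than word overlap, but the shape of the feedback is the same: you see which expected points you caught and which you missed.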

    The interesting part: using AI to help you review is encouraged. Can you prompt an AI to catch the same issues a senior engineer would? Can you distinguish between AI-flagged noise and real problems? That's the skill being trained.

    Code Review challenge walkthrough

    ML Competitions

    We've integrated MLE-Bench, a collection of Kaggle-style competitions covering image classification, tabular data, NLP, and more. Each challenge comes with a real dataset, a target metric (AUC, RMSE, LogLoss), and context about the original Kaggle competition.
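To make the metric-driven loop concrete, here is a minimal sketch of scoring predictions against held-out labels with two of the metrics mentioned above. The hand-rolled `rmse` and `log_loss` functions are illustrative assumptions; actual competitions use the evaluation metric from the original Kaggle task.

```python
# Illustrative sketch: each competition fixes a target metric, and a
# submission's predictions are scored against held-out labels.
import math

def rmse(y_true, y_pred):
    """Root mean squared error for regression targets (lower is better)."""
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    )

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary cross-entropy for probabilistic classifiers (lower is better)."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

print(rmse([3.0, 5.0], [2.0, 6.0]))   # 1.0
print(log_loss([1, 0], [0.9, 0.2]))   # ~0.164
```

Knowing what a metric rewards and punishes — LogLoss heavily penalizes confident wrong answers, for example — is exactly the kind of context you need when asking an AI to improve a model.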

    The approach is the same as everything on AI Coding Gym: you're meant to tackle these with AI. Use it for exploratory data analysis, feature engineering, model selection, hyperparameter tuning. The challenge is knowing what to ask for and how to iterate when results aren't good enough.

    Whether you're an experienced ML practitioner exploring how AI accelerates your workflow or a software engineer trying data science for the first time with AI as a guide, these challenges offer structured practice with real-world datasets.

    ML Competition challenge walkthrough

    Four ways to train

    AI Coding Gym now covers four distinct challenge types, each training a different facet of AI-assisted development:

    • Bug Fix — diagnose and fix real open-source issues
    • Code Review — review pull requests with AI as your co-pilot
    • ML Competitions — tackle data science with AI-powered analysis
    • LeetCode — master prompt-to-solution workflows

    Together, they cover a much broader range of the skills modern software engineering demands.

    Try the new challenges at aicodinggym.com/challenges.
