> booting cloudwalk_nimbus... 100%

> connecting to AI core... 100%

> loading missions... 100%

> initializing interface... 100%

CLOUDWALK

NIMBUS

> Choose your level:

1.1
Kaggle-Slaying Multi-Agent Team

Goal

Build a team of agents that autonomously participates in Kaggle competitions, aiming for a top-20% leaderboard finish.

Tasks

  • Collect data
  • Train models
  • Submit solutions via API
  • Monitor leaderboard
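
The four tasks above can be wired together as a simple agent loop. The sketch below is purely illustrative: the agent roles and the shared "blackboard" dict are assumptions, and the data-collection, submission, and leaderboard steps are stubbed out where the real system would call the official Kaggle CLI/API (e.g., `kaggle competitions submit`).

```python
# Hypothetical multi-agent pipeline skeleton; every external call is stubbed.

def collect_data(board):
    board["train_csv"] = "train.csv"        # real agent: download via kaggle CLI

def train_models(board):
    board["model"] = f"model trained on {board['train_csv']}"

def submit_solution(board):
    board["submission"] = "submission.csv"  # real agent: `kaggle competitions submit`

def monitor_leaderboard(board):
    board["rank_percentile"] = 0.18         # real agent: poll the leaderboard API

def run_pipeline():
    """Run each agent in turn against a shared blackboard."""
    board = {}
    for agent in (collect_data, train_models, submit_solution, monitor_leaderboard):
        agent(board)
    return board

board = run_pipeline()
```

A real version would loop (retrain and resubmit while the monitored percentile is above the 20% target) rather than run once.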


1.2
How Many Pools in São Paulo?

Goal

Estimate the number of swimming pools in São Paulo using ML + satellite images.

Tasks

  • Sample rooftop imagery from Google Maps or INPE
  • Train a pool detector (>0.65 mAP)
  • Extrapolate the citywide total statistically
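
The extrapolation step can be a straightforward survey-sampling estimate. A minimal sketch, assuming image tiles covering the city are sampled uniformly at random; the tile counts and tile totals below are fake, illustrative numbers.

```python
import math

def estimate_total(pools_per_tile, total_tiles, z=1.96):
    """Extrapolate a citywide pool count from a uniform random sample of tiles.

    pools_per_tile: detected pool counts for each sampled tile
    total_tiles:    number of tiles covering the whole city
    Returns (point_estimate, half_width) of an approximate 95% normal interval.
    """
    n = len(pools_per_tile)
    mean = sum(pools_per_tile) / n
    var = sum((x - mean) ** 2 for x in pools_per_tile) / (n - 1)
    se_total = total_tiles * math.sqrt(var / n)  # std. error of the scaled mean
    return total_tiles * mean, z * se_total

# Illustrative numbers only: 200 sampled tiles, 50,000 tiles citywide.
counts = [0, 1, 0, 2, 0, 0, 1, 3] * 25
est, hw = estimate_total(counts, 50_000)
```

Stratifying the sample by district (dense urban vs. suburban tiles) would tighten the interval and feeds directly into the district-wise comparison bonus.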

Bonus Features

  • Folium map with pool density
  • District-wise comparison

1.3
Voynich Manuscript Decoder Challenge

Goal

Use AI to decode the Voynich Manuscript, one of the most mysterious books in the world, written in an unknown script and language.

Tasks

  • Build a pipeline that ingests transcribed Voynich text (EVA or Takahashi transcription)
  • Use LLMs, embeddings, or custom models to find patterns, possible meanings, or linguistic structures
  • Try to match parts of the text with known languages, glyph frequencies, or hypothesized semantics
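
A common first step on the transcribed text is glyph-frequency analysis. A minimal sketch computing unigram entropy as a crude language-likeness signal; the sample string uses EVA-style tokens for illustration, not an actual manuscript passage.

```python
import math
from collections import Counter

def unigram_entropy(text):
    """Shannon entropy (bits/symbol) of a transcription's glyph distribution.

    Letter unigrams of natural languages typically land around 4-5 bits,
    so this gives a first, crude signal of language-like structure.
    """
    counts = Counter(text.replace(" ", ""))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# EVA-style toy sample for illustration only.
sample = "qokeedy qokedy shedy daiin daiin ol"
h = unigram_entropy(sample)
```

Extending the same counting approach to bigrams and word-length distributions lets you compare the transcription against known languages, as the matching task suggests.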

Requirements

  • Use AI reasoning to explore unknown language or construct hypotheses
  • Provide clear logs of your process
  • Explain why you believe your approach may uncover meaning

Bonus Features

  • Visual overlay of decoded terms on manuscript images
  • Model fine-tuned on similar ciphered texts
  • Timeline of symbol usage evolution across manuscript pages

1.4
AnyChain Transaction Assistant

Goal

Build a configurable AI assistant that explains and troubleshoots EVM transactions and Solidity smart contracts. The tool must be network-agnostic: it should work on any EVM network (e.g., Ethereum mainnet) and be re-targetable to another network (e.g., CloudWalk private) by changing configuration only (no code changes).

Tasks

  • Transaction explainer
    • Input a transaction hash and return a clear summary of what happened (calls, value transfers, emitted events).
  • Failure diagnostics
    • If the transaction failed/reverted, return a diagnosis, likely root causes, and actionable next steps.
  • Explorer integration (Blockscout-compatible)
    • Fetch tx/receipt/logs (and contract metadata when available) from a configurable Blockscout explorer API.
  • On-chain context via read calls
    • Use a configurable RPC endpoint to perform read-only calls (eth_call) to gather contract state needed for context.
  • Smart contract repo grounding
    • Use one or more configured smart contract repositories (GitHub) to explain contract/function behavior and provide relevant security notes.
  • Simple interface
    • Provide a minimal UI (web preferred) or CLI + local API server.
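
The failure-diagnostics task can start from the standard receipt fields. A minimal sketch, assuming the receipt JSON has already been fetched from the explorer or via `eth_getTransactionReceipt`; the diagnosis heuristics and next-step suggestions are illustrative, not exhaustive.

```python
def diagnose(receipt, revert_reason=None):
    """Classify a standard eth_getTransactionReceipt result.

    `revert_reason` would come from re-executing the tx via eth_call at the
    same block; here it is passed in directly to keep the sketch offline.
    """
    if receipt.get("status") == "0x1":
        return {"outcome": "success",
                "summary": f"{len(receipt.get('logs', []))} event(s) emitted"}
    diagnosis = {"outcome": "reverted",
                 "likely_cause": revert_reason or "unknown (no revert data)"}
    if revert_reason is None:
        diagnosis["next_steps"] = [
            "replay the tx with eth_call at the same block to recover revert data",
            "check required token approvals and balances",
        ]
    return diagnosis

result = diagnose({"status": "0x0", "logs": []})
```

Stating "unknown" plus concrete next steps when revert data is missing is exactly the graceful degradation the requirements call for.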

Requirements

  • Configuration-driven portability
    • Switching networks must be possible by editing config only.
    • Config must include:
      • explorer base URL (Blockscout-compatible)
      • RPC URL
      • one or more repo URLs
      • ABI/decoding strategy: prefer explorer ABI, fallback to repo artifacts/ABIs, otherwise degrade gracefully
  • Grounded answers
    • Include citations/links to sources used (explorer links/endpoints, repo paths/commits, docs).
  • Graceful degradation
    • If ABI/RPC/explorer data is missing/unavailable, clearly state uncertainty and what's needed to proceed.
  • Lightweight & deployable
    • Must run locally. Provide clear run instructions.
  • README
    • Include setup + configuration example + at least 3 sample conversations (or example queries/outputs).
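
The configuration-only portability requirement can be made concrete as per-network config entries plus a loader that validates the required keys, so retargeting really is an edit to data, not code. All names and URLs below are hypothetical placeholders.

```python
# Illustrative config shapes; keys mirror the requirements above.
NETWORKS = {
    "ethereum-mainnet": {
        "explorer_base_url": "https://eth.blockscout.example/api",
        "rpc_url": "https://mainnet.rpc.example",
        "contract_repos": ["https://github.com/example-org/mainnet-contracts"],
        "abi_strategy": ["explorer", "repo_artifacts", "degrade_gracefully"],
    },
    "cloudwalk-private": {
        "explorer_base_url": "https://explorer.cloudwalk.example/api",
        "rpc_url": "https://rpc.cloudwalk.example",
        "contract_repos": ["https://github.com/example-org/private-contracts"],
        "abi_strategy": ["explorer", "repo_artifacts", "degrade_gracefully"],
    },
}

REQUIRED_KEYS = {"explorer_base_url", "rpc_url", "contract_repos", "abi_strategy"}

def load_network(name):
    """Fetch a network config, failing fast if a required key is missing."""
    cfg = NETWORKS[name]
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config for {name} is missing: {missing}")
    return cfg
```

With this shape, "switching networks" is selecting a different top-level key (or pointing the loader at a different config file).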

Bonus Features

  • Structured triage flow (asks clarifying questions before concluding)
  • Modes (developer / support / auditor)
  • Multi-transaction analysis (trace related transactions)
  • Gas optimization suggestions
  • Security vulnerability detection based on known patterns

Submission

  • Choose the challenge that interests you the most and, once completed, send the full project to [email protected].
  • Did you find these challenges boring? Send us your new challenge proposal to the same email.

2.1
Generalist Behavior with Isaac GR00T N1.5

Goal

Enable the Unitree G1 humanoid to exhibit generalist, high-level behaviors by integrating it with NVIDIA Isaac GR00T, a vision-language-action foundation model for humanoids. The robot should understand multimodal inputs and generate coherent actions in simulation.

Tasks

  • Integrate the G1 URDF with NVIDIA Isaac GR00T.
  • Build a demo where natural-language or image-based inputs trigger corresponding robot actions.
  • Implement and document a fine-tuning pipeline for adapting GR00T to the G1 embodiment.
  • Show an end-to-end execution from multimodal command to simulated action.
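
In the finished demo, GR00T itself produces the mapping from multimodal input to action; a stub router like the one below (entirely hypothetical action names) can help wire up and test the demo plumbing before the model is in the loop.

```python
# Stand-in command router for early demo wiring. In the real pipeline the
# mapping from multimodal input to an action token is produced by GR00T,
# not by keyword matching.
ACTIONS = {"wave": "g1_wave", "walk": "g1_walk_forward", "pick": "g1_pick_object"}

def route_command(text):
    """Map a natural-language command to a simulated-action name."""
    for keyword, action in ACTIONS.items():
        if keyword in text.lower():
            return action
    return "g1_idle"  # unknown commands fall back to a safe idle behavior
```

Swapping the keyword lookup for a model call keeps the rest of the simulation-side plumbing unchanged.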

Requirements

  • Working Isaac Sim/Isaac Lab setup with the G1 URDF.
  • At least one multimodal input interpreted and executed.
  • Clear documentation of the fine-tuning approach.

Bonus Features

  • Multiple distinct commands successfully executed.
  • Evidence that fine-tuning pipeline is extensible (ablation tests, dataset curation plan).
  • Training checkpoints or partial fine-tuning included.

2.2
Reinforcement Learning with Game Engines

Goal

Build a custom simulated environment for humanoid locomotion or navigation using a game engine (Unity/Unreal), then train an RL agent (PPO, SAC, etc.) to achieve complex motor behaviors such as running, obstacle traversal, or expressive motions.

Tasks

  • Configure Unity ML-Agents (or equivalent) with the G1 humanoid.
  • Define a custom locomotion/navigation task with clear goal conditions.
  • Train an RL agent and evaluate learned behavior in simulation.
  • Show consistent autonomous task completion.
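
Defining the task "with clear goal conditions" largely comes down to the reward. Below is a minimal shaped-reward sketch for a running task; the weights and thresholds are illustrative starting points, not tuned values.

```python
def locomotion_reward(forward_vel, torso_height, torque_sq_sum,
                      target_vel=1.5, min_height=0.6):
    """Shaped reward for a running task: track a target forward velocity,
    stay upright, and penalize actuation effort."""
    vel_term = -abs(forward_vel - target_vel)                  # velocity tracking
    upright_term = 1.0 if torso_height > min_height else -5.0  # fall penalty
    effort_term = -1e-3 * torque_sq_sum                        # energy penalty
    return vel_term + upright_term + effort_term
```

In Unity ML-Agents this would live in the agent's reward logic, with the episode terminating on the fall condition; the same structure ports to any engine.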

Requirements

  • Functional G1 humanoid environment in Unity or another engine.
  • RL training pipeline using standard algorithms.
  • At least one behavior demonstrated successfully and consistently.

Bonus Features

  • Learning curves showing policy improvement.
  • Robustness testing with domain randomization or dynamic variations.
  • Support for multiple tasks or behaviors in the same environment.

2.3
Creative IL + RL Pipeline

Goal

Develop a hybrid Imitation Learning + Reinforcement Learning pipeline that enables the Unitree G1 humanoid robot to imitate diverse human motions (walking, jumping, dancing…) and creatively blend them. RL should refine these motions to ensure balance, robustness, and smooth transitions in simulation.

Tasks

  • Use mocap datasets (e.g., Unitree Human Motion Dataset) or teleoperation to collect demonstrations.
  • Train an imitation model (e.g., Behavior Cloning, GAIL) to replicate motions.
  • Extend training with RL (PPO, SAC, etc.) to improve stability and enable transitions.
  • Demonstrate G1 performing at least three distinct behaviors and transitions.
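
The demonstration → policy → refinement flow can be shown on a toy 1-D policy class: least-squares behavior cloning, followed by a finite-difference policy-improvement step standing in for the PPO/SAC stage. Everything below is a toy illustration of the pipeline's shape, not the actual training stack.

```python
def behavior_clone(states, actions):
    """Fit the 1-D linear policy a = w*s to demonstrations: the closed-form
    least-squares behavior-cloning solution for this toy policy class."""
    num = sum(s * a for s, a in zip(states, actions))
    den = sum(s * s for s in states)
    return num / den

def rl_refine(w, reward_fn, lr=0.1, steps=200, eps=1e-3):
    """Finite-difference policy improvement on top of the cloned weight,
    standing in for the RL fine-tuning stage described above."""
    for _ in range(steps):
        grad = (reward_fn(w + eps) - reward_fn(w - eps)) / (2 * eps)
        w += lr * grad
    return w

# Noise-free demonstrations following a = 2*s, for clarity.
states = [0.5, 1.0, 1.5, 2.0]
actions = [1.0, 2.0, 3.0, 4.0]
w_bc = behavior_clone(states, actions)  # exactly 2.0 for these demos
# A reward that prefers w = 2.5 (e.g., a slightly faster gait than the demos).
w_final = rl_refine(w_bc, lambda w: -(w - 2.5) ** 2)
```

The point of the toy: RL moves the policy away from pure imitation toward the task reward, which is precisely the "RL fine-tuning demonstrably improves performance" requirement, just at humanoid scale.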

Requirements

  • End-to-end pipeline from demonstration → policy → simulation.
  • Robot reproduces multiple distinct motion sequences.
  • RL fine-tuning demonstrably improves performance.

Submission: Provide source code, datasets used or references, checkpoints, README, and technical docs. Include videos, logs, and plots showing multi-skill behaviors and transitions.

Bonus Features

  • Comparison of pure imitation vs IL+RL performance.
  • Robust transitions under randomization (terrain, sensor noise).
  • Training/validation curves, plus qualitative evidence of novel or blended motions.

Submission

  • Provide code/configs, checkpoints, README, technical documentation, and experiment evidence (videos, plots, logs). Ensure the repository is reproducible end-to-end.
  • Choose the challenge that interests you most and, once completed, send the full project to [email protected].
