Stop learning about AI.
Start building with it.
A 6-session, cohort-based bootcamp for working professionals over 3 weeks. Pick your track, build a real project, ship it with a GitHub repo you can show. No slides. No theory dumps. You build from session one.
- Small cohort of practicing engineers
- Build with dbt, Terraform, SQL, Python, and real cloud workflows
- No payment today
6 sessions. 3 weeks. Ship a real project.
What you'll build
Foundations
Map AI capabilities to the data engineering lifecycle. Learn prompt patterns for schema-aware queries, Terraform config generation, code review, and debugging -- with real tasks from your work.
Pipelines & Transformations
Build a complete ELT pipeline with AI assistance. Ingest data from an API, land it in a warehouse, and write dbt models to transform it. Ship staging models and tested SQL by end of session.
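To give a flavor of the ingest step, here is a minimal Python sketch of normalizing raw API records into warehouse-ready rows. The field names (`id`, `created_at`, `amount`) are hypothetical placeholders, not the course's actual dataset:

```python
# Minimal sketch of an ingest step: flatten raw JSON records from an API
# into typed rows ready to load into a warehouse staging table.
# Field names here are illustrative, not from the course materials.
from datetime import datetime, timezone

def normalize_records(raw_records):
    """Flatten API records into warehouse-ready rows, skipping bad ones."""
    rows = []
    for rec in raw_records:
        try:
            rows.append({
                "id": int(rec["id"]),
                "created_at": datetime.fromisoformat(rec["created_at"]),
                "amount_usd": round(float(rec.get("amount", 0)), 2),
                "loaded_at": datetime.now(timezone.utc),
            })
        except (KeyError, ValueError, TypeError):
            continue  # in a real pipeline, route bad rows to a dead-letter table
    return rows

sample = [
    {"id": "1", "created_at": "2024-01-05T10:00:00", "amount": "19.99"},
    {"id": "bad", "created_at": "not-a-date"},  # malformed, gets dropped
]
rows = normalize_records(sample)
```

In the session itself this logic feeds dbt staging models; the point of the sketch is that AI can draft code like this quickly, but the type coercion and error-handling choices still need your review.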
Infrastructure as Code
Use AI to write and iterate on Terraform configs. Provision cloud resources, set up IAM roles, configure networking -- and learn to review AI-generated IaC for security and cost. Deploy real infrastructure to GCP.
Agents & Monitoring
Build AI-powered pipeline monitoring: automated schema drift detection, data quality checks, anomaly classification via Claude API, and alerting when something breaks. Test failure scenarios live.
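To make one piece of the monitoring session concrete, here is a minimal sketch of schema drift detection: comparing the columns a table actually has against the schema the pipeline expects. Column names and types are illustrative; the anomaly classification and alerting layers are out of scope for this sketch:

```python
# Minimal schema drift check: diff an observed table schema against an
# expected one. In a real pipeline the observed schema would come from the
# warehouse's INFORMATION_SCHEMA; the columns below are illustrative.

def detect_schema_drift(expected: dict, observed: dict) -> dict:
    """Return added, removed, and type-changed columns between two schemas."""
    added = {c: t for c, t in observed.items() if c not in expected}
    removed = {c: t for c, t in expected.items() if c not in observed}
    changed = {
        c: (expected[c], observed[c])
        for c in expected.keys() & observed.keys()
        if expected[c] != observed[c]
    }
    return {"added": added, "removed": removed, "changed": changed}

expected = {"id": "INT64", "created_at": "TIMESTAMP", "amount_usd": "FLOAT64"}
observed = {"id": "INT64", "created_at": "STRING", "amount_usd": "FLOAT64",
            "channel": "STRING"}
drift = detect_schema_drift(expected, observed)
# A non-empty "changed" or "removed" set is what would trigger an alert.
```

A check like this is the deterministic front end; classifying whether the drift is benign or breaking is where an LLM call comes in.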
Quality, Security & Reliability
Make your pipeline production-ready. AI-assisted testing, security auditing, error handling with retries, CI/CD via GitHub Actions, and a 12-item production-readiness checklist.
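One reliability pattern from this session, error handling with retries, can be sketched as follows (the attempt count and backoff values are arbitrary defaults, not course-prescribed settings):

```python
# Minimal retry wrapper with exponential backoff. Defaults are illustrative;
# production values depend on the step's cost and the upstream's rate limits.
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to alerting
            time.sleep(base_delay * (2 ** attempt))

# Example: a flaky step that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky)
```

Wrapping only truly transient failures (timeouts, rate limits) rather than every exception is exactly the kind of judgment call the session focuses on when reviewing AI-generated versions of this code.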
Ship It -- Demo Day
Present your capstone: a production-grade data pipeline with version-controlled IaC, automated tests, monitoring, and documentation. Lightning talks, peer feedback, and a GitHub repo that demonstrates real engineering.
Pick your track
Every track follows the same 6-session structure. Your capstone project is in your domain, with your tools.
Data Engineering
Build and deploy data pipelines with AI assistance using Python, SQL, Terraform, dbt, and BigQuery.
Security
AI-powered vulnerability scanning, security audits, compliance automation, and threat analysis workflows.
Builders
Full-stack applications with AI. Build and deploy apps that use LLMs, agents, and AI APIs in production.
Marketing
Content generation, campaign automation, audience analysis, and AI-driven creative workflows at scale.
What you could ship
Warehouse ELT Project
An API-to-warehouse pipeline with dbt models, tests, and documentation.
- Useful if you want a practical project close to day-to-day analytics engineering work.
- Shows that AI helped you move faster without skipping review discipline.
Terraform-Based Data Stack
A cloud setup with IAM, warehouse resources, and version-controlled IaC.
- Good fit if you want stronger judgment around AI-generated infrastructure work.
- Useful proof for platform-minded or senior data roles.
Monitoring and Reliability Layer
Schema checks, anomaly detection, alerting, and production-readiness workflows.
- Strong fit if you care about maintainability, quality, and catching brittle pipeline behavior early.
- The exact project can be shaped around your current stack and constraints.
Who this track fits
A strong fit if you want to
- Use AI inside pipelines, warehouse work, and infrastructure instead of keeping it in side experiments.
- Review AI-generated SQL and Terraform with better judgment, not blind trust.
- Leave with a repo that shows production-style data engineering work.
Not the right fit if you want
- A beginner-friendly introduction to data engineering from scratch.
- A theory-heavy overview of AI trends without hands-on implementation.
- A no-code class where you never touch SQL, Python, or IaC.
The kind of cohort this is built for
Practicing Data Engineers
Data engineers, analytics engineers, and hands-on data leads already working with SQL, Python, dbt, warehouses, or cloud tooling.
- People trying to use AI inside real platform work, not just side experiments.
- Comfortable enough with code and infra to review what AI generates critically.
High-Signal Room
The goal is a small cohort where reliability, review judgment, and stack fit matter.
- The waitlist helps filter for engineers with real workflow pain, not broad curiosity.
- The first cohort is shaped around the stacks and project goals people submit.
How it works
Live Sessions
Instructor-led, hands-on sessions. Not lectures. You build during class with real-time guidance and code review.
Async Challenges
Between sessions, work on challenges that build toward your capstone. Office hours available when you get stuck.
Community
A cohort of peers building alongside you. Shared workspace, peer review, and a network that lasts beyond the bootcamp.
Timeline
6 sessions over 3 weeks, twice per week. Each session is 90 minutes live plus async challenges. Pre-work setup before Session 1. Designed for people with day jobs.
Capstone
You ship a real project in your domain. GitHub repo, live demo, peer feedback. Something you can actually show.
What Happens After You Join
- You get the first cohort window before the public launch announcement.
- You get a tighter capstone brief so you can judge whether the room matches your level.
- Your role and stack help shape the first Data Engineering cohort.
Early access
Cohort 1 launches when we hit 20 engineers. Join the waitlist to get cohort dates, curriculum updates, and early-bird access before enrollment opens publicly.
- First access to cohort dates before public launch.
- A tighter curriculum snapshot for the data engineering capstone.
- Early-bird pricing and sponsored-spot updates when enrollment opens.
No payment required -- I'll email you when the cohort date is set.
Can't afford it? We offer a limited number of sponsored spots for each cohort. Talk to us →
Who's teaching
Anmol Parimoo
I build data foundations and production AI workflows for operating teams -- warehouse modeling, dbt, Terraform, cloud setup, and the application layer around them. This track is based on how I do that work in client environments through MLDeep Systems.
You'll learn the parts that usually get skipped: how to review AI-generated SQL and IaC, how to catch brittle pipeline logic early, and how to ship data systems that are actually maintainable after the first demo.
Want to teach a session? We're looking for practitioners. Get in touch →
Common questions
Ready to build?
Join the list for the data engineering cohort and hear first when enrollment opens publicly.