AI workflows bootcamp

Stop learning about AI.
Start building with it.

A cohort-based bootcamp for working professionals: 6 sessions over 3 weeks. Pick your track, build a real project, and ship it with a GitHub repo you can show. No slides. No theory dumps. You build from session one.

  • Small cohort of practicing engineers
  • Build with dbt, Terraform, SQL, Python, and real cloud workflows
  • No payment today

// 6 sessions. 3 weeks. Ship a real project.

Curriculum

What you'll build

Foundations

Map AI capabilities to the data engineering lifecycle. Learn prompt patterns for schema-aware queries, Terraform config generation, code review, and debugging -- with real tasks from your work.

Pipelines & Transformations

Build a complete ELT pipeline with AI assistance. Ingest data from an API, land it in a warehouse, and write dbt models to transform it. Ship staging models and tested SQL by end of session.
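As a rough sketch of the ingest step in Python (the endpoint URL, column names, and helper names here are illustrative, not the session's actual code):

```python
import json
from urllib.request import urlopen


def fetch_records(url: str) -> list[dict]:
    """Pull raw JSON records from an API endpoint (hypothetical endpoint)."""
    with urlopen(url) as resp:
        return json.load(resp)


def to_rows(records: list[dict], columns: list[str]) -> list[tuple]:
    """Flatten records into warehouse-ready rows, defaulting missing fields to None."""
    return [tuple(rec.get(col) for col in columns) for rec in records]
```

From there, the rows load into the warehouse and dbt staging models take over the transformation layer.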

Infrastructure as Code

Use AI to write and iterate on Terraform configs. Provision cloud resources, set up IAM roles, configure networking -- and learn to review AI-generated IaC for security and cost. Deploy real infrastructure to GCP.

Agents & Monitoring

Build AI-powered pipeline monitoring: automated schema drift detection, data quality checks, anomaly classification via Claude API, and alerting when something breaks. Test failure scenarios live.
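A minimal sketch of what the schema drift check can look like, assuming you can fetch an expected and an observed column-to-type mapping from your warehouse (function and type names here are illustrative):

```python
def detect_schema_drift(expected: dict[str, str], observed: dict[str, str]) -> dict:
    """Compare an expected column->type mapping against what the warehouse reports."""
    added = sorted(set(observed) - set(expected))
    removed = sorted(set(expected) - set(observed))
    changed = sorted(
        col for col in set(expected) & set(observed)
        if expected[col] != observed[col]
    )
    return {"added": added, "removed": removed, "type_changed": changed}
```

In the session, a non-empty result like this is what feeds the anomaly classification and alerting steps.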

Quality, Security & Reliability

Make your pipeline production-ready. AI-assisted testing, security auditing, error handling with retries, CI/CD via GitHub Actions, and a 12-item production readiness checklist.
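The retry piece is one of the smaller building blocks here; a minimal Python sketch with exponential backoff (names and defaults are illustrative) might look like:

```python
import time
from functools import wraps


def with_retries(max_attempts: int = 3, base_delay: float = 1.0):
    """Retry a flaky pipeline step, doubling the delay after each failure."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator
```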

Ship It -- Demo Day

Present your capstone: a production-grade data pipeline with version-controlled IaC, automated tests, monitoring, and documentation. Lightning talks, peer feedback, and a GitHub repo that demonstrates real engineering.

Who it's for

Pick your track

Every track follows the same 6-session structure. Your capstone project is in your domain, with your tools.

Data Engineering

Build and deploy data pipelines with AI assistance using Python, SQL, Terraform, dbt, and BigQuery.

Security

AI-powered vulnerability scanning, security audits, compliance automation, and threat analysis workflows.

Builders

Full-stack applications with AI. Build and deploy apps that use LLMs, agents, and AI APIs in production.

Marketing

Content generation, campaign automation, audience analysis, and AI-driven creative workflows at scale.

Capstone examples

What you could ship

Warehouse ELT Project

An API-to-warehouse pipeline with dbt models, tests, and documentation.

  • Useful if you want a practical project close to day-to-day analytics engineering work.
  • Shows that AI helped you move faster without skipping review discipline.

Terraform-Based Data Stack

A cloud setup with IAM, warehouse resources, and version-controlled IaC.

  • Good fit if you want stronger judgment around AI-generated infrastructure work.
  • Useful proof for platform-minded or senior data roles.

Monitoring and Reliability Layer

Schema checks, anomaly detection, alerting, and production-readiness workflows.

  • Strong fit if you care about maintainability, quality, and catching brittle pipeline behavior early.
  • The exact project can be shaped around your current stack and constraints.

Fit

Who this track fits

A strong fit if you want to

  • Use AI inside pipelines, warehouse work, and infrastructure instead of keeping it in side experiments.
  • Review AI-generated SQL and Terraform with better judgment, not blind trust.
  • Leave with a repo that shows production-style data engineering work.

Not the right fit if you want

  • A beginner-friendly introduction to data engineering from scratch.
  • A theory-heavy overview of AI trends without hands-on implementation.
  • A no-code class where you never touch SQL, Python, or IaC.

Who's in the room

The kind of cohort this is built for

Practicing Data Engineers

Data engineers, analytics engineers, and hands-on data leads already working with SQL, Python, dbt, warehouses, or cloud tooling.

  • People trying to use AI inside real platform work, not just side experiments.
  • Comfortable enough with code and infra to review what AI generates critically.

High-Signal Room

The goal is a small cohort where reliability, review judgment, and stack fit matter.

  • The waitlist helps filter for engineers with real workflow pain, not broad curiosity.
  • The first cohort is shaped around the stacks and project goals people submit.

Format

How it works

Live Sessions

Instructor-led, hands-on sessions. Not lectures. You build during class with real-time guidance and code review.

Async Challenges

Between sessions, work on challenges that build toward your capstone. Office hours available when you get stuck.

Community

A cohort of peers building alongside you. Shared workspace, peer review, and a network that lasts beyond the bootcamp.

Timeline

6 sessions over 3 weeks, twice per week. Each session is 90 minutes live plus async challenges. Pre-work setup before Session 1. Designed for people with day jobs.

Capstone

You ship a real project in your domain. GitHub repo, live demo, peer feedback. Something you can actually show.

What happens after you join

  • You get the first cohort window before the public launch announcement.
  • You get a tighter capstone brief so you can judge whether the room matches your level.
  • Your role and stack help shape the first Data Engineering cohort.

Join the waitlist

Early access

Cohort 1 launches when we hit 20 engineers. Join the waitlist to get cohort dates, curriculum updates, and early-bird access before enrollment opens publicly.

  • First access to cohort dates before public launch.
  • A tighter curriculum snapshot for the data engineering capstone.
  • Early-bird pricing and sponsored-spot updates when enrollment opens.

No payment required -- I'll email you when the cohort date is set.

Can't afford it? We offer a limited number of sponsored spots for each cohort. Talk to us →

Instructor

Who's teaching

Anmol Parimoo

I build data foundations and production AI workflows for operating teams -- warehouse modeling, dbt, Terraform, cloud setup, and the application layer around them. This track is based on how I do that work in client environments through MLDeep Systems.

You'll learn the parts that usually get skipped: how to review AI-generated SQL and IaC, how to catch brittle pipeline logic early, and how to ship data systems that are actually maintainable after the first demo.

LinkedIn

Want to teach a session? We're looking for practitioners. Get in touch →

FAQ

Common questions

What experience do I need?

You should be comfortable with your domain tools (a code editor, a spreadsheet, a marketing platform -- whatever you use daily). You don't need AI experience. We start from foundations and build up.

Do I need to know how to code?

Depends on your track. The Builders and Data Engineering tracks involve code. Security and Marketing tracks focus on AI-assisted workflows in your existing tools. All tracks involve some interaction with AI coding tools, but you don't need to be a developer.

What do I need to participate?

A laptop and an internet connection. We'll provide access to AI tools and walk through setup in Session 1. Specific tool requirements vary by track and will be shared before the cohort starts.

What if I miss a session?

Sessions are recorded. You can catch up async and still complete your capstone. To qualify for the completion reward, you need to attend at least 5 of 6 sessions live and present your capstone.

What if I miss this cohort?

Join the waitlist to be notified when the next cohort opens. We run small cohorts to keep sessions hands-on and high-signal.

Ready to build?

Join the list for the data engineering cohort and hear first when enrollment opens publicly.