
Docker Deep Dive with Generative AI

Michael Antonio Tomaylla
From technical complexity to simplicity that creates value
Cover — Docker and Generative AI

Introduction: why Docker and why now
#

Let’s be honest: if you’ve ever heard “it works on my machine,” you deserve applause… and a container. Still with me? Good. Today we’re taking a practical journey through Docker (yes, the one that saves the relationship between devs and ops), and we’ll also see how generative AI is entering its ecosystem. Spoiler: it’s not magic, it’s engineering (and a bit of console charm).

If you’re looking for portability, reproducibility, and fewer “it works on my laptop” moments in retrospectives, Docker is your best ally. Containerization packages your app with everything it needs; you run it on any host with a Docker engine and voilà: fewer surprises in production.

Now, with generative AI integrated into the Docker ecosystem, you don’t just automate builds and scans: you get assistants that suggest optimizations, help audit dependencies, and can run local models to reduce latency. We’re in the era where your container can even consult a model before responding.

Useful fact: in CNCF’s annual surveys, over 90% of respondents report using containers in production. It’s not a fad; it’s the base layer of many modern architectures.

Fundamentals: what is containerization and why you should care
#

Containerization = packaging your application + runtime + libraries + configuration into an immutable image. That image runs as a container: an isolated process with its own file system and network.

  • Consistent environment between dev/test/prod
  • Scalability through replicas
  • Minimal images = less attack surface
  • Service isolation
  • Portability: “if it runs in Docker, it runs anywhere”
  • Fast startup: less time watching logs, more time celebrating

My personal rule: if I do a manual task more than 3 times, I dockerize it. Like the DRY principle but for infrastructure: Don’t Repeat Yourself… deployments.

Tools and practical workflow
#

Docker Init — quick start to dockerize a project
#

What it does: docker init creates initial files to dockerize your project: Dockerfile, compose.yaml, .dockerignore and README.Docker.md. It’s deterministic and applies best practices by default.

Basic workflow:

  1. Go to the project root.
  2. Run: docker init
  3. Answer questions about platform, version, port, and command.

Real case: in a Java/Spring Boot project, docker init leaves an optimized Dockerfile (multi-stage, JAR copy, proper JVM configuration) and a basic compose for development with database. It makes your job easier without taking away control.
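
For reference, here is a minimal sketch of the kind of multi-stage Dockerfile docker init tends to generate for a Maven-based Spring Boot project (base images, paths, and versions are illustrative assumptions, not docker init’s literal output):

# Build stage: compile the JAR with the Maven wrapper
FROM eclipse-temurin:17-jdk AS build
WORKDIR /app
COPY . .
RUN ./mvnw package -DskipTests

# Runtime stage: slim JRE image containing only the built artifact
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]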

Docker Bake — multi-image and multi-platform builds
#

What it’s for: building multiple images with a single command using Buildx, with multi-platform support (linux/amd64, linux/arm64).

Command:

docker buildx bake

Example docker-bake.hcl:

group "default" {
  targets = ["api", "worker"]
}

target "api" {
  context = "./api"
  dockerfile = "Dockerfile"
  platforms = ["linux/amd64", "linux/arm64"]
}

target "worker" {
  context = "./worker"
  dockerfile = "Dockerfile"
  platforms = ["linux/amd64"]
}

Advantages: parallel builds, ARM support (hello Raspberry Pi and ARM cloud nodes) and progress visualization in Docker Desktop.

Practical tip: add ARM support if you plan to run on ARM instances or edge. And test various runtime versions: sometimes a minor patch changes everything.
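
To make that concrete, a few common invocations (these are standard Buildx bake options; the target names come from the docker-bake.hcl above):

docker buildx bake api                              # build only the "api" target
docker buildx bake --push                           # build the default group and push the images
docker buildx bake --set api.platform=linux/arm64   # override a target attribute ad hoc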

SBOM — your software inventory (yes, it matters)
#

Supply chain security isn’t just for compliance officers. It’s for anyone who wants to know exactly what libraries and versions are in the image.

Commands:

docker sbom <image:tag>
docker sbom --format spdx-json <image:tag> > sbom.spdx.json

Recommendation: generate SBOMs in your CI/CD pipeline and save them as artifacts. When the auditor knocks (or the SOC), don’t improvise.
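
As an illustration, a minimal pair of CI steps (assuming GitHub Actions; image names and tags are placeholders to adapt):

# Generate the SBOM and keep it as a build artifact
- name: Generate SBOM
  run: docker sbom --format spdx-json myapp:${{ github.sha }} > sbom.spdx.json
- name: Upload SBOM
  uses: actions/upload-artifact@v4
  with:
    name: sbom
    path: sbom.spdx.json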

Docker Scout — security scanning and recommendations
#

What it is: Docker Scout analyzes images for CVEs, misconfigurations, and offers actionable recommendations.

Example command:

docker scout cves <image:tag>

What you get: list of vulnerable packages and their severity, mitigation recommendations, and visual integration in Docker Desktop.

CI usage: run Scout and block deployments if critical CVEs appear. Like a bouncer at the club door: if you don’t meet security requirements, you don’t get into production.
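
A sketch of that gate (the --exit-code and --only-severity flags exist in current Docker Scout releases; verify them against your installed version):

# Fail the pipeline if critical or high CVEs are found
docker scout cves --exit-code --only-severity critical,high <image:tag>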

Docker Compose — local orchestration without complications
#

Compose allows you to define multi-container stacks with YAML. Ideal for reproducing development environments.

Common commands:

  • docker compose up — Start up
  • docker compose up --build — Rebuild and start up

Simple example (API + Postgres):

services:
  api:
    build: ./api
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

Advantage: quick onboarding for new devs. One command and you have everything running. Magic? No, compose.yaml.
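
One caveat worth knowing: plain depends_on only orders container startup; it does not wait for Postgres to actually accept connections. A common fix, sketched here on the example above, is a healthcheck plus the service_healthy condition:

services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    depends_on:
      db:
        condition: service_healthy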

Generative AI in Docker: Gordon, MCP and Docker Model Runner (DMR)
#

Docker is also integrating LLMs sensibly: improving developer experience, helping with reviews, and automating tasks.

Ask Gordon
#

AI agent integrated into Docker Desktop/CLI (beta). You can ask things like “show me the containers” or “review this Dockerfile and suggest improvements”. Useful for quick wins: layer suggestions, caching, and size reduction.
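
From the terminal, the beta exposes a docker ai subcommand; an illustrative invocation (exact syntax may change while Gordon is in beta):

docker ai "review my Dockerfile and suggest size optimizations"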

MCP (Model Context Protocol) Servers
#

Docker can talk to external or proprietary MCP servers, which lets you integrate custom models and AI tools. Servers are declared in a Compose-like YAML file (e.g., gordon-mcp.yml) that tells Gordon which MCP servers to connect to.
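
A minimal example in that Compose-style format (mcp/time is one of the reference MCP server images on Docker Hub; swap in your own servers):

# gordon-mcp.yml: Gordon picks this up from the working directory
services:
  time:
    image: mcp/time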

Gateway and integration: MCP Gateway connects clients (IDE, Copilot) to a hub of MCP servers, centralizing models and access control. Ideal for companies managing internal models and wanting control over their use.

Practical cases: Ask Gordon suggests Dockerfile optimizations; MCP automates test generation, documentation, and security reviews. Yes, a copilot that doesn’t complain about coffee makes the day more bearable.

Docker Model Runner (DMR) — run models locally (beta)
#

What it is: DMR allows running generative AI models locally within the Docker ecosystem. Ideal for reducing latency and maintaining control over data.

Availability:

  • Linux: Docker CE + plugin
  • macOS: Docker Desktop 4.40+
  • Windows: Docker Desktop 4.41+

Interactive command:

docker model run ai/gemma3

This opens a conversational session in your terminal with the model. It’s like chatting, but without tabs or notifications.
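
Besides run, the docker model plugin ships the usual lifecycle subcommands, for example:

docker model pull ai/gemma3    # download the model from Docker Hub's ai/ namespace
docker model list              # show locally available models
docker model rm ai/gemma3      # remove it when you're done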

Compose integration: Compose can include a top-level models section, and each service lists the models it needs. Example (per the Compose specification, the model key holds the OCI artifact reference):

models:
  gemma3:
    model: ai/gemma3
    context_size: 2048

services:
  api:
    build: ./api
    models:
      - gemma3

This declares that your service depends on a local model; the runtime handles provisioning it where possible.

Portability and API: models executed by DMR expose OpenAI-compatible APIs, making it easy to integrate them into existing applications.
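
For example, with host TCP access enabled, DMR serves OpenAI-style endpoints; a hedged sketch (port 12434 is the documented default at the time of writing, so check your setup):

curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/gemma3",
        "messages": [{"role": "user", "content": "Summarize this Dockerfile in one line."}]
      }'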

Practical note: large models may need GPU or special architectures. If your laptop complains, move it to a GPU node.

Best practices and recommendations
#

  • Start with docker init for a solid foundation.
  • Generate SBOMs in your CI/CD pipeline and store them as artifacts.
  • Integrate Docker Scout to block deployments with critical CVEs.
  • Use docker buildx bake for multi-platform builds.
  • Compose remains your best friend for reproducible local development.
  • Consider DMR if you need local models for latency or privacy.
  • Version and publish images with clear tags; don’t use “latest” as an existential excuse.

Honest opinion: AI is useful, but it doesn’t replace good practices. An assistant that suggests optimizations is no substitute for human review of critical code.

Conclusion and next steps
#

Docker remains the backbone for deploying modern applications. With docker init, bake, SBOMs, Scout, Compose, and new AI integrations (Gordon, MCP, DMR), your workflow becomes more secure, fast, and automatable. If you want your CI to have fewer surprises on Monday morning, these steps help.

Practical next steps:

  1. Try docker init on an existing project.
  2. Add SBOM to your pipeline and run docker scout.
  3. Experiment with docker buildx bake for multi-architecture builds.
  4. If you work with ML, check out Docker Model Runner and the models section of Compose.

Readings and resources
#

Want ready-to-copy/paste examples? I can share:

  • Optimized Dockerfiles (multi-stage and cache-friendly)
  • A complete docker-bake.hcl for a monorepo
  • A compose.yaml with models ready to try DMR

Thanks for making it this far. If you’re still reading, you’re officially my favorite. And if your team needs help dockerizing their monolith or setting up a pipeline with SBOMs and automatic scans… pass me the coffee and let’s make art.