ServerlessBase Blog

    Containers provide lightweight, isolated environments for running applications with consistent behavior across different systems.

    What are Containers? A Beginner's Introduction

    You've probably heard developers talk about containers, Docker, and how they've revolutionized application deployment. But what exactly are containers, and why should you care? If you've worked with virtual machines, you have a head start. If not, don't worry — we'll break it down.

    Containers are essentially isolated environments that package an application and everything it needs to run: code, runtime, system tools, system libraries, and settings. Unlike traditional virtual machines, containers don't include a full operating system. Instead, they share the host system's kernel, making them incredibly lightweight and fast.

    The Container vs. Virtual Machine Comparison

    To understand containers, it helps to compare them with virtual machines, which most people are already familiar with.

    Factor              Virtual Machine            Container
    Size                1-10 GB (full OS)          MBs to a few hundred MBs
    Startup Time        Minutes                    Milliseconds to seconds
    Resource Overhead   High (full OS)             Low (shared kernel)
    Isolation           Strong (hardware-level)    Good (namespaces/cgroups, shared kernel)
    Portability         Good (VM image)            Excellent (standard image format)
    Use Case            Full OS-level isolation    Application isolation

    Virtual machines run a complete guest operating system, which is why they are so large and slow to start. Containers, by contrast, typically run a single application process and share the host kernel. This shared kernel is the key to their efficiency.
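
    If you have Docker installed, you can see the shared kernel for yourself: a container reports the host's kernel version, because there is no guest kernel to boot.

```shell
# A container reports the host's kernel release: it has no kernel of its own.
docker run --rm alpine uname -r

# On a Linux host, this prints the same value.
uname -r
```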

    How Containers Work: The Technical Foundation

    Containers rely on three core Linux technologies: namespaces and cgroups provide the isolation, while union file systems provide efficient, layered storage:

    Namespaces

    Namespaces provide isolation at the system level. Each container gets its own view of the system:

    • PID namespace: Each container sees its own process list
    • Network namespace: Each container has its own network stack (IP addresses, ports, etc.)
    • Mount namespace: Each container has its own filesystem view
    • UTS namespace: Each container has its own hostname
    • User namespace: Each container has its own user and group IDs
    • IPC namespace: Each container has its own inter-process communication mechanisms
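
    On a Linux machine you can inspect these namespaces directly, no container runtime required: every process's namespace memberships are exposed under /proc.

```shell
# List the namespaces this shell belongs to (pid, net, mnt, uts, user, ipc, ...).
ls -l /proc/self/ns/

# Each namespace is identified by an inode number; two processes in the same
# namespace show the same number, a containerized process shows a different one.
readlink /proc/self/ns/pid
```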

    Cgroups

    Control groups (cgroups) limit and isolate system resources:

    • CPU usage limits
    • Memory limits
    • Disk I/O limits
    • Network bandwidth limits
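
    Docker exposes these cgroup controls as flags on docker run. For example, to cap a container's memory and CPU:

```shell
# --memory caps RAM (the kernel stops the process if it exceeds the limit);
# --cpus caps CPU time to the equivalent of 1.5 cores.
docker run --rm --memory=256m --cpus=1.5 alpine sh -c 'echo "running with limits"'
```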

    Union File Systems

    Union file systems (like OverlayFS) allow containers to share the same filesystem layers while maintaining isolation. This is one reason containers are so space-efficient: shared layers are referenced, not copied, so ten containers started from the same image reuse the same read-only files on disk.

    Container Images and Layers

    A container image is a read-only template that includes everything needed to run an application. Images are built from a Dockerfile, which is a text file containing instructions for building the image.

    # Example Dockerfile
    FROM node:18-alpine
     
    WORKDIR /app
     
    COPY package*.json ./
    RUN npm install
     
    COPY . .
     
    EXPOSE 3000
     
    CMD ["node", "server.js"]

    This Dockerfile does several things:

    1. FROM node:18-alpine: Starts with a base image containing Node.js 18 and Alpine Linux
    2. WORKDIR /app: Sets the working directory inside the container
    3. COPY package*.json ./: Copies package.json (and package-lock.json, if present) to the container
    4. RUN npm install: Installs dependencies
    5. COPY . .: Copies the application code
    6. EXPOSE 3000: Documents that the application listens on port 3000
    7. CMD ["node", "server.js"]: Specifies the command to run when the container starts

    Images are built in layers: each instruction in the Dockerfile creates a new layer, and Docker caches every layer it builds. When an instruction changes, Docker reuses the cached layers before it and rebuilds only that layer and the ones after it. This is why the example copies package*.json and runs npm install before copying the application code: editing your code invalidates only the final COPY layer, not the dependency install.
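
    You can see the layers of any image with docker history; each row corresponds to an instruction from the image's Dockerfile:

```shell
# Show an image's layers, newest first; the CREATED BY column maps each
# layer back to the Dockerfile instruction that produced it.
docker history node:18-alpine
```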

    Running Your First Container

    Let's walk through a practical example. Suppose you have a simple Node.js application in a directory called my-app:

    // my-app/server.js
    const http = require('http');
     
    const server = http.createServer((req, res) => {
      res.statusCode = 200;
      res.setHeader('Content-Type', 'text/plain');
      res.end('Hello from a container!\n');
    });
     
    server.listen(3000, () => {
      console.log('Server running on port 3000');
    });

    First, create a package.json file. This server uses only Node's built-in http module, so it needs no dependencies:

    {
      "name": "my-app",
      "version": "1.0.0",
      "main": "server.js"
    }

    Now create a Dockerfile in the same directory:

    FROM node:18-alpine
     
    WORKDIR /app
     
    COPY package*.json ./
    RUN npm install
     
    COPY . .
     
    EXPOSE 3000
     
    CMD ["node", "server.js"]

    Build the image:

    docker build -t my-app:1.0 .

    Run the container:

    docker run -p 3000:3000 my-app:1.0

    Now visit http://localhost:3000 in your browser. You should see "Hello from a container!".
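
    In practice you'll usually run containers in the background and manage them with a few everyday commands (the name my-app-container is an arbitrary choice):

```shell
# Run detached (-d) with a name so later commands can refer to it.
docker run -d --name my-app-container -p 3000:3000 my-app:1.0

docker ps                        # list running containers
docker logs my-app-container     # view the server's output
docker stop my-app-container     # stop it
docker rm my-app-container       # remove the stopped container
```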

    Container Registries

    When you build a container image, you typically want to store it somewhere so other people and machines can pull it. Container registries are to images what package registries like npm are to libraries: a place to store, version, and share them.

    Docker Hub is the most popular public registry. You can push your images to Docker Hub or use a private registry for internal images.

    # Login to Docker Hub
    docker login
     
    # Tag your image
    docker tag my-app:1.0 username/my-app:1.0
     
    # Push to Docker Hub
    docker push username/my-app:1.0
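
    On any other machine with Docker, the image can then be pulled and run; Docker downloads only the layers it doesn't already have locally:

```shell
# Fetch the image from Docker Hub and run it (username is a placeholder).
docker pull username/my-app:1.0
docker run -p 3000:3000 username/my-app:1.0
```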

    Container Orchestration

    Running a single container is useful, but what about running hundreds or thousands of containers? That's where container orchestration comes in.

    Kubernetes is the industry-standard orchestration platform. It handles:

    • Scaling: Automatically adding or removing containers based on demand
    • Deployment: Rolling updates and rollbacks
    • Self-healing: Restarting failed containers
    • Service discovery: Managing network communication between containers
    • Load balancing: Distributing traffic across containers
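
    As a taste of what this looks like, the kubectl CLI can create, scale, and expose a deployment in a few commands (the deployment name and image tag below carry over from the earlier example and are illustrative):

```shell
# Run three replicas of the image as a managed Deployment.
kubectl create deployment my-app --image=username/my-app:1.0 --replicas=3

# Scale on demand; Kubernetes starts or stops containers to match.
kubectl scale deployment my-app --replicas=5

# Put a load-balanced Service in front of the replicas on port 3000.
kubectl expose deployment my-app --port=3000 --type=LoadBalancer
```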

    Common Container Use Cases

    Containers are used in many scenarios:

    • Microservices: Each microservice runs in its own container
    • CI/CD: Building and testing applications in consistent environments
    • Development: Developers run the same environment locally as production
    • Testing: Running tests in isolated environments
    • Multi-tenancy: Isolating different applications or teams
    • Edge computing: Running applications on edge devices

    Container Security Considerations

    Containers introduce new security considerations:

    • Image vulnerabilities: Regularly scan images for known vulnerabilities
    • Least privilege: Run containers as non-root users
    • Network isolation: Use network policies to limit communication
    • Secrets management: Never hardcode secrets in images
    • Image scanning: Use tools like Trivy, Clair, or Snyk to scan images
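
    Two of these practices in concrete form (the UID and the Trivy invocation are illustrative; adapt them to your environment):

```shell
# Run as a non-root user with a read-only root filesystem.
docker run --rm --user 1000:1000 --read-only alpine id

# Scan an image for known vulnerabilities with Trivy, if installed.
trivy image my-app:1.0
```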

    Summary

    Containers provide a lightweight, portable way to run applications. They share the host kernel but maintain isolation through namespaces and cgroups. Container images are built from Dockerfiles and stored in registries. For managing large numbers of containers, orchestration platforms like Kubernetes are essential.

    The next step is to try building and running some containers yourself. Start with a simple application, then explore more advanced topics like multi-stage builds, Docker Compose for multi-container applications, and Kubernetes for orchestration.
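
    Multi-stage builds, mentioned above, are a good next step: you build in one stage and copy only the results into a slim final image. A minimal sketch for the Node.js example (the stage name and layout are illustrative):

```dockerfile
# Build stage: install dependencies with everything the build needs.
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Final stage: start from a clean base and copy in only the app.
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["node", "server.js"]
```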

    Platforms like ServerlessBase simplify container deployment by handling the infrastructure details, allowing you to focus on your application code.
