Software engineers have a lot of complexity to work through every day, ranging from solution design to connectivity and operational issues. Much of my career as an architect has been focused on simplifying things for the developers I work with. As the saying goes, “Simple is sexy.”
The last few years have seen the growth of more complex and diverse technologies that have also increased developer productivity. This has been amazing, as I've found myself doing things I would never have imagined at the start of my career; however, it isn't free. We all have to put in the effort to understand the basics.
Thinking back to my first job out of college, I remember spending untold hours creating, maintaining, and tuning the build infrastructure for the first project I was assigned. At the time, there were no better options. While some automation was possible via batch scripts and interfaces tested with mocks for external systems, each solution had significant limitations. Thankfully, as containers have become more prevalent, more limitations are being removed, and the gaps between development and production environments are shrinking.
Recently, I’ve been thrilled to automate and simplify our current development workflows down to single-click actions.
- One click to run the entire product locally on our desktops.
- One click to deploy the product to Test or Production stacks.
- One click to verify and monitor any stack, local or remote.
What follows will guide you through the journey we’ve taken to get here, including general recommendations. Parts 2 through 7 will provide practical steps to implement similar improvements for your organization.
- Part 2: Containers for Mundane Tasks
- Part 3: Docker Layers for Optimized Builds
- Part 4: Cloud Familiar Architecture with Docker Compose
- Part 5: GitHub Actions and Docker Containers
- Part 6: Scratching the Surface of Kubernetes
- Part 7: Augmenting Kubernetes
Container-related technologies, including Docker, containerd, and most importantly, Kubernetes, have advanced significantly over the past decade. We could view these advancements in a few different ways. Some will see this as, “Oh, we can run our applications in a container rather than waiting for a VM.” That’s a critical use case, but it doesn’t end there.
A couple of years ago, I needed to scan several hundred pages of paper documentation. Previously, I would have used a macOS system to do this because the Preview app could easily scan and generate a PDF file. I couldn't do so in this particular situation and was limited to a Windows desktop. Thankfully, it had WSL (Windows Subsystem for Linux) and Docker Desktop. Within a few minutes, I had shelled into an Ubuntu container, installed poppler and libtiff, and run tiff2pdf and pdfunite. That required no additional software on the Windows host and no uploading to sketchy websites, and I could complete the effort in very little time.
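The whole workflow amounted to a handful of commands. A sketch of what that looks like (the directory names are illustrative, and on Ubuntu tiff2pdf ships in the libtiff-tools package and pdfunite in poppler-utils):

```shell
# Start a disposable Ubuntu container with the scans mounted from the host
docker run --rm -it -v "$PWD/scans:/scans" ubuntu:22.04 bash

# Inside the container: install the TIFF and PDF tools
apt-get update && apt-get install -y libtiff-tools poppler-utils

# Convert each scanned TIFF to a PDF, then merge them into one document
cd /scans
for f in *.tiff; do tiff2pdf -o "${f%.tiff}.pdf" "$f"; done
pdfunite *.pdf combined.pdf
```

When the container exits, it is removed (`--rm`), and only the generated PDFs remain in the mounted host directory.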
We can apply the same guidance to software solutions as well. The current solution I’m working on utilizes several infrastructural elements, including load balancers, DNS, databases, etc. These elements can all be run within containers using Docker Compose locally.
- Need a load balancer? Use NGINX or HAProxy.
- Need a local DNS Server? Use BIND 9.
- Need a database? Spin up a container and even pre-populate it with sample data.
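The pieces above can be stitched together in a single Compose file. A minimal sketch, with hypothetical service names and files (`nginx.conf`, `seed.sql`) standing in for your own:

```yaml
# docker-compose.yml -- illustrative services for local infrastructure
services:
  lb:                       # Load balancer (NGINX)
    image: nginx:1.25
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  dns:                      # Local DNS server (BIND 9)
    image: ubuntu/bind9
    ports:
      - "5353:53/udp"
  db:                       # Database, pre-populated with sample data
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - ./seed.sql:/docker-entrypoint-initdb.d/seed.sql:ro
```

A single `docker compose up` brings the whole set online, and `docker compose down` tears it away just as quickly.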
These are examples of how containers provide a wealth of opportunities to separate complexity into concise, manageable, and secure components that can be stitched together to create advanced solutions rapidly.
- Install Docker Desktop and use it constantly.
- Use containers for mundane tasks – the more you use them, the more advantages you’ll find.
But It Works on My Machine
A regular refrain I've heard (and said in years past) is the frustration of code working locally but not in Test (or worse, Production). Generally, I've seen this caused by the many elements that software engineers (developers, testers, etc.) must regularly keep in mind and maintain context for. These range from operating systems to data warehouses, and from authentication to operational analyses.
By standardizing on a containerized development model across all environments (Development, Test, Production, etc.), we can significantly improve development with a foundation of consistency. Since the same Dockerfile is used for development and production, it’s built with the same software, protections, and configurations.
It is essential to remember that some configurations, software, and data should not be deployed in Production environments. Docker Image Layers are one option that can be useful to address these deployment considerations—for example, using a dev layer for local development with a more concise final layer for deployment.
- Use Docker layers to build your container images incrementally. This optimizes your build time and creates separate layers that satisfy security requirements and best practices.
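One way to realize the dev-versus-release layering described above is a multi-stage Dockerfile. A sketch in Go (one of the stacks mentioned later in this article), where the paths and the delve debugger are illustrative assumptions:

```dockerfile
# Shared base: dependencies resolved once and cached
FROM golang:1.19 AS base
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download

# Dev stage: extra tooling for local development only
FROM base AS dev
RUN go install github.com/go-delve/delve/cmd/dlv@latest
COPY . .
CMD ["go", "run", "./cmd/server"]

# Build stage: compile a static binary
FROM base AS build
COPY . .
RUN CGO_ENABLED=0 go build -o /server ./cmd/server

# Concise final stage for deployment: binary only, no toolchain
FROM gcr.io/distroless/static AS release
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```

Locally you build with `--target dev`; CI builds the default `release` stage, so development conveniences never reach Production.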
Let’s say that you’ve developed and verified a new piece of your solution – how do you get it working with what your team is developing? The beauty of most modern CI/CD platforms is that they are generally based on containers themselves. Platforms like CircleCI, Drone, and even GitHub Actions all have native container support and can easily and rapidly build container images. In my current role, we have a GitHub Action that acts on each PR or Merge and, within 5 minutes, has:
- Executed and verified all unit tests
- Built all of the release containers
- Pushed the containers to a common repository
- Deployed the containers to a live Kubernetes system
- Verified the health of the live system
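The shape of such a pipeline is straightforward to express in a GitHub Actions workflow. A hedged sketch (the image name, registry, and deployment names are placeholders, not our actual configuration):

```yaml
# .github/workflows/ci.yml -- illustrative pipeline
name: ci
on:
  pull_request:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run unit tests
        run: docker build --target test .
      - name: Build release image
        run: docker build -t ghcr.io/example/app:${{ github.sha }} .
      - name: Push to registry
        run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker push ghcr.io/example/app:${{ github.sha }}
      - name: Deploy to Kubernetes
        run: kubectl set image deployment/app app=ghcr.io/example/app:${{ github.sha }}
      - name: Verify health
        run: kubectl rollout status deployment/app --timeout=120s
```

Each step mirrors one of the bullets above, and because the runners themselves execute containers, the build behaves the same as it does on a developer's desktop.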
Looking back at the early days of my career, this is a stark improvement. We no longer need to wait hours for a build, days for someone to manually deploy the software, or more hours to view and analyze a system’s health. These actions have been automated down to seconds and minutes, with the data always available.
- Try out one or more of the common CI platforms.
- GitHub is a great place to start since you can utilize it for free – even GitHub Actions up to 2,000 minutes per month (as of August 2022) are at no cost.
- Learn Git and follow good version control practices.
Docker Compose vs. Kubernetes
Kubernetes, as a container orchestration platform, is powerful and hugely successful. Thankfully, it is easy to mimic a good portion of its functionality, from an application's perspective, without the heavy lifting. Docker Compose can run numerous container images, resolve DNS queries for services, handle environment configuration, mimic Kubernetes Jobs, and, most importantly, enable development within running containers. Some development environments are more feature-rich, but I've succeeded with multiple languages and technology stacks using containers, including Go, Python, Rust, and C#/.NET. The beauty of this approach is that the same Dockerfile-based container can satisfy numerous use cases, including:
- Application Development
- Unit Testing & Code Coverage Analysis
- Integration, System, and Performance Testing
- Production Deployment
Some aspects are more challenging to replicate outside of Kubernetes. As indicated above, we rely on infrastructure elements that, locally, need to run in containers instead of being provided by the platform (load balancers are a good example here). By approaching these elements from a logical rather than physical perspective, I've found that we can easily implement a corresponding component in Docker Compose.
Using Docker Compose, it’s easy to utilize NGINX (or HAProxy) as a load balancer to route HTTP(2) requests to downstream servers. For example, when running live in AWS with Kubernetes (EKS), instead of running NGINX/HAProxy directly, it’s better to delegate that responsibility to the platform. That can be accomplished with the AWS Load Balancer Controller, which can be easily integrated into AWS EKS (Elastic Kubernetes Service). The AWS Load Balancers can provide numerous additional functionalities, from monitoring to WAF protections. For 99% of development activities, those elements can be deferred to system and performance testing efforts.
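In EKS, that delegation typically takes the form of an Ingress resource annotated for the AWS Load Balancer Controller; the controller then provisions and manages the AWS load balancer for you. A minimal sketch, with hypothetical resource and service names:

```yaml
# Ingress delegating load balancing to the AWS Load Balancer Controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```

Locally, the NGINX or HAProxy container in Docker Compose plays the same logical role, so the application code never needs to know the difference.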
- Gain familiarity with the specific Kubernetes platform you’ll be using and understand what logical elements you can delegate to it or the tooling around it.
Kubernetes Platforms (EKS, GKE, AKS)
The platform and ecosystems that have grown with Kubernetes over the past few years can be daunting, especially given their numerous functions and capabilities. What I find helpful is to always focus on the basics and advance incrementally from there.
With that approach in mind – what is Kubernetes? At its heart, it’s just a tool for running containers.
- Yes, there are multiple mechanisms for controlling access to Kubernetes (i.e., RBAC).
- Yes, there are multiple solutions for monitoring Kubernetes (Prometheus, Grafana, etc.).
- Yes, there are multiple tools for deploying to Kubernetes (Helm, Terraform, etc.).
But at the end of the day – the most critical element is the container. Even if you use sidecars or init containers, they’re just containers.
When running solutions in Production, DevOps engineers will always need to get into the minute details and take advantage of Kubernetes' many features, but getting started should always begin with the basics.
- Approach Kubernetes in deployment phases, spending sufficient time with each layer, for example:
- Configure and manage software Deployments and run Jobs
- Configure and manage networking with Services, Ingress controllers, and Load balancers
- Protect Kubernetes with Access Control
- Protect and manage the underlying Nodes
- Protect and manage Persistent Volumes
- Monitor each of the layers above
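The first phase above needs nothing more than a Deployment and a Service. A minimal sketch to start from (the names, image, and ports are placeholders):

```yaml
# Phase one: run containers via a Deployment, expose them via a Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 8080
```

Once this layer is comfortable, the later phases (Ingress, RBAC, nodes, volumes, monitoring) each build on it incrementally.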
In Part 2, we’ll delve into multiple mundane uses of containers to show the benefits and ease of use in getting started.
Aembit is the Identity Platform that lets DevOps and Security manage, enforce, and audit access between federated workloads. To learn more or schedule a demo, visit our website.