Containers are best known for their role in simplifying application development, providing a disposable, reusable unit to modularize delivery, and bringing consistency to virtually every development stage. They have demonstrated an ability to move DevOps forward by transforming the way development and infrastructure teams operate, and they have helped these teams move ever closer to continuous delivery. However, managing containers presents an entirely new challenge for most organizations. Containers, by their very nature, rely on shared resources. These range from operating systems and application files to hosting resources such as memory and CPU. When left unchecked, container use can lead to sprawl and resource drain. With hooks into so many different areas, there is a strong incentive to know precisely what these containers are doing, what resources they are consuming, and how they are utilizing the network.
Container adoption is spreading fast. Docker, a leader in container technology, has documented the rapid growth in the use of its product: “Docker adopters approximately quintuple the average number of running containers they have in production between their first and tenth month of usage.” Additionally, based on a sample of 10,000 companies and 185 million containers in real-world use, Docker reports that the average user runs approximately seven containers simultaneously on each host. This indicates that companies are finding tools like Docker to be a lightweight way to share compute resources, not just a way to provide a versioned runtime environment. In fact, 25% of companies run an average of 14+ containers simultaneously, and 15% of hosts monitored are running Docker.
7 Benefits of Visibility Tools
Most tools incorporate a highly visible ‘dashboard’ approach, providing dynamic views into the status and performance of resources in real time. These tools come in many different shapes, sizes, and purposes, but most collect traditional metrics such as CPU, memory, I/O, and network utilization. However, they also take things a step further by breaking down network traffic by image and container. Thus, in the case of a failure, engineers can immediately see which service is overloaded or causing other services to fail, and they can aggregate these service metrics across any number of hosts. Visibility tools clearly improve insight into container use, but what other benefits do they offer? Below are some key insights and capabilities that visibility tools may provide:
- Monitoring and examining container data to enable IT operations analytics
- Overseeing container performance to ensure containers are available and issues are fixed quickly, with minimal effort
- Delivering insight on container resource usage, cluster capacity, and the impact of increasing cluster use for a specific service
- Discovering running containers, services, and nodes
- Adapting instantly to container scaling and updates
- Logging container connections, workloads, and violations
- Gaining better service context and accelerating root-cause analysis by indexing, searching, and correlating container-based data with data from the entire technology stack
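As a concrete illustration of the first two capabilities, the sketch below parses per-container CPU and memory figures of the kind produced by a command such as `docker stats --no-stream` and flags overloaded containers. The sample data, field layout, and 80% CPU threshold are illustrative assumptions, not output from any particular tool or host.

```python
# Sketch: spotting an overloaded container from per-container metrics.
# SAMPLE mimics tab-separated "name / CPU% / memory usage" rows; the
# values are made up for illustration.

SAMPLE = """\
web-1\t12.5%\t256MiB / 2GiB
web-2\t87.3%\t512MiB / 2GiB
worker-1\t4.1%\t128MiB / 2GiB
"""

def parse_stats(text):
    """Parse tab-separated name / CPU% / memory-usage lines into dicts."""
    rows = []
    for line in text.strip().splitlines():
        name, cpu, mem = line.split("\t")
        rows.append({
            "name": name,
            "cpu_pct": float(cpu.rstrip("%")),
            "mem_used": mem.split(" / ")[0],
        })
    return rows

def overloaded(rows, cpu_threshold=80.0):
    """Return names of containers whose CPU share exceeds the threshold."""
    return [r["name"] for r in rows if r["cpu_pct"] > cpu_threshold]

rows = parse_stats(SAMPLE)
print(overloaded(rows))  # -> ['web-2']
```

A real visibility tool would pull the same figures continuously from the container runtime's stats API and correlate them across hosts, but the core logic of thresholding per-container metrics is the same.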
With the right mix of visibility tools, you can better understand container behavior, prevent resource drain, and embrace the benefits of virtualization, all while maintaining a more secure environment.
When properly managed, containers can increase the speed and ease of application deployment, making it easier to port application stacks across different types of environments. The benefits of working with containers are many, and by transitioning to their use, IT can see value such as faster continuous delivery, reduced operational costs, and consistency in code and application deployment. However, it’s important to be vigilant in monitoring container use to avoid sprawl and conflicts when accessing shared resources.
Next Steps: Learn more about containers by reading our tech brief, “A Checklist for Preparing for Containers.”