What is OpenShift?
Red Hat OpenShift is an enterprise Kubernetes platform that helps developers build applications. It offers automated installation, upgrades, and life cycle management throughout the container stack — the operating system, Kubernetes and cluster services, and applications — on any cloud. OpenShift gives organizations the ability to build, deploy, and scale applications faster both on-premises and in the cloud. It also protects your development infrastructure at scale with enterprise-grade security.
Self-managed Kubernetes installations and managed services — such as Amazon EKS, Azure Kubernetes Service, or Google Kubernetes Engine — make it possible for enterprises to select and implement best-fit functions. But OpenShift provides comprehensive multi-tenancy features, advanced security and monitoring, integrated storage, and CI/CD pipeline management right out of the box.
The result? For organizations looking to transform and modernize, OpenShift allows you to scale so you can grow your business through cloud-native development. OpenShift and Kubernetes can simplify access to underlying infrastructure and help manage the application lifecycle and development workflows.
Read on for a deeper look into Red Hat’s Kubernetes platform, what it can do for you, and the benefits of OpenShift automation.
OpenShift key features and benefits
OpenShift 3.0 is built on the most popular Linux container technology, Docker. Since its inception in 2013, Docker has become the de facto standard for developers due to its ability to make more efficient use of system resources, ship software faster, and minimize security issues.
While still supporting Docker-built containers, OpenShift 4 defaults to CRI-O, a lightweight implementation of the Kubernetes Container Runtime Interface (CRI) that runs Open Container Initiative (OCI)-compatible containers and can handle a greater number of container nodes at scale.
OpenShift also leverages Kubernetes, which orchestrates and scales the many containers running in the cloud. Built on these powerful technologies, OpenShift offers key benefits for software development and container deployment:
- Run cloud-native microservices at scale. Microservices architecture combines loosely-coupled functions to create high-performing applications. The small size of these services makes them highly testable and easy to maintain while simultaneously increasing flexibility in how apps are structured and deployed. OpenShift makes it possible to create and deploy cloud-native microservices at scale, either to enhance existing application performance or build entirely new apps from the ground up.
- Scale and manage infrastructure. The flexible, cloud-based nature of OpenShift gives companies complete control over the scale and management of their application infrastructure. By removing concerns around storage, security, and lifecycle management, businesses can instead focus on application development, support, and evolution.
- Access a wide ecosystem of partners with open-source and cloud-native technologies. OpenShift also contains a set of developer tools in the form of command-line utilities and IDE support that allow developers to write and deploy to production with increased velocity. Additionally, OpenShift includes a set of pre-created, easy-to-use templates that offer a simple click interface and can be easily customized to meet your needs. It also works with popular continuous integration tools such as Jenkins.
OpenShift, DevOps, and CI/CD
The emergence of DevOps and continuous integration (CI) and continuous delivery (CD) pipelines has fundamentally changed the nature of software design, development, and deployment. By continually evaluating applications across multiple operational or development vectors, organizations can avoid common pitfalls and reduce project timelines — research shows that high-performing DevOps teams have 46 times more frequent code deployments and 440 times faster lead time from commit to deploy.
OpenShift integrates easily with leading CI/CD platforms, AI-powered performance monitoring solutions, and user-demand analysis tools. By giving organizations better control of their Kubernetes environments, OpenShift can help them reach their full DevOps potential.
How OpenShift enhances Kubernetes
Kubernetes allows for the scheduling of containers on worker nodes and provides container orchestration, load balancing, distribution, and scalability. OpenShift takes these concepts and enhances them.
Security vulnerabilities can pose a critical threat to your business, but OpenShift allows updates and security patches to be deployed to an entire cluster with a single click. Because it’s based on Red Hat Enterprise Linux CoreOS (RHCOS), OpenShift can also update the underlying operating system the nodes are running on.
Along with security and developer convenience features, OpenShift and Kubernetes integration takes scalability even further. Because OpenShift manages both the underlying operating system and the Kubernetes control plane, the entire cloud infrastructure can be scaled up or down based on your needs. This fine-grained control allows OpenShift 4 to scale from 10 to 10,000 containers.
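As a minimal sketch of this scaling in practice, the `oc` CLI can grow the cluster’s worker pool and autoscale a workload. The machineset and deployment names below are hypothetical placeholders; they will differ in any real cluster.

```shell
# List the machinesets that back the cluster's worker nodes
oc get machinesets -n openshift-machine-api

# Scale the underlying infrastructure: add worker nodes
# ("my-cluster-worker-us-east-1a" is a hypothetical machineset name)
oc scale machineset my-cluster-worker-us-east-1a --replicas=5 -n openshift-machine-api

# Scale a workload: keep a hypothetical app between 2 and 10 pods,
# targeting 75% CPU utilization
oc autoscale deployment/my-app --min=2 --max=10 --cpu-percent=75
```

Both commands require cluster-admin access to a running OpenShift 4 cluster.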
The latest version of OpenShift also makes for a smarter Kubernetes platform by streamlining the installation and upgrade experience. From start to finish, developers can now be up and running on a Kubernetes cluster in 15 minutes — including full metering and monitoring tooling.
Benefits of using OpenShift to manage Kubernetes
OpenShift provides a variety of benefits over managing Kubernetes on your own infrastructure. Though Kubernetes offers many tools to help manage workloads, it can be notoriously difficult to manage at times due to its complexity. OpenShift eases the burden of Kubernetes in several ways:
- Simplifies Kubernetes management tools. OpenShift incorporates all the tools necessary to manage the underlying nodes and control plane.
- Enhances DevOps productivity. By making it easier and faster to set up and manage Kubernetes, OpenShift enhances developer productivity by simplifying CI/CD processes.
- Speeds up development. Streamlining and consolidating Kubernetes setup and management reduces the time to working microservices. This in turn reduces the burden and overhead on the DevOps team and allows development teams, and their users, to utilize distributed microservices faster.
Key metrics for OpenShift monitoring
Monitoring is an important part of maintaining any Kubernetes environment. Here are some key metrics for monitoring an OpenShift cluster to understand how it’s performing:
- Number and state of pods and nodes. For a given setup of objects in Kubernetes, it’s important to understand their state and health at any given time. These metrics can tell you the condition of the underlying nodes and the number of pods running in each node. Each one needs to have monitoring and alerting in production. Specific metrics to measure include current pods, available pods, unavailable pods, and current node conditions. It’s also important to monitor for hotspots in your application — that is, nodes that are over-burdened on resources while others are sitting idle.
- Resource usage. Resource metrics give an insight into the memory, compute, and disk utilization of the underlying application. These metrics help you understand the provisioning of the cluster as well as the load on the total application. Key metrics here include memory requests, CPU requests, disk utilization, and network throughput.
- Control plane performance. The Kubernetes control plane metrics give insight into the performance of Kubernetes itself. These metrics tell you how many times resources have been requested, the total amount of time spent waiting for a queue, or the total number of attempts to schedule a pod. Common control plane metrics include the total count of API requests, the sum of request durations, and the total amount of time spent processing specific items in work queues.
Teams often overlook alerting on these metrics, focusing only on application metrics, but monitoring Kubernetes performance will give you a holistic view into your OpenShift cluster.
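The metrics above can be spot-checked directly from the CLI or queried in the cluster’s built-in Prometheus stack. This is a sketch assuming cluster-admin access to a running OpenShift cluster; the Prometheus queries are examples against standard Kubernetes control plane metric names.

```shell
# Number and state of nodes and pods
oc get nodes
oc get pods --all-namespaces --field-selector=status.phase!=Running

# Resource usage per node and per pod (backed by the metrics API)
oc adm top nodes
oc adm top pods --all-namespaces

# Control plane performance is exposed as Prometheus metrics.
# Example queries to run in the OpenShift monitoring console:
#   sum(rate(apiserver_request_total[5m]))               # API request rate
#   sum(apiserver_request_duration_seconds_sum)          # total request duration
#   sum(workqueue_work_duration_seconds_sum) by (name)   # work queue processing time
```

Unevenly loaded nodes in the `oc adm top nodes` output are a quick first signal of the hotspots described above.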
OpenShift best practices
Follow these best practices to ensure that OpenShift is used efficiently and effectively and that your environment stays secure and reliable.
- Use projects and namespaces.
Projects and namespaces are a way to organize your OpenShift resources. In OpenShift, a project is a Kubernetes namespace with additional annotations, and it is the central unit for organizing and isolating resources such as applications, builds, and deployments. Using projects consistently keeps your OpenShift environment organized and manageable.
- Limit container runtime privileges.
Containers should only have the privileges that they need to run. This can help to improve security and prevent containers from gaining unauthorized access to resources.
- Monitor and analyze audit logs.
Audit logs can provide valuable insights into what is happening in your OpenShift environment. Monitor and analyze audit logs to ensure your applications run as expected and identify potential security issues.
- Secure etcd.
etcd is the key-value store that holds critical configuration and state data about your OpenShift cluster. It is essential to secure etcd to prevent unauthorized access to this data.
- Use the OpenShift Container Security Operator.
The OpenShift Container Security Operator can help you improve your OpenShift environment’s security. It can automatically apply security policies to your containers and help you to identify and remediate security vulnerabilities.
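The “limit container runtime privileges” practice above (and the restricted security context constraint mentioned below) can be sketched as a Deployment manifest. All names and the image reference here are hypothetical placeholders:

```yaml
# Sketch: a Deployment whose pod satisfies OpenShift's "restricted" SCC.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
  namespace: my-project   # hypothetical project
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: image-registry.example.com/my-app:latest  # placeholder image
          securityContext:
            runAsNonRoot: true               # no root inside the container
            allowPrivilegeEscalation: false  # block setuid-style escalation
            readOnlyRootFilesystem: true     # immutable container filesystem
            capabilities:
              drop: ["ALL"]                  # grant no extra kernel capabilities
```

Workloads that genuinely need more access should request a dedicated, narrowly scoped SCC rather than running privileged.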
Additional best practices:
- Use the latest version of OpenShift.
- Use a separate build image and runtime image.
- Stick to the restricted security context constraint where possible.
- Protect the communication between application components using TLS.
- Use a load balancer to distribute traffic to your applications.
- Back up your OpenShift environment regularly.
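For the TLS practice above, OpenShift can generate and rotate certificates for you via service serving certificates. A sketch, with hypothetical service and secret names:

```yaml
# Sketch: OpenShift generates a signed cert/key pair into the named
# secret, which the backing pods can mount and serve TLS with.
apiVersion: v1
kind: Service
metadata:
  name: my-backend   # hypothetical name
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: my-backend-tls
spec:
  selector:
    app: my-backend
  ports:
    - port: 8443
      targetPort: 8443
```

Client components inside the cluster can then verify the connection against the service CA that OpenShift injects into ConfigMaps on request.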
Use cases for OpenShift
Let’s examine some of the most compelling use cases for OpenShift and explore how it can transform business operations.
- Deploying and managing cloud-native applications: OpenShift is a native Kubernetes platform, making it ideal for deploying and managing cloud-native applications. Cloud-native applications are built using microservices architecture, which makes them highly scalable and resilient.
- Modernizing legacy applications: OpenShift can also be used to modernize legacy applications. By containerizing legacy applications, they can be made more portable and scalable.
- Running mission-critical applications: OpenShift can be used to run mission-critical applications. The platform’s built-in security features and high availability capabilities make it ideal for running applications that can’t afford to go down.
- Building and deploying data-intensive applications: The platform’s support for various data storage options makes it easy to build and deploy data-intensive applications.
- Running edge computing applications: Edge computing applications are deployed at the edge of the network, closer to the end users. This can improve performance and reduce latency for applications requiring real-time data access.
Additional use cases:
- DevOps: Improve the DevOps process by providing a centralized platform for building, deploying, and managing applications.
- Testing: Create a staging environment for testing applications before they’re deployed to production.
- Training: Train developers to build and deploy containerized applications.
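As a minimal sketch of the “deploying cloud-native applications” use case, the `oc` CLI can build and deploy an app straight from source. The project name, Git URL, and app name are hypothetical placeholders:

```shell
# Create a project to hold the application
oc new-project demo-apps

# Build and deploy directly from a Git repository using Source-to-Image (S2I)
oc new-app https://github.com/example/my-app.git --name=my-app

# Expose the service to external traffic through an OpenShift route
oc expose service/my-app

# Check the state of the rollout
oc status
```

These commands require a running OpenShift cluster and a reachable source repository.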
By using OpenShift, organizations can improve the speed, agility, and security of their application delivery process.
OpenShift automation
OpenShift is a great platform to use for building and shipping cloud-native applications. However, with dynamic containers running in microservices, increasingly frequent code pushes, and several levels of abstraction distancing you from the cloud infrastructure, how can you know how your applications are performing at any given time? The answer lies in automation.
Automation has become an essential part of many organizations’ overall digital transformation strategies for cloud-native environments. But why is automation so important, and how is it relevant to OpenShift? To understand that we first need to take a deeper look into containers.
Containers can be challenging to monitor because they’re always changing. As organizations shift from monolithic infrastructures to microservices, the number of containers supporting these microservices can explode over time into the hundreds or thousands, spread across many nodes.
With these dynamic environments, cluster performance and container health are only part of the puzzle. Knowing the health of your container, while important, doesn’t give you the complete picture, especially for self-managed OpenShift Container Platform (OCP) operators. You also need to know the health of the underlying control plane, the services running in the containers, and their overall impact on application health.
Manually installing different agent types or collecting and correlating metrics is simply ineffective. That’s why automation is critical — it solves the challenge of making sense of dependencies within the context of the entire technology stack to understand the impact on users and prevent business-impacting issues.
Of course, not all automation is created equal. For example, you may get a dashboard view that falls short in dynamic environments. But automated monitoring of cloud-native workloads, microservices, and containers can help you understand all relationships and dependencies across the cloud stack.
How Dynatrace makes monitoring OpenShift easy
The Dynatrace Software Intelligence Platform provides answers powered by its causal AI engine, Davis®, and scales to the largest and most complex environments. Dynatrace can monitor and orchestrate applications, clusters, and underlying cloud infrastructure in OpenShift. Dynatrace automates your entire hybrid multi-cloud ecosystem and provides a full-stack overview of your cloud environment, from the underlying infrastructure up to the end user.
One of the core value propositions of the Dynatrace and OpenShift technologies is accelerating digital transformation by empowering your DevOps team. With Dynatrace’s automatic, zero-configuration monitoring, alerting, and observability solution, your DevOps team can take advantage of these tools with zero code overhead. There’s no need to change any application code, as Dynatrace automatically allows you to deliver more precise, AI-powered answers to dramatically simplify Kubernetes roll-out and management.
In addition, Dynatrace AI continuously maintains a complete real-time topology and dependency map of your environment—Smartscape—so you can always see the health of your applications, your container processes, and the code running inside, all in context with each other. Dynatrace simplifies rollout through the OneAgent Operator, which uses Kubernetes-native means to deploy OneAgent to OpenShift nodes. Once deployed, OneAgent is automatically injected into application containers, with automated visibility into CRI-O containers and traces.
What is OpenShift? And how can you leverage AI and automation to make it easy? To learn more about Dynatrace and Red Hat OpenShift, check out this case study from Porsche Informatik and this blog from Red Hat, The Power of OpenShift, The Visibility of Dynatrace.