A majority of organizations are experiencing success with their microservices. In a survey of 1,502 software engineers, systems and technical architects, and decision makers around the world, O’Reilly found that 77% of respondents had adopted microservices, with 92% of those adopters saying they were experiencing success. The survey also found that respondents whose teams owned the full software lifecycle were 18% more likely to report success with microservices than those whose teams did not. It’s therefore no surprise that 29% of respondents told O’Reilly they were migrating to or already implementing a majority of their systems using microservices.
But microservices do come with their fair share of security challenges. Alexei Balaganski, lead analyst at KuppingerCole, noted that one of the greatest obstacles is the fact that there’s not a single way to design or deploy a microservice. Such variability makes it difficult for organizations to safeguard their microservices in an organized way, and it expands the attack surface by increasing the number of ways in which malicious actors can use microservices to target organizations.
“While this approach makes it easier to develop, deploy, debug, maintain and operate microservices separately from all the other components of the application, it also means that several layers of complexity are introduced,” Balaganski told ComputerWeekly. “Then there is an API layer, which has its own security challenges, as well as messaging protocols used alongside or instead of APIs for communications between microservices.”
Practicing Security with OpenShift
Organizations need to securely configure their networks to account for the increased complexity and expanded attack surface that microservices introduce. One way they can do this is by turning to Red Hat OpenShift. Managed by Red Hat, the OpenShift platform supports key security features through which organizations can secure their Kubernetes networks.
In particular, organizations can use OpenShift to secure their service load balancers as well as implement a series of Network Policies. Let’s examine these features below.
Secure Service Load Balancers
Load balancing is integral to the operation of Kubernetes. As noted in the platform’s documentation, Kubernetes is capable of distributing container network traffic during periods of high activity. This helps to preserve the stability of the deployment.
StackRox explained how organizations can use OpenShift to secure these load balancers:
OpenShift, at a minimum, requires two load balancers: one to load balance the control plane (the control plane API endpoints) and one for the data plane (the application routers). If a load balancer is created using a cloud provider, the load balancer will be Internet-facing and may have no firewall restrictions. In most on-premises deployments, appliance-based load balancers… are used. Both types of load balancers will need to be configured by the administrator.
StackRox went on to clarify that load balancers should face the Internet but should not be open to all IP addresses. To safeguard their load balancers, administrators can add the loadBalancerSourceRanges field to the service specification. Doing so limits the IP address blocks that are allowed to connect to the load balancer.
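As a minimal sketch, a service of type LoadBalancer restricted in this way might look like the following. The service name, namespace, labels, and CIDR ranges are placeholders chosen for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-router        # hypothetical service name
  namespace: example-project  # hypothetical namespace
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8443
  # Only these CIDR blocks may connect to the load balancer;
  # traffic from all other source addresses is rejected.
  loadBalancerSourceRanges:
    - 192.0.2.0/24       # example corporate range (assumption)
    - 198.51.100.0/24
```

Note that enforcement of loadBalancerSourceRanges depends on the underlying cloud provider or load balancer supporting source-range filtering.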
Deploy Kubernetes Network Policies
Cluster administrators need to restrict traffic to pods within their clusters. By default, all pods within a cluster are accessible from other pods and network endpoints. This creates a security risk, as malicious actors could potentially seek to compromise one pod and use that attack to target other assets in the Kubernetes environment.
In acknowledgment of this threat, administrators can consider using OpenShift to define Network Policies. These objects work by selecting pods within a certain namespace and restricting communication to those pods based upon the policy’s specification. Any pods that aren’t selected by a Network Policy remain fully accessible.
As an example, administrators can create a Network Policy that matches all pods but accepts no traffic. Such an object effectively denies all traffic by blocking both ingress and egress attempts. Alternatively, they could choose a less strict policy that allows connections only from the OpenShift Container Platform Ingress Controller, ensuring that pods accept only traffic routed through the cluster’s ingress layer.
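The two policies described above can be sketched as follows. The policy names are placeholders; the `network.openshift.io/policy-group: ingress` namespace label is the one OpenShift applies to identify ingress traffic:

```yaml
# Deny all traffic to every pod in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default       # hypothetical name
spec:
  podSelector: {}             # empty selector matches all pods
  policyTypes:
    - Ingress
    - Egress
  # No ingress or egress rules are listed, so nothing is allowed.
---
# Allow connections only from the OpenShift Ingress Controller.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress   # hypothetical name
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
  policyTypes:
    - Ingress
```

Because policies are additive, applying both objects yields a namespace that is closed by default except for traffic arriving through the ingress controller.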
Administrators could also go on to use Network Policies that accept connections only from pods within a specific project, allow only HTTP and HTTPS traffic based on pod labels, or accept connections by combining namespace and pod selectors. Because these objects are additive, administrators can layer multiple Network Policies together to secure the Kubernetes network.
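A policy combining namespace and pod selectors while restricting traffic to HTTP and HTTPS ports might look like the sketch below. All labels and the policy name are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-from-monitoring     # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: frontend                  # hypothetical pod label
  ingress:
    - from:
        # namespaceSelector and podSelector in the same entry:
        # a connection must match BOTH to be allowed.
        - namespaceSelector:
            matchLabels:
              project: example        # hypothetical namespace label
          podSelector:
            matchLabels:
              app: monitor            # hypothetical pod label
      ports:
        - protocol: TCP
          port: 80                    # HTTP
        - protocol: TCP
          port: 443                   # HTTPS
  policyTypes:
    - Ingress
```

Note the indentation: placing the podSelector under the same `- from:` entry as the namespaceSelector means both conditions must hold, whereas listing them as separate entries would allow traffic matching either one.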
Beyond Network Security
By securing their service load balancers and deploying Network Policies, administrators can use OpenShift to secure their Kubernetes networks. They can also use OpenShift for a number of other security functions, such as enforcing authentication and authorization for their Kubernetes network. For more information about OpenShift’s security features, consult the platform’s documentation.
Author Bio
David Bisson is an information security writer and security junkie. He’s a contributing editor to IBM’s Security Intelligence, Tripwire’s The State of Security Blog, and a contributing writer to Bora. He also regularly produces written content for Zix and a number of other companies in the digital security space.