HOW-TO GUIDE
AUTOMATING SECURITY CONTROLS IN KUBERNETES ENVIRONMENTS
Security Blueprint
Cloud-based container architectures significantly
improve scalability, deployment frequency,
operations and resilience. While the benefits are
compelling, the new architecture must also support
compliance and security control requirements.
Kubernetes and Docker solutions provide the
foundation for orchestrating those container-based
microservice deployments and delivering enterprise-
grade applications. Enterprise-scale Kubernetes
orchestration consists of a multi-layered service stack
that supports diverse, dynamic workloads, requiring
sophisticated and automated controls usually not
found in datacenters. The CloudPassage Halo
Platform (Halo) further instruments Kubernetes to
provide automation with the comprehensive visibility
you need to achieve your security control objectives,
regardless of the infrastructure or deployment
environment.
This document provides an overview and tour of the
integration process between Halo and a Kubernetes
environment. It explains how Halo provides holistic
visibility across the Kubernetes stack, and how to
automate security controls that map to your preferred
industry and internal frameworks. The platform supports REST APIs and out-of-the-box connections for popular third-party tools, which you can use to integrate Halo into your specific environments and workflows.
CONTENTS
Introduction to Securing Kubernetes
    The Role of Kubernetes
    Kubernetes Cluster Components
CloudPassage Halo Overview
    CloudPassage Halo Secures Each Layer of the Kubernetes Stack
    Halo Enables Your Organization to Shift Left
    Halo Implements Control Policies for Best-Practice Compliance and Security
Deploying Halo for Kubernetes
    Prerequisites
    Step 1: Monitoring IaaS and PaaS Services
    Step 2: Assessing Container Image Integrity
    Step 3: Protecting the Host Environment and Running Containers
Conclusion
INTRODUCTION TO SECURING KUBERNETES

Kubernetes Overview
Kubernetes (K8s) is the de facto container
orchestration platform for automating cloud
application deployment, scaling, and operation. The
platform is an open-source project that is maintained
by the Cloud Native Computing Foundation and
supported by numerous commercial software
vendors. If Kubernetes is the “conductor” of the
orchestra, the containerized workloads are the
musicians that together perform the symphony
(application). Container runtime environments, like
Docker, define the container package and enable
multiple workloads to run on a host node. Cloud
applications consist of containerized workloads, each
immutable, self-sufficient, and packaged with all of
their required dependencies. Kubernetes automates
container deployment and replication across a
potentially large and geographically dispersed
cluster of nodes to meet application service-level
requirements, including scale and availability.
Containers pair well with service-based architectures
and have many attractive attributes for enterprise
applications. They provide isolation and portability,
similar to virtual machines (VMs), but without
replicating the expensive overhead of a virtual server
platform and operating system. Applications consist of more, but usually simpler, workloads that can spin up or down based on capacity requirements.
Container images should only include unique libraries
and configuration dependencies for the workload
function, which shrinks workload size, simplifies
testing and reduces security risks. Best practices also
recommend deploying immutable workloads that
don’t change during execution. All patches, bug fixes,
and features are integrated, tested, and deployed as a
complete image to eliminate configuration drift and
unintended downtime when deployed to production.
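Image immutability is easiest to enforce mechanically, for example by pinning deployments to image digests rather than mutable tags. The sketch below illustrates the idea; the repository name and digest values are hypothetical, and the tag-to-digest mapping would come from your registry or build pipeline.

```python
# Sketch: rewrite mutable "repo:tag" references as immutable
# "repo@sha256:..." references before deployment. The digest map would
# come from your registry; the values here are hypothetical.

def pin_to_digest(image_ref: str, digest_map: dict) -> str:
    """Return an immutable, digest-pinned form of an image reference."""
    if "@sha256:" in image_ref:
        return image_ref  # already pinned
    repo, _, _tag = image_ref.partition(":")
    digest = digest_map[image_ref]  # KeyError if the image was never recorded
    return f"{repo}@{digest}"

# Hypothetical tag-to-digest mapping captured at build time.
digests = {"myapp/web:1.4.2": "sha256:" + "ab" * 32}
print(pin_to_digest("myapp/web:1.4.2", digests))
```

Because a digest identifies the exact image content, a pinned reference cannot silently change underneath a running deployment the way a reused tag can.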
The Role of Kubernetes
While having optimized and isolated workloads
increases the utilization of the underlying hardware, it
is not enough to run a complete application at scale.
An orchestrator is needed to automate the lifecycle
of containers, maintain operations, and ensure
application service level performance. Kubernetes
manages and automates applications at scale by
deploying workloads on a cluster of nodes that may
reside in one or more clouds. A node is a server
(virtual or physical) that provides the resources to run
multiple containers.
Kubernetes Cluster Components
A Kubernetes cluster consists of a Master node and a
set of Worker nodes. For high availability, the Master
may be replicated to ensure minimal downtime. The
Master handles the API Server, Controller Manager,
Scheduler, Cloud Controller Manager and maintains
the state of the cluster in the key-value store (etcd). A
K8s cluster supports as many as 5000 worker nodes
that provide the computing resources needed to
run applications and services. Each Worker runs the
kubelet and kube-proxy to report back their status
to the Master node and handle deployment and
networking.
Container infrastructures utilize registries and
repositories to store and maintain images for
execution on the Kubernetes cluster. Usually, images
are validated and assessed on a test cluster before
their introduction into production. Best practices
recommend that all patches and upgrades are tested
as part of the complete workload (i.e. configuration,
libraries, applications) before introduction to
production. Since images don’t change in production,
configuration and application drift should not
occur. This model can increase security and reduce
application downtime.
Figure 1. Kubernetes orchestrates container instances running on a cluster of worker nodes.
Each Kubernetes Stack Layer Needs Securing
Cloud computing blurs the security perimeter
because services are consumed differently.
Container workloads now execute on top of a stack
of virtual service layers (see Figure 2), where each
layer presents potential compliance, security, and
availability risks. From a security standpoint, one
compromised component puts all layers at risk. In the
well-publicized Tesla breach, its AWS environment
was compromised through a Kubernetes node. When
consuming managed services, security is still needed
and remains a shared responsibility.
Figure 2. Each layer of the Kubernetes stack requires security controls.
KUBERNETES STACK
Container Instances    Workloads on K8s Worker Nodes
Kubernetes             Master and Worker Nodes
Container Runtime      Executes containers (e.g., Docker)
Host OS                Host OS for runtime and Kubernetes
Host System            Virtual or bare-metal machine
Image Repository       Container images at rest
IaaS Account           AWS, Azure, GCP
Security Control Category        CIS Controls          PCI Requirements
Asset Inventory                  1, 2                  5, 8, 10
Change Management                2, 5, 6, 9            5, 6
System Hardening                 4, 5, 6, 9, 11, 14    2, 4, 5, 6
Vulnerability Management         3, 8                  5, 6
Identity & Access Mgmt. (IAM)    4, 9, 13, 16          7, 8
Detection/Defense                8, 11, 12             1, 5, 6, 11
Data Protection                  13                    3, 4, 7, 8
Monitoring/Logging               6, 16                 10
Figure 3. Security controls mapped to related CIS and PCI control categories
Kubernetes Stack Security Controls
A Kubernetes infrastructure cannot operate in a vacuum; it must operate within your security controls framework. Your organization undoubtedly has defined security controls for its existing computing environments, and these controls must extend to each layer of the Kubernetes stack. Numerous industry frameworks and benchmarks (e.g., CSA, CIS, PCI, AICPA) provide guidelines for security controls. These specifications define controls that fulfill higher-level security and compliance goals, which usually fall into one or more of the categories shown in Figure 3.
The additional layers and the highly dynamic nature of the workloads make implementing security controls for a Kubernetes stack challenging. Automation is critical to implementing security controls efficiently and effectively in these next-generation container infrastructures.
CLOUDPASSAGE HALO OVERVIEW

CloudPassage Halo Secures Each Layer of the Kubernetes Stack
The CloudPassage Halo Platform (Halo) instruments Kubernetes infrastructures to implement your compliance, availability, and security controls. The solution is battle-tested by some of the most cutting-edge and extensive cloud deployments in the world. Providing insight and protection for Kubernetes infrastructures requires addressing the underlying stack of virtualized services, including cloud service accounts, container registries, container runtimes, Kubernetes nodes, and the containers themselves. Halo offers a scalable, unified platform that secures each of these layers using agentless and agent-based sensors to collect and analyze data. Pre-configured and customized policies monitor and enforce your security control objectives.
Figure 4. Halo implements security control objectives throughout the Kubernetes stack.
Halo Enables Your Organization to Shift Left
Cloud container architectures fundamentally differ from traditional data centers and require new strategies for security, operations, and development. Halo enables your organization to shift left, addressing security and operations issues before deployment and throughout the software development lifecycle. This means not only addressing the Kubernetes stack, but also enhancing continuous integration and continuous deployment (CI/CD) by integrating with the DevOps pipeline (e.g., GitHub repositories, container registries, Jenkins CI/CD tools). The key is to deliver risk intelligence to system owners automatically, in the context of their own workflow.
Halo facilitates a smooth operational migration by integrating with existing tools and platforms, such as cloud service providers, SIEMs, CI/CD pipelines, ticketing systems, and analytics solutions. Halo provides a secure foundation for Kubernetes to orchestrate containers.
Halo Implements Control Policies for Best-Practice Compliance and Security
Halo integrates and scales with Kubernetes infrastructures to provide multi-layer visibility into the orchestration stack and to implement your control policies for best practices, compliance, and security. Agentless and agent-based sensors gather information across the Kubernetes stack to appropriately and securely address each relevant layer.

Numerous industry organizations (e.g., PCI, CIS, CSA, AICPA) have defined security control frameworks for IT operations. Halo automates the implementation of these controls using predefined policies, and also gives you the ability to customize policies as needed for your environment. Each layer of the stack has unique control attributes that drive the appropriate Halo policies to automate your desired controls.
The platform’s design provides a high-availability, low-overhead suite of security controls that delivers comprehensive threat prevention for all types of cloud infrastructure. It continues to evolve to address up-to-date best practices and new cloud service offerings, but its essential patented architectural innovations remain the core of its design.
Figure 5. Halo gathers information across the Kubernetes stack with agentless and agent-based sensors.
The full cloud-hosted Kubernetes stack used in the deployment examples below includes:
• Cloud Security Posture Management: assess your IaaS accounts to inventory services, collect configuration information, and check for security and compliance policy violations.
• Registry Image Scanning: securely connect with registries to inventory all images and assess them for vulnerabilities.
• Host Protection: an agent, either containerized or installed as software, performs a range of security functions with minimal overhead (2 MB in memory) by connecting with the Halo grid to get security policy information and efficiently leverage resources to perform assessments.
  » Vulnerability management, security and configuration monitoring, and drift detection for the host OS, container runtime, and K8s services.
  » Monitor container launches, detect vulnerable or unknown containers, and collect container metadata for further analysis.
Deploying Halo for Kubernetes
Prerequisites
Halo supports various container orchestrators, cloud infrastructures, and third-party platforms. The examples in this document assume the following architecture:
• AWS - EC2-based services
• Linux Hosts
• Docker container runtime
• Kubernetes Cluster
If you haven’t installed Docker or Kubernetes, phoenixNAP provides a good tutorial for getting started and
installing on Ubuntu 18.04.
Step 1: Monitoring IaaS and PaaS Services
Halo monitors your IaaS accounts (e.g., AWS, Azure) to automate security controls for hosts, registry services, IAM, and any other resource that supports your containerized environment (e.g., EKS and ECS). It collects information using the service provider's API to create an inventory of cloud services with their status and configuration. Halo also identifies and evaluates other IaaS services (e.g., storage) that may not be directly part of the Kubernetes stack. In this example, we'll implement security controls for IaaS services corresponding to the cloud production environment(s).
Halo’s continuous assessment process captures an
inventory of systems and assets that might impact
security and/or compliance. Halo evaluates the
account and assets against configured policies for
best practices, compliance and security. The results
are made available through the Halo GUI, directly
accessible via the REST API, or via integrations which
push data to other systems (e.g. Jira, Sumo Logic,
Splunk, etc).
Configuration
Configuring Halo to assess your AWS or Azure environment is done in two steps, after which it will automatically assess your cloud infrastructure on an ongoing basis, including any resources (EC2, ECS, EKS, etc.) used to run your cloud-based infrastructure.
1. Set up identity and access for Halo in the cloud account (appropriate permissions are required to create/
manage access).
2. Provide role/authentication information to be used by Halo.
That’s it! Once the account is set up, Halo automatically assesses it against out-of-the-box security and best-practice policies. A full inventory of assets is available through the Halo portal and API, including the status of each asset and details on how to address violations.
Most customers use the default policies, which are automatically applied and include all of the CIS benchmarks
and best practices. If desired, you can customize these policies and apply different versions to different
accounts, as needed.
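The identity set up in step 1 above is typically an IAM role whose trust policy names the vendor's account and an external ID. The sketch below shows the general shape of such a trust-policy document; the account ID and external ID are hypothetical placeholders, and the real values come from the detailed instructions in the Halo portal.

```python
import json

# Sketch: the shape of an IAM trust policy for cross-account assessment
# access. HALO_ACCOUNT_ID and EXTERNAL_ID are hypothetical placeholders;
# the actual values are provided in the Halo portal instructions.
HALO_ACCOUNT_ID = "111111111111"
EXTERNAL_ID = "example-external-id"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{HALO_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            # The external ID guards against the "confused deputy" problem.
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

# With boto3, the role could then be created like this (not run here):
#   boto3.client("iam").create_role(
#       RoleName="halo-assessment",
#       AssumeRolePolicyDocument=json.dumps(trust_policy))
print(json.dumps(trust_policy, indent=2))
```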
Implementation Example
1. In the Halo Portal, go to Site Administration.
Screenshot 1. Halo Portal in AWS
2. In Site Administration, select the Integrations tab, click Actions, and then Add Account.
Screenshot 2. Site Administration
3. Follow the detailed instructions to set up a Policy and Role, and provide the ARN to Halo.
Screenshot 3. Instructions to set up a policy and role
4. Click Add AWS Account and your first assessment will begin immediately. Note that Azure setup follows a
similar process, with detailed instructions in the UI.
5. Get a cup of coffee; the first assessment is usually done in 15 minutes or less, depending on the size of your account. Then return to the Overview screen to get a sense of your inventory and overall security posture.
Screenshot 4. Summary supports filters and drill-down to policy issues
Example Output
Once Halo connects to your IaaS account, you can access the policy evaluation results via the GUI or API. In the
example below, Halo has inventoried the various assets deployed by an AWS account. The summary screen in
Screenshot 4 shows the overall policy adherence of all assets by severity and issue. The user can filter data and
drill down into the details to formulate a strategy for addressing the most critical issues.
Screenshot 5 drills down to the critical policy issues relating to users. In this case, we have issues with key rotation,
password expiration, and MFA. Of particular interest is the lack of hardware MFA for the root account.
Screenshot 5. Details show specific issues, e.g., lack of MFA for important accounts.
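The same drill-down is available programmatically through the REST API. The sketch below shows one way to triage such findings once retrieved; the finding records are hypothetical stand-ins, not Halo's actual response schema.

```python
# Sketch: triage hypothetical assessment findings, surfacing critical
# IAM issues such as missing MFA or stale access keys. The dicts below
# mimic, but do not reproduce, Halo's actual API response format.
findings = [
    {"rule": "Root account has hardware MFA", "status": "fail", "severity": "critical"},
    {"rule": "Access keys rotated within 90 days", "status": "fail", "severity": "critical"},
    {"rule": "Password policy enforces expiration", "status": "pass", "severity": "high"},
]

def critical_failures(findings):
    """Return the rule names of failed, critical-severity findings."""
    return [f["rule"] for f in findings
            if f["status"] == "fail" and f["severity"] == "critical"]

for rule in critical_failures(findings):
    print("CRITICAL:", rule)
```

Feeding a filter like this into a ticketing or SIEM integration is how the manual drill-down shown in the screenshots becomes an automated workflow.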
Step 2: Assessing Container Image Integrity
Halo uses either a software or containerized
connector to securely access and inventory registries
and repositories and assess container images at
rest for vulnerabilities. Best practices recommend
continuously scanning images against the latest
policies and vulnerabilities. These measures enable
you to catch violations in both currently and soon-to-
be deployed workloads. Images that fail can then be
fixed and submitted for re-assessment.
Halo integrates with industry-leading, third-party
trouble ticket, CI/CD pipeline, and event management
solutions to automate processes and support
established workflows. These include solutions
such as Jenkins, Splunk, Sumo Logic, ServiceNow, and
Jira. Information related to these integrations is
not covered in this document but is available in
documentation and application notes.
Configuration
Halo provides a registry connector that runs on a VM or container with network access to one or more registries. Additional connectors are needed when the first connector does not have access to all registries, or when the load is sufficient to require horizontal scaling. To set up container image assessment:
1. Deploy a registry connector in your environment
2. Provide information on each registry to Halo including location, type, and authentication information.
Halo will test the connection to the registry, and once it is established, it will automatically initiate ongoing inventory and assessment of all repositories and the images they contain.
The example below uses AWS ECR (Elastic Container Registry). A variety of registries and authentication types
are supported and the Halo documentation, accessible through the portal, provides detail on how to use them.
Note that if you are running a self-hosted registry, it is important to also install a Halo agent on the registry server
itself to assess it and protect it from compromise.
Implementation Example
To install the registry connector:
Screenshot 6. Installing Registry Connector
1. In the Halo Site Administration portal screen, click “Integrations” > ”Registry Connectors.”
2. From the Actions menu, click ”Install Connector.”
3. Choose an identifiable name, the distribution type, and (if needed) the host server OS.
4. Copy the provided script; you can modify this as desired (e.g., to use an internal repository) and use it as the basis for automating deployment.
5. Run the script on the host server to install and start the connector.
Once a registry connector is established, use the following process to add a registry for evaluation. Halo
automatically creates an inventory of all repositories and conducts assessments of the images.
1. On the Assets screen, click “Containers” > ”Registries.”
2. From the Actions menu, click ”New Registry.”
3. Enter the registry information, including name, URL, type, authentication information (the example below uses AssumeRole and an ARN), and the connector to use to reach the registry.
4. Click ”Add Registry.”
Screenshot 7. Adding a registry for evaluation
Once configured, Halo automatically creates an inventory and assesses the images in the registry’s repositories.
The results are available in the Halo GUI and API, and are updated with periodic scans to detect changes in
images and apply new vulnerability information.
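Because the results are exposed via the API, image assessments can also gate a CI/CD pipeline, failing a build before a vulnerable image reaches the cluster. A sketch of such a gate follows; the issue records and CVE identifiers are hypothetical, not Halo's actual schema.

```python
# Sketch: fail a CI build when an image's assessment reports more
# critical vulnerabilities than allowed. The issue records below are
# hypothetical stand-ins for data retrieved from the assessment API.
def image_passes_gate(issues, max_critical=0):
    """Return True if the image has at most max_critical critical issues."""
    critical = [i for i in issues if i["severity"] == "critical"]
    return len(critical) <= max_critical

issues = [
    {"cve": "CVE-0000-0001", "severity": "critical"},  # hypothetical CVE ID
    {"cve": "CVE-0000-0002", "severity": "medium"},    # hypothetical CVE ID
]

if not image_passes_gate(issues):
    print("gate failed: critical vulnerabilities found")
```

In a Jenkins-style pipeline, a non-zero exit from a check like this would stop the image from being promoted to the production registry.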
Example Output
Halo presents dashboard results via the Web GUI and REST API. The screenshots below show sample status for
images in registries.
Screenshot 8 shows a summary dashboard with a count of registries, repositories, and images, as well as information on running containers and images in use, which will become available once the Kubernetes cluster is also instrumented.
Screenshot 8. Summary of inventoried images and running containers
Screenshot 9 shows a listing of all container images available in multiple registries and repositories, along with
their security profile. The GUI supports drill-down to more detailed information on specific containers and
issues, and also provides a listing of all vulnerable packages and their frequency of use across different images
and running containers.
Screenshot 10 shows a list of issues (policy violations) for a particular container image. You can drill down to
details (e.g., CVE) on each issue. Alternatively, you could drill down on a specific issue to see a list of affected
container images.
Screenshot 9. Summary of image integrity across multiple registries
Screenshot 10. Drill down to see specific issues with each image
Step 3: Protecting the Host Environment and Running Containers
Halo instruments servers using agents installed on
the host systems to implement a variety of security
controls including discovery/inventory, vulnerability
management, system hardening, system integrity
monitoring, drift detection, runtime security events,
and audit data collection. It collects detailed
configuration and status information about the (a)
host operating system, (b) container runtime, (c)
Kubernetes services, and (d) container workloads. The
collected information is evaluated against policies for
security, best practices, and compliance to detect
deviations. The results are available via the Halo GUI
or REST API.
Halo also detects container launches, collecting
metadata and correlating the information on
container workloads deployed in production clusters
with images in the repositories collected by image
integrity assessments (see step 2 above). If a match
is not found, the workload is flagged as “rogue” (i.e.
unknown container image).
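The correlation behind rogue detection is conceptually a set comparison between assessed image digests and the digests of what is actually running. A minimal sketch of that idea (all digests and container names are hypothetical):

```python
# Sketch: flag "rogue" containers whose image digest never appeared in
# any assessed registry repository. The digests are hypothetical.
assessed_images = {
    "sha256:" + "aa" * 32,
    "sha256:" + "bb" * 32,
}

running_containers = [
    {"name": "web-1", "image_digest": "sha256:" + "aa" * 32},
    {"name": "job-7", "image_digest": "sha256:" + "ff" * 32},  # never assessed
]

rogues = [c["name"] for c in running_containers
          if c["image_digest"] not in assessed_images]
print("rogue containers:", rogues)
```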
Configuration
Step 3a: Instrumenting the Kubernetes Cluster
The Halo agent can be deployed as software on the host (via automation or by inclusion in the golden image) or as a container. Functionality differs between the software-based and sidecar-container Halo microagents, due to the inherent differences between software installed on a container host and software running in an isolated guest container; please refer to the Halo documentation for details. Because the master node does not usually host containers, the agent must be installed there as software. In this example, the containerized agent is deployed on the worker nodes as a DaemonSet. This leverages Kubernetes functionality to ensure the Halo agent runs on every node in the cluster to assess it for security and audit the containers it is running.
Once you have deployed the Halo agent, each host will automatically be evaluated for vulnerabilities and
against assigned security policies. You will assign some essential policies to secure hosts below in Step 3b.
Additionally, Halo will monitor all container launches, collecting metadata and correlating them with the image
assessments you configured above in Step 2.
To deploy the agent as software, right-click on the Halo group where this Kubernetes deployment will be tracked and select “Add Server”. Select the appropriate target OS and copy the script provided. Note that this script includes an agent key that identifies the agent as belonging to your deployment and indicates what group it should be organized under. Run this script on the target host; for this exercise you can do it manually, but typically this process is automated as part of the build or deployment process.
Screenshot 11. Installing agents
To deploy the agent as a DaemonSet on all worker nodes, copy the manifest from GitHub and update the agent key with one retrieved from the Halo portal or API. To retrieve this key, right-click on the group where this Kubernetes deployment will be tracked, as above, and select “Add Server”. Simply copy the agent key from the script provided and update the key in the manifest before running the following command on the Kubernetes master:
$ kubectl apply -f halo-k8s-daemonset.yaml
You can verify that the agent is running with the “kubectl get pods” command, or look in the Halo Portal
or API for the servers hosting the nodes. Halo will now discover all running containers on this Kubernetes
infrastructure, and each new or existing node is evaluated and monitored at the host level.
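The DaemonSet approach works because Kubernetes itself guarantees one agent pod per node, including nodes added later. As a rough sketch of the general shape such a manifest takes (every name, the image reference, and the environment variable here are hypothetical; use the actual manifest from the Halo GitHub repository in practice):

```yaml
# Sketch of a DaemonSet skeleton for a per-node agent. All names, the
# image, and the secret holding the agent key are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: halo-agent            # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: halo-agent
  template:
    metadata:
      labels:
        app: halo-agent
    spec:
      containers:
        - name: halo-agent
          image: registry.example.com/halo-agent:latest   # hypothetical image
          env:
            - name: AGENT_KEY                             # hypothetical variable
              valueFrom:
                secretKeyRef:
                  name: halo-agent-key
                  key: agent-key
```

Storing the agent key in a Secret, rather than inline in the manifest, keeps the key out of version control when the manifest is checked in.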
Step 3b: Implementing Security Policies
CloudPassage provides configurable policies that implement CIS benchmarks for Kubernetes and other
components. These policies include the Kubernetes master and worker nodes, as well as Docker services and
the operating system. To run these policies on your Kubernetes deployment, you must assign them to run
against your Kubernetes hosts via the Halo portal or API.
We recommend that you begin with the out-of-the-box policies to protect your OS distribution, Docker services,
and your Kubernetes master and worker nodes. As your deployment and security program matures, you may
wish to customize these policies as well as add additional controls provided by Halo. Your initial implementation
should include at a minimum the following policies (links to templates require Halo log in):
1. CIS Benchmark for Kubernetes MasterNode
2. CIS Benchmark for Kubernetes WorkerNode
3. Kubernetes Master and Worker node (File Integrity Monitoring policy for drift detection)
4. CIS Benchmark for Docker
5. CIS Benchmark for [your variety of Linux]
The example that follows goes through implementing the Kubernetes MasterNode policy.
Implementation Example
1. Go to Policies/Templates and search for Policy Description ~ “kubernetes master”, or simply click the appropriate policy link above while logged into the Halo portal. Clone the template into your working policies.
Screenshot 12. Finding a policy template
2. This will bring up the details of the policy; you can review the rules it contains or skip to the next step.
Screenshot 13. Policy rules
3. Go to Overview and right-click on the Halo group that covers your Kubernetes deployment. Select “Apply
Policy” to enter the Policy tab of the Group Settings.
Screenshot 14. Applying policies
4. Add the Kubernetes MasterNode policy.
5. Select “Inherit Down” to apply this policy to any sub-groups used to organize your deployment.
Screenshot 15. Applying policies to sub-groups
6. Remember to click the “Save” button in the upper right to save your changes.
Once configured, Halo automatically performs continuous assessments of the hosts in your Kubernetes
infrastructure. As you add or remove nodes, the DaemonSet ensures an agent is present to track the running
hosts and their containers to perform security evaluations. Because you used the agent key for this group, as
the cluster nodes scale these policies will automatically be applied to the new nodes.
Example Output
Now that all layers of the Kubernetes stack are instrumented, you can see not only the status of the images in your repository but also the security posture of your hosts, including the Docker and Kubernetes installations, as well as which containers are running on them and which images are actually in use in your environment.
Screenshots 16 and 17 below show running containers with their vulnerability status and other critical
information, as well as a listing of images that are currently in use by these containers. These data sets are
connected, and you can drill into the containers to get additional metadata including detail on the specific
image they are running, as well as into the image to see more detail and a listing of specific vulnerabilities.
Screenshot 16. Listing of information on live container workloads.
Screenshot 17. Image inventory filtered to show images used by one or more running containers.
The scan results in Screenshot 18 are the output of a scan on a freshly installed default Kubernetes master node. As you can see, a default Kubernetes installation needs considerable work to be fully secure: many benchmark rules produce ‘fail’ results, which means the configuration needs hardening.
Screenshot 18. Kubernetes master node security policy rule failures.
Conclusion
Kubernetes clusters increase application scale, performance, and operational efficiency, but present a significant challenge for addressing established control frameworks for compliance and security. Container orchestration distributes applications across clusters of nodes that may span geographies and rely on a stack of virtual layers that obscures configuration and operational information. The Halo platform instruments Kubernetes clusters to automate critical controls to meet security, compliance, and service-level requirements.

The contents of this Blueprint document only scratch the surface of Halo’s capabilities for container, microservice, and hybrid cloud deployments. To address your specific requirements, please contact the CloudPassage team at [email protected].

ABOUT CLOUDPASSAGE
CloudPassage is the recognized leader in automated cloud security and compliance for dynamic application deployment environments like AWS and Azure. A true pioneer, the company’s groundbreaking innovations received the first-ever patents granted in the cloud security domain.

Today, CloudPassage safeguards cloud infrastructure for the world’s best-recognized brands in finance, e-commerce, gaming, B2B SaaS, and digital media with Halo, its flagship solution. Halo is an award-winning cloud security platform that automates continuous visibility for millions of serverless, server-based, and containerized assets across hundreds of public and hybrid cloud environments. Halo is software-as-a-service, deploying in minutes and scaling effortlessly. Halo integrates with configuration management and CI/CD tools such as Puppet, Chef, and Jenkins to align security functions with automated DevOps processes. CloudPassage is a proven solution for delivering automated security and compliance visibility, critical to protecting data and applications migrating to public IaaS environments.

Visit www.cloudpassage.com to learn how Halo can enable faster, more effective cloud infrastructure security for your enterprise.

www.cloudpassage.com | 800.215.7404
© 2020 CloudPassage. All rights reserved. CloudPassage® and Halo® are registered trademarks of CloudPassage, Inc. BP-KUBERNETES_03172020