The previous post described the challenges a developer faces in securely deploying applications and managing their steady-state security. The difficulties arise because not every security and compliance capability needed by developers is available as a service consumable through an API (e.g., invoking an API to start a penetration test). As a consequence, developers have to work out all the details of security and compliance for their applications themselves, which is no easy task.
Before we examine the various security and compliance capabilities needed by developers, we must understand the life cycle of an application.
Life Cycle of an Application
An application life cycle comprises development and build, deployment and testing, running and updates, and finally decommissioning. The following figure shows a typical application life cycle.
We also refer to this life cycle as the DevOps pipeline. In an ideal DevOps pipeline, developers develop their applications, package their application components and dependencies using technologies such as Docker to create "immutable code", deploy them using templates such as Docker Compose, run them in a cloud, deploy updates continuously, and may ultimately decommission an application, partly or entirely.
DevOps Tension with Security and Compliance
This "continuous development and deployment" cycle creates tensions with the conventional security approaches. For example, applications comprising multiple components (e.g., webserver and database server) need "keys" or "passwords" for communication, yet these "keys" and "passwords" often end up in continuous integration tools such as
Jenkins or
Travis often with limited or no access control, or worse in code. Moreover, the "immutable code" images may end up in public repositories. Worse, it may be a requirement to not have any security credentials as part of the "immutable image"; such credentials must only be available at run-time. Also, source code scan, network and malware scans may not be performed before every continuous update, leading to security holes.
Secure DevOps Pipeline
How can a system possibly alleviate the difficulties developers face in securely deploying applications and managing their steady-state security? The answer is to convert every possible security function, whether a capability or an advisor, into a service that developers can consume with little or no effort. If achieved, this is the ultimate "Secure DevOps Pipeline", where every security and compliance function is consumable through an API.
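To make this concrete, here is a minimal sketch of what consuming one such capability could look like. The endpoint, payload fields, and token handling are purely illustrative assumptions, not references to any real service:

```python
# Minimal sketch: consuming a hypothetical security capability through an API.
# The endpoint URL, payload fields, and bearer token are illustrative assumptions.
import json
import urllib.request

def start_pen_test(api_token, target_url):
    """Ask a (hypothetical) security service to start a penetration test."""
    payload = json.dumps({"target": target_url, "profile": "web-default"}).encode()
    req = urllib.request.Request(
        "https://security.example.com/v1/pentests",  # assumed endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g., {"id": "...", "status": "queued"}

# Illustrative usage:
# job = start_pen_test(token, "https://myapp.example.com")
```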
So what are the security and compliance capabilities needed in a Secure DevOps Pipeline? Here is a pictorial rendition of key capabilities needed in such a pipeline:
Secure DevOps - Develop and Build Phase
Application development typically starts with a problem statement, design, and initial prototypes. The prototypes and subsequent revisions targeted for production comprise code written by application developers. The code may also have dependencies such as libraries, dependent packages, and so on.
TRUSTED CODE capability ensures that all application code and its dependencies come from trusted repositories.
The source code written by application developers is stored in trusted and managed repositories. Such repositories may be configured to run automatic code scans on every commit.
When an application developer builds an immutable image (e.g., a Docker image) from application source code and dependencies, package installation tools such as apt-get install XYZ may pull in dependent packages as well. If the version and source of those dependent packages are not controlled, old packages with potential vulnerabilities get installed, or worse, malware may creep in. The job of the "Trusted Code" capability is to ensure that application dependencies come from known or trusted sources.
In image formats such as Docker images, a developer has complete control over which repositories to use. Without any restrictions, a developer may download code or dependencies from anywhere, leading to the first chink in the security posture: is the code even trusted? The job of the "Trusted Code" function is to apply restrictions on the source and version of application code as well as dependent packages.
Trusted code is easier said than done. As mentioned above, it involves restricting developers to a set of trusted repositories, not all of which may satisfy an application's dependencies. However, verifying code, packages, and their dependencies and adding them to trusted repositories is a continuous process. Over time, the curated repositories will contain packages that satisfy most developers' needs.
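As an illustration, a "Trusted Code" check could be as small as comparing an application's declared dependencies against a curated allow-list of repositories. The repository names and the manifest shape below are assumptions made for the sketch:

```python
# Sketch: flag dependencies whose source repository is not on a curated allow-list.
# The repository names and manifest layout are illustrative assumptions.
TRUSTED_REPOS = {"internal-pypi.example.com", "registry.access.redhat.com"}

def untrusted_dependencies(manifest):
    """Return entries whose source repo is not trusted.

    `manifest` is a list of dicts like {"name": "requests", "repo": "pypi.org"}.
    """
    return [dep for dep in manifest if dep["repo"] not in TRUSTED_REPOS]

manifest = [
    {"name": "requests", "repo": "internal-pypi.example.com"},
    {"name": "leftpad", "repo": "random-mirror.example.net"},
]
for dep in untrusted_dependencies(manifest):
    print(f"BLOCK: {dep['name']} comes from untrusted source {dep['repo']}")
```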
CODE SCAN capability scans the source code written by developers to ensure that it complies with security best practices. This capability is especially needed if the "Trusted Code" capability is not available.
Developers are lazy. They will typically take the shortest route to complete their task. Without adequate enforcement of coding best practices, errors that lead to security holes, e.g., SQL injection, may creep into the code.
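A code-scan service is, at its core, a set of rules run on every commit. The sketch below applies a single illustrative rule that flags string-concatenated SQL, one common path to SQL injection; real scanners apply far richer rule sets:

```python
# Sketch: one code-scan rule that flags string-built SQL statements, a common
# source of SQL injection. A real scanner would apply many such rules.
import re

SQL_CONCAT = re.compile(r"""execute\s*\(\s*["'].*["']\s*[%+]""")

def scan_source(text):
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if SQL_CONCAT.search(line):
            findings.append((lineno, "possible SQL injection: use parameterized queries"))
    return findings

sample = 'cursor.execute("SELECT * FROM users WHERE id = " + user_id)'
print(scan_source(sample))  # [(1, 'possible SQL injection: use parameterized queries')]
```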
CODE / PACKAGE VULNERABILITY determines vulnerabilities in dependent code used by an application. While source code written by developers may receive appropriate scanning as part of the "Code Scan" service, vulnerabilities in dependent code and packages need to be identified through a service of this kind.
An application may have numerous dependencies that are satisfied through numerous package management tools. Traditionally, package management has been associated with utilities such as "apt-get install" or "yum install". However, modern applications make use of several package managers, such as Python's pip or Node.js's npm. Worse, a package manager may not always exist, e.g., for .jar packages available on bintray. While operating system distributions such as Red Hat or Ubuntu regularly publish security bulletins for packages published in their repositories, the same cannot be said for every package manager. Consequently, it may not be possible to accurately determine the vulnerabilities associated with each package.
Nevertheless, such a service is crucial in identifying known vulnerabilities.
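For instance, a minimal sketch of such a lookup might match installed packages against a locally cached advisory feed keyed by package name and version; the feed structure below is an assumption, whereas real services consume distribution bulletins and CVE feeds:

```python
# Sketch: match installed packages against a cached advisory feed.
# The feed contents and structure are illustrative assumptions.
ADVISORIES = {
    ("openssl", "1.0.1e"): ["CVE-2014-0160 (Heartbleed)"],
    ("bash", "4.2"): ["CVE-2014-6271 (Shellshock)"],
}

def vulnerable_packages(installed):
    """`installed` is a list of (name, version) tuples reported by a package manager."""
    report = {}
    for name, version in installed:
        cves = ADVISORIES.get((name, version))
        if cves:
            report[(name, version)] = cves
    return report

print(vulnerable_packages([("openssl", "1.0.1e"), ("curl", "7.50.0")]))
```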
APPLICATION CONFIGURATION VALIDATION is required to determine the correctness of application configuration.
Modern applications are complex. They comprise hundreds and sometimes thousands of configuration settings. Getting application configuration right from a security perspective, when the application is deployed through layers of automation, is extremely hard.
An application configuration validation service validates the configuration of an application or its components from a security (and potentially performance) perspective and alerts the developer to incorrect configurations before the application is deployed into production.
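A minimal sketch of such a validation pass, with configuration keys and rules that are assumed purely for illustration:

```python
# Sketch: validate an application component's configuration against simple
# security rules before promotion to production. Keys and rules are illustrative.
def validate_config(config):
    issues = []
    if config.get("debug", False):
        issues.append("debug mode must be disabled in production")
    if not config.get("tls_enabled", False):
        issues.append("TLS must be enabled for external endpoints")
    if config.get("session_timeout_minutes", 0) > 30:
        issues.append("session timeout exceeds the 30-minute policy")
    return issues

webserver_config = {"debug": True, "tls_enabled": True, "session_timeout_minutes": 60}
for issue in validate_config(webserver_config):
    print("CONFIG VIOLATION:", issue)
```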
MALWARE / VIRUS SCAN is a service that determines if any malware or virus has crept into the immutable image of an application build.
LICENSE VERIFICATION is a service that validates the license of application components being deployed.
License verification is needed for two reasons. First, the service ensures that only components with known licenses are deployed. For a capability delivered as a service, this is less of a concern, since even software components with the most restrictive licenses, such as GPL, can potentially be used in delivering the service.
The other reason for license verification is appropriate charge-back. For traditional enterprise software, it is important to determine the appropriate licenses for the software being deployed. However, charge-back for deployed software becomes less of a concern as traditional software is increasingly delivered as a service.
DEVELOPMENT / AUTOMATION CREDENTIAL MANAGEMENT is critical to good security practice. Building an immutable image for an application is often done through automation (e.g., Jenkins or Travis). Ensuring appropriate access controls on such credentials, along with sound key management, is critical for good security hygiene.
As with user authentication, authorization, and access, any credentials required for automation must be kept in a credential or key store that is delivered as a service.
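The sketch below shows the intent: build automation fetches credentials at run time from a key store instead of baking them into the image or the CI configuration. The in-memory store here merely stands in for a real, remote, audited key-management service:

```python
# Sketch: automation fetches credentials from a key store at run time rather
# than committing them to code or an image. The in-memory store below merely
# simulates a real key-management service.
class KeyStore:
    def __init__(self):
        self._secrets = {}  # a real store is a remote, access-controlled, audited service

    def put(self, path, value):
        self._secrets[path] = value

    def get(self, path):
        return self._secrets[path]  # a real client would authenticate and call an API

store = KeyStore()
store.put("ci/registry-password", "s3cr3t")  # set once by a security administrator

# In the build job: fetched at run time, never committed to code or the image.
registry_password = store.get("ci/registry-password")
```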
Secure DevOps - Deploy and Test Phase
Application deployment and testing happen continuously and iteratively with development, and eventually with running. The following key capabilities during deployment and testing, delivered through an API, can help reduce the "security" burden on a developer.
APPLICATION PATTERNS allow developers to specify how the various components of an application are combined, typically over a network, to deliver the application's function. Such a specification is then executed by a deployment engine.
Cloud platforms provide numerous ways of specifying a template for application deployment. These templates vary in flexibility and ease of use. Typically, the templates that are highly flexible make it onerous for developers to specify correct security properties.
From a security perspective, application deployment specifications need to follow the "secure by default" principle while giving developers the flexibility to override any details.
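One way to make "secure by default" concrete is for the pattern engine to merge the developer's minimal specification over hardened defaults, so anything not explicitly requested stays locked down. The field names below are assumptions, not a real template schema:

```python
# Sketch: merge a developer's deployment spec over secure defaults so only
# explicit choices relax the posture. Field names are illustrative only.
SECURE_DEFAULTS = {
    "tls": "required",
    "exposed_ports": [],  # nothing exposed unless explicitly requested
    "storage_encryption": True,
    "admin_interface_public": False,
}

def render_pattern(developer_spec):
    pattern = dict(SECURE_DEFAULTS)
    pattern.update(developer_spec)  # explicit choices win; defaults fill the rest
    return pattern

print(render_pattern({"exposed_ports": [443]}))
```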
APPLICATION CREDENTIALS AND KEYS need to be managed in the same way as user credentials or deployment-tool credentials.
Applications comprise multiple components, such as a web server and a database server (see this example), or even remote services delivered through APIs. Credentials are required for communication among components or with remote services. Such credentials, referred to as application credentials and keys, must also be stored in key-management systems.
Thus, a key-management system will store credentials for deployment automation as well as for inter-component communication. Such credentials will likely vary across the development, staging, and production pipelines.
These keys may also need to be rotated periodically, which places an additional burden on the developer. A service that automatically [re]generates keys and configures the application or its components with the newly generated keys can significantly reduce the key-management burden on the developer.
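A rotation service along these lines might look like the following sketch, which generates a new key, stores it, and pushes it to the components that use it; the store and the component-update call are placeholders for real service APIs:

```python
# Sketch: periodic rotation of an application key. A new key is generated,
# stored, and pushed to consuming components. The store and the component
# update call are placeholders for real service APIs.
import secrets

def rotate_key(store, key_path, components):
    new_key = secrets.token_urlsafe(32)  # fresh random key
    store[key_path] = new_key  # persist in the key store
    for component in components:
        update_component(component, key_path)  # reconfigure each consumer
    return new_key

def update_component(component, key_path):
    # Placeholder: a real implementation would call the component's
    # reconfiguration API or trigger a redeploy with the new credential.
    print(f"reconfigured {component} to use the rotated key at {key_path}")

key_store = {}
rotate_key(key_store, "app/db-password", ["webserver", "worker"])
```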
SECURE AUTO SCALING. Applications deployed in a cloud need to scale as load increases. Ideally, this scaling happens automatically. Scaling implies that portions of incoming traffic will be routed to a newly spun-up application component responsible for handling that traffic. Such new routing must also be secure; that is, any credentials needed must be provisioned to a newly spun-up component, and removed from a decommissioned one, at run time.
MONITORING must be configured for an application's components, whether for a fresh deployment, an upgrade, or an auto-scaling event. Monitoring encompasses traditional metrics such as CPU, memory, disk, and network; application-specific metrics; and logs. Monitoring can be passive or active.
As part of the monitoring configuration, malware and anti-virus scanning may also be configured.
APPLICATION NETWORK SCANS and AUTOMATED PENETRATION TESTING. When an application is deployed or updated, appropriate network scans and penetration tests must be performed on it before exposing it to general users. Typically, network scans make use of tools such as Nessus, while penetration tests are done through a manual process. These scans and tests can be invoked automatically upon a new deploy or commit in dev, staging, or production.
With some aid from the developer, penetration tests can also be performed automatically, contributing to an automated secure DevOps pipeline.
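A minimal sketch of wiring this into the pipeline: a deployment event triggers both a network scan and a penetration test through a security service client. The event shape and client interface are assumptions made for the sketch:

```python
# Sketch: trigger a network scan and an automated penetration test whenever a
# deployment completes. The event shape and client interface are assumptions.
def on_deployment(event, security_api):
    target = event["endpoint"]  # e.g., "https://staging.example.com"
    scan_id = security_api.start_network_scan(target)
    pentest_id = security_api.start_pen_test(target, profile="web-default")
    return {"network_scan": scan_id, "pen_test": pentest_id}

class FakeSecurityAPI:
    """Stand-in for a real security service client, used only for this sketch."""
    def start_network_scan(self, target):
        return f"scan-{abs(hash(target)) % 1000}"

    def start_pen_test(self, target, profile):
        return f"pentest-{abs(hash((target, profile))) % 1000}"

print(on_deployment({"endpoint": "https://staging.example.com"}, FakeSecurityAPI()))
```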
ENCRYPTED / INTEGRITY STORAGE. Applications deployed in a cloud may require the underlying storage to be encrypted or to provide guarantees against tampering. A cloud may provide encrypted and integrity-protected storage as part of its offerings. An application developer may configure the use of such storage in an application pattern or otherwise.
DevOps personnel may specify the use of encrypted storage as part of application patterns. They may bring their own keys or have the cloud auto-generate the keys used to encrypt storage. Both types of keys need to be managed, just like user and API keys.
NETWORK AND APPLICATION FIREWALLS / IDS AND APPLIANCES may also need to be deployed to meet regulatory compliance and good security practice.
In application pattern templates, a developer may indicate the applicable compliance regime. The cloud can then automatically deploy and configure network and firewall appliances on behalf of the user.
Admittedly, deploying and configuring network and application firewalls is a "black art". It is often very difficult to get right because of the myriad configuration options.
By following the "secure by default" principle and converting the most commonly used aspects of these appliances into functions delivered through APIs, the deployment and configuration of these devices can be integrated into a secure DevOps pipeline.
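As an illustration, a pattern engine could translate a declared compliance regime into a baseline set of firewall rules. The regime names and rule sets below are simplified assumptions, not an authoritative mapping of any standard:

```python
# Sketch: derive baseline firewall rules from a declared compliance regime.
# The regime names and rule sets are simplified illustrations, not a real mapping.
BASELINES = {
    "pci-dss": [
        {"direction": "inbound", "port": 443, "action": "allow"},
        {"direction": "inbound", "port": "*", "action": "deny"},  # default deny
    ],
    "hipaa": [
        {"direction": "inbound", "port": 443, "action": "allow"},
        {"direction": "inbound", "port": 22, "action": "allow", "source": "admin-vpn"},
        {"direction": "inbound", "port": "*", "action": "deny"},
    ],
}

DEFAULT_DENY = [{"direction": "inbound", "port": "*", "action": "deny"}]

def firewall_rules(compliance_regime):
    return BASELINES.get(compliance_regime.lower(), DEFAULT_DENY)

print(firewall_rules("PCI-DSS"))
```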
TESTING requires the various components of an application, and the application as a whole, to be tested continuously. The tests, whether unit, functional, or integration, must be written by developers, and they should be invoked automatically as part of the build or deploy phases.
Secure DevOps - Run Phase
The run phase invokes certain capabilities of the develop-and-build and deploy-and-test phases in a continuous manner. These capabilities, explained earlier, include:
- CODE / PACKAGE VULNERABILITY SCAN
- MALWARE / VIRUS SCAN
- APPLICATION NETWORK SCANS
- AUTOMATED PENETRATION TESTS
The following additional capabilities are needed in the run phase.
APPLICATION / CLOUD CONFIGURATION VALIDATION. Once applications are deployed in a cloud, the cloud configuration also needs to be validated. Cloud configurations encompass the configurations of various cloud-based services, such as firewalls, encrypted storage, key lengths, security groups, geographical distribution, and so on. Thus, as part of running the application in a cloud, both application and cloud configurations need to be validated together.
SCANNING FOR SENSITIVE INFORMATION IN LOGS. Sensitive information, such as personal health information (PHI) or social security numbers, needs to be scrubbed from application logs that may otherwise be viewed for debugging or administrative purposes. If best practices were followed during development, such information would not end up in logs in the first place. Nevertheless, a scanning service can tag sensitive pieces of information in logs so that they can be scrubbed or removed altogether.
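A minimal sketch of such a scrubbing pass using pattern matching; the patterns below cover only U.S. social security numbers and e-mail addresses and are deliberately simple, not exhaustive:

```python
# Sketch: scrub likely sensitive values (SSNs, e-mail addresses) from log lines.
# The patterns are deliberately simple illustrations, not an exhaustive list.
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def scrub(line):
    for pattern, replacement in PATTERNS:
        line = pattern.sub(replacement, line)
    return line

print(scrub("user jane@example.com filed a claim, ssn 123-45-6789"))
# -> "user [REDACTED-EMAIL] filed a claim, ssn [REDACTED-SSN]"
```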
EVENT LOGGING AND MONITORING that was configured in the deploy and test phase must be watched for any application events, malfunctions, or incidents.
ENCRYPTION OF COMMUNICATION. Communication from end users of an application, and among the various components of an application, must be appropriately encrypted, meeting the security and compliance guidelines. This communication must be monitored periodically, especially around configuration changes, to ensure that communication that was encrypted using, say, high-strength ciphers has not been downgraded to a low-strength cipher as a result of an update.
If the secure DevOps pipeline is followed completely, all application component communication will likely be encrypted or confined to isolated networks. The encryption setup can be specified as part of the application pattern.
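One periodic check along these lines is to connect to each endpoint and confirm the negotiated protocol and cipher. The sketch below uses Python's standard library; the minimum protocol version and the target host are illustrative choices:

```python
# Sketch: verify that an endpoint still negotiates a strong TLS configuration.
# The minimum protocol version and the target host are illustrative choices.
import socket
import ssl

def check_tls(host, port=443, minimum=ssl.TLSVersion.TLSv1_2):
    context = ssl.create_default_context()
    context.minimum_version = minimum  # refuse anything weaker
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # e.g., ('TLSv1.3', ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256))
            return tls.version(), tls.cipher()

# print(check_tls("example.com"))  # fails the handshake if only weak protocols are offered
```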
INTEGRITY MONITORING. The integrity of data stored by an application must be monitored through APIs. Any update to the data must be recorded and must be able to raise an alert so that appropriate actions can be taken.
OPERATIONS of the service are fully automated, and changes are logged. These changes include bringing up or tearing down services, logging for administrative purposes, and so on.
Secure DevOps - Decommission
Eventually, a component of an application or the entire application may need to be decommissioned. Partial decommissioning may happen, for example, if the load on the application decreases. Full decommissioning may happen if the application is no longer needed. As part of decommissioning, scrubbing of resources may be needed. These resources include:
LOG / INFORMATION SCRUBBING
PHYSICAL AND VIRTUAL RESOURCES
As part of virtual resource decommissioning, any keys that were used but are no longer needed must be deleted appropriately.
Secure DevOps - Policy and Verification That Puts it All Together
Since security features are delivered via APIs as part of the secure DevOps pipeline, there is a need for a policy that checks for violations of these security features, and a verifier engine that validates the results of the various features along the secure DevOps pipeline.
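A sketch of what such a verifier might do: collect the result reported by each capability invoked along the pipeline and evaluate it against a deployment policy. The capability names and the policy rules are assumptions made for illustration:

```python
# Sketch: a policy verifier that gates deployment on results reported by the
# security capabilities in the pipeline. Names and rules are illustrative.
POLICY = {
    "code_scan": lambda r: r["critical_findings"] == 0,
    "package_vulns": lambda r: r["high_severity"] == 0,
    "pen_test": lambda r: r["status"] == "passed",
    "config_validation": lambda r: not r["violations"],
}

def verify(pipeline_results):
    failures = [
        name
        for name, check in POLICY.items()
        if name not in pipeline_results or not check(pipeline_results[name])
    ]
    return {"allowed": not failures, "failed_checks": failures}

results = {
    "code_scan": {"critical_findings": 0},
    "package_vulns": {"high_severity": 2},
    "pen_test": {"status": "passed"},
    "config_validation": {"violations": []},
}
print(verify(results))  # {'allowed': False, 'failed_checks': ['package_vulns']}
```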
In summary, delivering all security functions through an API and making them readily consumable by developers is not easy. The ingredients that will make a secure DevOps pipeline possible are key management (user, automation, application), scanning (code, package, configuration, network, malware, virus), testing (penetration and functional), patterns, logging (API, access), and authn/authz.