Wednesday, April 20, 2022

20 DevSecOps Best Practices

How can secure software be developed at
speed and scale?

This is the ultimate IT paradox for modern global enterprise organisations: go faster and innovate. But always stay secure.

DevSecOps is the answer to integrating these seemingly contradictory enterprise challenges into a coherent and effective approach to software delivery. Essentially, it is the integration of security into DevOps practices. Taking a DevSecOps approach, security issues can be identified early in the development process rather than after a product is released.

How does it work?

Testing, monitoring and reporting are codified and embedded in the continuous delivery pipeline and fast feedback loops are then generated regarding the state of your infrastructure security, across your system.

Essentially, all the governance standards of your organisation can be ‘hardened’ into your infrastructure via code before you ever deploy applications onto it.

What Are the Benefits of DevSecOps?

There are many benefits of including security at every stage of the software delivery lifecycle. We’ve listed the key ones out below:

  • Cost reduction is achieved by detecting and fixing security issues during the development phases.

  • Speed of delivery is increased as security bottlenecks are minimised or eliminated.

  • Speed of recovery is enhanced in the case of a security incident by utilising templates and the ‘pets vs. cattle’ methodology.

  • Enhanced monitoring and auditing leads to improved threat hunting, which reduces the likelihood of a breach, avoiding bad publicity and reputational damage (to say nothing of regulator fines).

  • Immutable infrastructure allows companies to tear down infrastructure while managing an attack vector identified by scanning. If a node is compromised, it won’t remain compromised for long, as it will be torn down and rebuilt with new credentials. Zero defects in the code is the ideal to aim for, although zero variation is the minimum requirement.

  • Immutable infrastructure improves overall security by reducing vulnerabilities and increasing code coverage and automation. It also encourages companies to move to the cloud instead of using aging and increasingly vulnerable hardware.

  • Security auditing, monitoring, and notification systems are managed and deployed so that they can be continuously enhanced to keep in step with the frantic innovation intrinsic to cybercrime.

  • The ‘secure by design’ principle is upheld through automated security review of code, automated application security testing, and educating and empowering developers to use secure design patterns.

  • Targeted customer value is created through secure iterative innovation at speed and scale.

  • Security is federated and becomes the responsibility of everyone, not just a specialised team or individual.

  • DevSecOps fosters a culture of openness and transparency from the earliest stages of development.

  • Sales increase, as it is much easier to sell a demonstrably secure product.

In summary:

By taking a DevSecOps approach, the cost of complying with regulation and governance standards is reduced overall and the speed of software delivery is increased. Simultaneously, greater transparency enables superior threat hunting across the board and much more flexible reaction and recovery times. Fundamentally though, DevSecOps helps enterprises to innovate securely at speed and scale.

So what are the ingredients of a successful DevSecOps transformation? It’s time to take a closer look at the DevSecOps best practices across three key pillars: people, process and technology.

DevSecOps Best Practices

DevSecOps Best Practices: People

No matter how many technologies you decide to implement, the weakest link of that chain will always be the human factor, and this must be the starting point for any DevSecOps implementation.

One of the most important aspects of DevSecOps is challenging the way traditional security teams integrate with the wider business. Changing habits and raising awareness across all levels of a company are not easy tasks and require a top-down approach if attitudes are to change.

Let’s dive into some specific practices you can use when designing the people component of your transformation.

1. Breaking Down Barriers and Silos with Security Champions

For security to be effective, we need to include security concerns - and the security ‘mindset’ - as early as possible in the software delivery pipeline.

One way of doing this is with security champions.

Security champions are members of a team that help to make decisions about when and how to address security concerns. Security champions act as the ‘voice’ of security for a given product or team, and they assist in the triage of security bugs for their team or area. They are evangelists for the security mindset, obsessively expounding on the importance of security across all areas!

 

Some of the most important duties of the security champion include the following:

  • Emphasize security concerns across all teams - not just the ‘Security Team’
  • Evangelize the ‘security mindset’
  • Ensure that security is not a blocker on active development or reviews
  • Be empowered to make decisions
  • Work with the AppSec team on mitigation strategies
  • Help with QA and testing
  • Write tests (from unit tests to integration tests)
  • Help with the development of CI (Continuous Integration) environments.

2. Training and Upskilling Your Staff

Any successful DevSecOps program will invest in good training and professional development for its staff.

Training must be rooted in company goals, policies, and standards for software security, and learning media must be flexible and tailored. To foster and develop good security staff, organizations must provide new hires with the appropriate training and tools they need to do their jobs well, and to contribute to the successful release of secure software.

Engaging specialist security and DevOps training organisations to raise staff skills and awareness is essential for maintaining consumer trust. Good training ensures that standards are implemented correctly.

3. Culture is Everything

There are several definitions of DevSecOps, but the one that stands out universally is culture, automation, lean, measurement, and sharing (CALMS), which was coined by Jez Humble and adopted further by Synopsys’s very own Meera Rao. At its core, DevSecOps thrives on a culture and a mindset in which various cross-functional teams share a single goal of continuous software security.

To embed a culture of DevSecOps, it’s best to start with a few self-motivated and committed teams that are aligned to the goals of strategic DevSecOps initiatives. The strategic initiatives act as guiderails for these teams while they work to ingrain DevSecOps culture into day-to-day functions, balancing security, speed, and scale. Once the pilot teams adopt DevSecOps and start showing visible benefits, they become examples to other teams that could follow their footsteps.

The key to fostering a DevSecOps culture and mindset is to operate in iterations and work upward from individual project teams to the entire organization.

DevSecOps Best Practices: Process

The word is sometimes spoken in hushed tones so as not to upset any engineers or developers: process.

But process doesn’t need to be a four-letter word. It is a crucial enabler of the most challenging aspect of any DevSecOps transformation: your people.

As W. Edwards Deming (considered the grandfather of quality) once said: “A bad process will beat a good person every time.”

We strive to make sure that doesn’t happen. DevSecOps aims to align and implement common enterprise processes to facilitate cooperation and achieve more secure development processes.

Prior to implementing these processes, organizations would often respond too late and much too slowly to security issues. But when you step back, you realize that’s actually not surprising because, typically, processes are siloed within separate IT teams, which can lead to miscommunication, bottlenecks and, ultimately, delays. These bottlenecks and delays then increase your risk and will inevitably manifest as a lower bottom line.

DevSecOps, in contrast, makes it possible to create short, feedback-driven security loops that can quickly identify problems and react swiftly to them.

How does that work?

Let’s get back to our list of DevSecOps best practices. This time, it’s all about process.

 

4. Integration of Processes

Often when organizations or teams start integrating security activities and scanners into a DevSecOps pipeline, they tend to enable an overwhelming scope of rulesets and scan configurations. This hampers DevSecOps adoption in two ways. First, development teams suddenly see many security findings in their queues, making it impossible to address them all within a short sprint, which causes reluctance to fix security findings. Second, that loss of support and acceptance from development teams can threaten the entire DevSecOps culture.

It is key, therefore, to start small and early. Security testing should begin as far left in the SDLC as possible and should be done with a gradually increasing scope. For example, instead of enabling full scans or scans with the entire ruleset for a pre-commit security checkpoint, teams should consider keeping the ruleset limited to its top five vulnerabilities. The security activities that occur later in the SDLC can include deeper scans and reviews for prerelease security assurance.
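The "start small" approach above can be sketched in a few lines. The rule names below are invented for illustration; a real pre-commit gate would reference the IDs of whatever scanner your team uses.

```python
# Hypothetical pre-commit gate: limit blocking findings to a small,
# high-signal ruleset instead of the scanner's full output.
TOP_RULES = {"sql-injection", "hardcoded-secret", "command-injection",
             "path-traversal", "insecure-deserialization"}

def gate_findings(findings):
    """Keep only findings whose rule is in the limited pre-commit set."""
    return [f for f in findings if f["rule"] in TOP_RULES]

findings = [
    {"rule": "sql-injection", "file": "db.py"},
    {"rule": "style-naming", "file": "app.py"},   # noise at pre-commit stage
    {"rule": "hardcoded-secret", "file": "cfg.py"},
]
blocking = gate_findings(findings)
```

Later pipeline stages would call the same scanner with the full ruleset, so deeper findings still surface before release.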

5. Compliance

Implementing compliance doesn’t have to be a paper-based exercise. You can create metadata representing each compliance requirement and integrate it into your assets.

This metadata can also drive security policy automation: tagging assets makes it possible to implement the desired security architecture, for example, zoning. Imagine the ability to respond to a breach under the GDPR rules in under 72 hours.
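As a minimal sketch of compliance-as-metadata (the asset model and tag names here are invented), tagging assets lets breach-response automation immediately enumerate what is in scope for a given regulation:

```python
# Sketch: represent compliance requirements as metadata tags on assets
# so policy automation can query them. Asset names are illustrative.
assets = [
    {"name": "customers-db", "zone": "restricted", "tags": {"gdpr", "pci-dss"}},
    {"name": "marketing-site", "zone": "public", "tags": set()},
]

def assets_in_scope(assets, requirement):
    """Return the names of assets tagged with a given compliance requirement."""
    return [a["name"] for a in assets if requirement in a["tags"]]

# During breach response, quickly enumerate GDPR-relevant assets.
gdpr_assets = assets_in_scope(assets, "gdpr")
```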

6. Version Control, Metadata, and Orchestration

Within an automated world, the only constant is change, and change needs to be both consistent and traceable. To track all changes, you must ensure that adequate and immutable versioning is in place.

To allow for quick recovery, every action needs a version so that it can be managed in the same way that code is. Once changes are captured as metadata, operations teams can efficiently track and measure them.

Orchestration software doesn’t only provide a repeatable way to deploy infrastructure, it also provides a huge amount of metadata regarding any task. This metadata can not only be used by the orchestration software itself, but as an authoritative source for integrated tooling. Once coupled with versioning, orchestration software becomes a powerful source of information for all operational teams.
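A minimal sketch of the versioning idea, assuming a simple append-only change log (the change record shape is invented; real orchestration tools emit far richer metadata):

```python
import hashlib, json

# Sketch: record every infrastructure change as an immutable, versioned
# entry so operations teams can trace and measure each action.
def record_change(log, action, target, payload):
    entry = {
        "version": len(log) + 1,
        "action": action,
        "target": target,
        "payload": payload,
    }
    # A content hash makes later tampering with history detectable.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

log = []
record_change(log, "deploy", "web-tier", {"image": "app:1.4.2"})
record_change(log, "patch", "web-tier", {"image": "app:1.4.3"})
```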

 

7. Security Tooling in CI/CD

While it sounds perfectly logical to “build security in,” it’s easier said than done. One of the key challenges that teams face is a lack of understanding and tooling or processes to help build security into their software. Enabling teams to achieve this goal is vital to ensuring that they are able to build secure software.

Ensuring that software is secure starts even before writing code for it. Security activities such as threat modeling and architecture reviews can help set the course for the security requirements and controls to be implemented during the software development life cycle (SDLC). When implementing the requirements and controls, giving development teams enough training on how to write secure code and fix security issues is of utmost importance.

Ensuring visibility into security vulnerabilities also helps create awareness and much-needed feedback loops for identifying and fixing those vulnerabilities. For example, one way to give immediate feedback on code is to use IDE-based scanners to identify insecure code right in the developer’s workstation. Such tooling enables developers to code securely and fix vulnerabilities early.

8. Incident Management

Responding to security incidents should not be an improvised or non-scripted activity. Workflows and action plans should be created in advance to ensure the response to an incident is consistent, repeatable, and measurable.

In a DevSecOps world, proactive and preemptive threat hunting, as well as continuous detection and response to threats and vulnerabilities, means that there are fewer major incidents and more mitigations.
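The "scripted, not improvised" principle can be sketched as a runbook encoded in code. The incident type and step names below are invented; a real runbook would be tailored to your environment and tooling:

```python
# Sketch: encode an incident-response workflow in advance so the response
# is consistent, repeatable, and measurable. Steps are illustrative.
RUNBOOK = {
    "compromised-host": ["isolate", "snapshot-for-forensics",
                         "rotate-credentials", "rebuild-from-image", "review"],
}

def respond(incident_type, execute):
    """Run each predefined step in order, recording outcomes for later audit."""
    return [(step, execute(step)) for step in RUNBOOK[incident_type]]

# 'execute' would invoke real tooling; a stub stands in here.
audit_trail = respond("compromised-host", execute=lambda step: "ok")
```

Because every step and outcome is recorded, post-incident reviews can measure response time per step rather than reconstructing events from memory.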

 

 

9. Red Teams, Blue Teams and Bug Bounties

The use of red teams, blue teams and bug bounties also helps mitigate breaches. The purpose of red teams is to test the effectiveness of security programs; blue teams defend against the red team’s attacks.

All companies should deploy a red team to hunt for threats as part of the DevSecOps methodology. Red teams are built from security team personnel and are usually virtual to facilitate their ad hoc nature. Instead of discussing what is wrong with an application, the red team demonstrates what is wrong and provides the solution.

All companies should have a clear process for security researchers to disclose vulnerabilities. Otherwise, many do not get reported for fear of legal repercussions. It is also important that this be a secure method of communication as some countries have laws that would still put the individual who disclosed the information at risk if the vulnerability is disclosed in a way that could be intercepted (e.g. by email). Publishing a PGP (Pretty Good Privacy) key along with the method of communication gives you the best hope of being informed of current vulnerabilities -- hopefully before they are exploited.
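One widely adopted way to publish both the disclosure channel and the PGP key is a security.txt file (RFC 9116), served at /.well-known/security.txt. A minimal sketch, with placeholder example.com URLs:

```text
# Served at https://example.com/.well-known/security.txt (RFC 9116)
Contact: mailto:security@example.com
Expires: 2023-04-20T00:00:00Z
Encryption: https://example.com/pgp-key.txt
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security-policy
Preferred-Languages: en
```

The Encryption field points researchers at the PGP key so reports can be sent without risk of interception.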

All companies should also, occasionally, implement bug bounty programs -- rewards given for finding and reporting a bug in a software product.

10. Automation and Configuration Management

Automation is key when balancing security integrations with speed and scale. The adoption of DevOps already focuses on automation, and the same holds true for DevSecOps. Automating security tools and processes ensures teams are following DevSecOps best practices.

Automation ensures that tools and processes are used in a consistent, repeatable, and reliable manner. It’s important to identify which security activities and processes can be completely automated and which require some manual intervention. For example, running a SAST tool in a pipeline can be automated entirely; however, threat modeling and penetration testing require manual effort, so they cannot be fully automated. The same is true for processes. Sending feedback to stakeholders can be automated in a pipeline; however, a security sign-off requires some amount of manual intervention.
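The automated/manual split can be made explicit in the pipeline itself. A minimal sketch (activity names and the dispatch mechanism are invented for illustration):

```python
# Sketch: classify security activities so manual gates are explicit in the
# pipeline rather than ad hoc. Classifications are illustrative.
ACTIVITIES = {
    "sast-scan":          "automated",
    "dependency-scan":    "automated",
    "stakeholder-report": "automated",
    "threat-modeling":    "manual",
    "penetration-test":   "manual",
    "security-sign-off":  "manual",
}

def run_activity(name, run_tool, request_signoff):
    """Dispatch fully automated activities to tooling; route the rest to people."""
    if ACTIVITIES[name] == "automated":
        return run_tool(name)
    return request_signoff(name)

result = run_activity("sast-scan", run_tool=lambda n: "scanned",
                      request_signoff=lambda n: "awaiting-human")
```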

A successful automation strategy also depends on the tools and technology being used. One consideration in automation is whether a tool has enough interfaces to allow its integration with other subsystems. For example, to enable developers to do IDE scans, look for a SAST tool that supports common IDE software. Similarly, to integrate a tool into a pipeline, review whether the tool offers APIs, webhooks, or CLI interfaces that can be used to trigger scans and request reports.

Another consideration is a tool’s ability to be used ‘as code’, including configuration as code or pipeline as code, which can determine the level of automation achievable throughout the SDLC. As an example, a containerized AST tool can be deployed and run easily in an automated environment, thanks to its infrastructure-as-code capability. Similarly, CI systems like Jenkins allow global libraries to be defined that provide a pipeline-as-code feature, enabling global AST integrations across a large number of CI jobs.

11. Secure Coding Practices/Security as Code

Every organization that wants to integrate security into its DevOps workflows is likely to be torn between decisions about which security activities are needed and which type of tooling to buy. The key is to think first about when a security activity is performed in an SDLC. Each organization works in its own unique way when adopting DevSecOps, driven by its industry, maturity, and culture. The placement of security checkpoints will be unique as well.

For example, when developers have adequate training about coding securely, they often find it useful to perform security testing before code commits happen. That prevents developers from checking in insecure code. For other organizations, the earliest starting point for security scanning could be in their central integration pipelines, which are normally triggered right after source code gets merged from developer branches to the main branch.

After determining when to perform security activities, each checkpoint can be used to indicate which security activity and tool is most applicable. In the example above, a pre-commit security scan or an IDE-based scan could be implemented to shift security testing further left in the development stage. Additionally, the central integration pipelines could have more security checkpoints implemented, with activities such as deeper static application security testing (SAST), software composition analysis (SCA), dynamic application security testing (DAST)/interactive application security testing (IAST), or penetration testing.

 

12. Host Hardening

The practice of host hardening is not new, but if it were used more often, fewer services and applications would be unnecessarily exposed to the internet. Countless examples of security incidents can be directly related to leaving a generic attack surface that allows automated attack tooling to succeed in even the most basic attacks.

Minimizing the attack surface by not installing or running anything that is not required for the core application and utilizing security features native to your OS (e.g. kernel security modules in Linux) make this task easier.

The Center for Internet Security (CIS) has developed a set of industry-standard benchmarks for infrastructure hardening.

 

13. CI/CD for Patching

Once metadata has been associated with each asset, it can be used to implement patching at the CI/CD level. Feeds from threat intelligence and vulnerability management are compared with the deployed software stack to identify matches in the templates that are queued for deployment. Patching live systems becomes a thing of the past, limiting the impact of downtime. This also provides the ability to determine risk exposure in near real time.
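The feed-to-template matching step can be sketched as follows. The package names, versions, and feed format are invented; real code would also need proper version parsing rather than the naive string comparison used here:

```python
# Sketch: compare a vulnerability feed against the software stack recorded
# in deployment templates, so patching happens by rebuilding images rather
# than by touching live systems. Data is illustrative.
vuln_feed = [
    {"package": "openssl", "fixed_in": "3.0.8"},
    {"package": "log4j",   "fixed_in": "2.17.1"},
]
template_stack = {"openssl": "3.0.7", "nginx": "1.24.0"}

def needs_rebuild(stack, feed):
    """Return packages in the template that the feed marks as vulnerable."""
    # Naive lexicographic compare; real code should parse versions properly.
    return [v["package"] for v in feed
            if v["package"] in stack and stack[v["package"]] < v["fixed_in"]]

to_patch = needs_rebuild(template_stack, vuln_feed)
```

Because the check runs against templates queued for deployment, exposure can be assessed before anything reaches production.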

 

14. Application-level Auditing and Scanning

Auditing and scanning are crucial aspects of DevSecOps that allow businesses to fully understand their risk posture. Each of the following solutions represents a higher degree of security assurance of the code, as reflected in the organization’s risk appetite.

 

15. Source Code Scanning

Source code scanning should be covered by implementing Static Application Security Testing (SAST). SAST is used for scanning the source code repository, usually the master branch, identifying vulnerabilities and performing software composition analysis. It can be integrated into existing CI/CD processes.
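As a toy illustration of the principle behind SAST (real tools parse and analyze code far more deeply; the rule names and patterns below are invented), a scanner matches source against a set of vulnerability rules:

```python
import re

# Toy SAST sketch: match source text against simple vulnerability rules.
# Real SAST tools build syntax trees and data-flow graphs; this only
# illustrates the report shape a pipeline would consume.
RULES = {
    "hardcoded-password": re.compile(r"password\s*=\s*['\"]\w+['\"]", re.I),
    "eval-call":          re.compile(r"\beval\("),
}

def scan(source):
    """Return the sorted names of rules that the source text triggers."""
    return sorted(rule for rule, pat in RULES.items() if pat.search(source))

issues = scan('password = "hunter2"\nresult = eval(user_input)')
```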

 

16. Dynamic Application Security Testing (DAST)

Dynamic application security testing (DAST) tools are designed to scan staging and production websites in a running state, analyzing input fields, forms, and numerous other aspects of the web application for vulnerabilities. It’s important to recognize that any time you allow users to provide you with data (form fields, query strings, HTTP headers, etc.), you are allowing them to provide data that your web server or application code will have to deal with.

 

17. Pre-Deployment Auditing

Pre-deployment auditing uses a pre-defined template for building assets to ensure the desired, internally certified security level. The check is event-driven: when the target code is changed, a check is triggered. Validations at this stage should be blocking and integrated into the CD pipeline, since this is the last opportunity before release.

Traditional governance models significantly impede delivery velocity and are incompatible with the fundamental goal of DevSecOps—to ensure fast, safe, and secure delivery of software. As a result, along with security testing, governance activities should also be automated where possible.

Governance as code should be used to implement checks across the software delivery pipeline, and it should include the triggers for manual intervention needed to handle escalations and exceptions and to implement compensating controls. As an example of governance, consider the sign-off gates that an organization typically has before its releases. In many cases, sign-off gates are the direct implementation of controls a project wants to have, so that the state of security can be assessed before an important SDLC milestone is marked a success. Collaboration, obtaining buy-in, and enabling development and operations teams are key to ensuring that the governance model is inclusive and has the required adoption. Such enablement can be achieved using various feedback mechanisms, which include, but are not limited to:

  • Pausing the pipeline builds
  • Sending notifications
  • Creating defects and tracking them centrally
  • Breaking/stopping the pipeline build completely
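A governance-as-code gate choosing among the feedback mechanisms above can be sketched as follows (the severity scale and thresholds are invented; real gates would reflect your organization's policy):

```python
# Sketch: map the worst finding severity to one of the pipeline feedback
# mechanisms. Thresholds (0-10 scale) are illustrative, not prescriptive.
def governance_gate(findings):
    worst = max((f["severity"] for f in findings), default=0)
    if worst >= 9:
        return "break-build"          # stop the pipeline completely
    if worst >= 7:
        return "pause-for-signoff"    # pause and escalate to a human
    if findings:
        return "create-defects"       # track centrally, keep shipping
    return "pass"

action = governance_gate([{"severity": 7}, {"severity": 4}])
```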


18. Post-Deployment Auditing

Post-deployment auditing, compared with pre-deployment, is also event-driven, but the events that trigger checks include changes to policy as well as to code. So when either the infrastructure, or the standards (policies) that the infrastructure must meet, change, a check is triggered.

The idea behind post-deployment auditing is to ensure that the certified security level achieved with pre-deployment auditing is still applicable and valid. That’s why the number of post-deployment tests usually exceeds the number of pre-deployment tests.
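The re-validation idea amounts to a drift check: compare the running state against the certified baseline whenever either one changes. A minimal sketch (the setting names are invented):

```python
# Sketch: post-deployment check that re-validates running state against a
# certified baseline, catching drift in infrastructure or policy alike.
baseline = {"ssh_root_login": False, "tls_min_version": "1.2"}

def audit(running_state, baseline):
    """Return every setting whose running value has drifted from the baseline."""
    return {k: running_state.get(k) for k, v in baseline.items()
            if running_state.get(k) != v}

drift = audit({"ssh_root_login": True, "tls_min_version": "1.2"}, baseline)
```

A policy change simply means a new baseline, which re-triggers the same audit against the unchanged infrastructure.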

 

19. Automated Host/Container/External Vulnerability Scanning

Security vulnerabilities are commonly reported differently than quality and functional defects. Often, organizations maintain the two types of findings—security and quality—in two different locations. This reduces the visibility each team and role has when they look at the overall security posture of their project.

Maintaining security and quality findings in one place helps teams treat both types of issues in the same manner and with the same importance. In reality, security findings, especially ones from automated scanning tools, can potentially be false positives. It becomes challenging, in such cases, to ask developers to review and fix those problems. One solution is to tune the security tooling over time by analyzing historical findings and application information, and by applying filters and custom rulesets to report only critical issues.

 

20. Secrets Management

‘Secrets’ in an information security environment include all the private information a team must handle, for example, database credentials or third-party API keys. Secrets should be accessed or generated temporarily, with specific authentication mechanisms that differ per environment, such that no one – not even the developers – can reverse the logic or exploit a backdoor around secrets just by having access to the source code.

The main purpose of managing these secrets is to eliminate (or at least minimize) the potential for human error in the handling of such private information, e.g. losing it or accidentally posting it in a public forum such as GitHub. The ideal technique is a synchronized, encrypted, auto-generated secrets store in which entities are temporary, with as short a time-to-live (TTL) as possible.
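The auto-generated, short-TTL idea can be sketched with an in-memory store. This is illustration only; a real deployment would use a dedicated secrets manager with encryption at rest and audited access:

```python
import secrets, time

# Sketch only: auto-generated secrets with a short time-to-live (TTL).
# A production system would use a proper secrets manager, not a dict.
class EphemeralSecrets:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}

    def issue(self, name):
        # Random token, never derived from anything in source code.
        token = secrets.token_urlsafe(32)
        self._store[name] = (token, time.time() + self.ttl)
        return token

    def get(self, name):
        token, expires = self._store.get(name, (None, 0.0))
        return token if time.time() < expires else None   # expired -> gone

vault = EphemeralSecrets(ttl_seconds=1)
token = vault.issue("db-password")
```

Once the TTL elapses, the secret is simply unusable, so a leaked credential has a very short window of value.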

 

References:

  • Tigera:  5 DevSecOps Best Practices You Must Implement to Succeed; https://www.tigera.io/learn/guides/devsecops/devsecops-bes
