The Risk of Undermanaged Open Source Software
There are many myths surrounding open source software, but one that continues to permeate the conversation is that open source is not as secure as proprietary offerings. At first glance, the claim seems plausible: how do you secure the supply chain for a product built in an environment where anyone can contribute?
But perceptions are changing, as open source code now runs many of the most advanced computational workloads in existence. In fact, according to Red Hat’s 2022 State of Enterprise Open Source report, 89% of respondents believe enterprise open source software is as secure as, or more secure than, proprietary software.
Even where misguided perceptions about security persist, they don’t seem to slow open source adoption. Open source powers some of the world’s most recognizable organizations that we rely on every day, from Netflix and Airbnb to Verizon and the American Red Cross. This usage continues to grow, with Forrester’s 2021 State of Application Security report indicating that 99% of audited codebases contain some amount of open source code. That would not be the case if the organizations deploying these solutions did not trust the security of the software they use.
Relying on open source doesn’t have to expose your organization to vulnerabilities, as long as the code is actually checked for them. Unlike proprietary software, open source code is fully visible and therefore verifiable, so the key to using open source for business is to make sure you don’t undermanage it. But while the opportunity is there, the expertise may not be: the auditability so often touted as an advantage of open source is not a practical option for every organization using it. Many users don’t have the time, expertise, or resources to perform security audits of the open source they consume, so they need other avenues to get similar assurances about that code. And when sensitive workloads are deployed, trust alone is not enough. “Trust but verify” is an important mantra to keep in mind.
There will always be some risk in using technology, and software in particular. But because software is deeply rooted in everything we do, not using it isn’t an option; instead, we focus on mitigating risk. Knowing where you get your open source from is your first line of defense.
When it comes to open source software, there are two primary options for organizations: community (or upstream) and composite (or downstream). Upstream refers to the community and project where contributions are made and releases originate. An example is the Linux kernel, which serves as the upstream project for all Linux distributions. Vendors take the unmodified kernel source, add patches, apply their own configuration, and build the kernel with the options they want to offer their users. The result is a composite, downstream open source offering or product.
Some risks are the same whether solutions are built with vendor-managed or upstream software; what changes is who is responsible for the maintenance and security of the code. Let’s make some assumptions about a typical organization: it can identify where all of its open source comes from, and 85% of it comes from a major vendor the organization regularly works with. The remaining 15% consists of software not available from that vendor and comes directly from upstream projects. For the 85% that comes from a vendor, all security vulnerabilities, security metadata, announcements and, most importantly, security patches come from that vendor. In this scenario, the organization has one place to get all the necessary security information and updates. It doesn’t need to check the upstream code for newly discovered vulnerabilities; it essentially just needs to keep an eye on the vendor and apply any patches.
On the other hand, it is the organization’s own responsibility to safeguard the security of the remaining 15% of open source code obtained directly from upstream. It must constantly monitor those projects for information about newly discovered vulnerabilities, patches and updates, which can take significant time and effort. And unless the organization has the resources to dedicate a team of people to this, systems can remain vulnerable, with costly consequences. In this hypothetical scenario, the uncurated open source is a much smaller share of the infrastructure, but the support burden for that 15% is considerably higher than for the 85% provided by the vendor.
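Part of that monitoring can be automated. As a minimal sketch, assuming a known package inventory (the package coordinates below are placeholders), the following checks a single package version against OSV.dev, a public vulnerability database that aggregates advisories across upstream open source ecosystems:

```python
# A minimal sketch of vulnerability monitoring against the OSV.dev API,
# using only the standard library; package name and version are placeholders.
import json
import urllib.request

def known_vulnerabilities(name: str, ecosystem: str, version: str) -> list[str]:
    """Return the OSV advisory IDs recorded for one package version."""
    query = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

# Example: check one upstream dependency (placeholder coordinates).
for advisory in known_vulnerabilities("jinja2", "PyPI", "2.4.1"):
    print("known vulnerability:", advisory)
```

Running a query like this for every directly consumed upstream package on a schedule is a modest start, though it still leaves the organization responsible for triaging and applying the fixes a vendor would otherwise deliver.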
While at first glance it may seem that the same effort is required to patch upstream open source code and vendor-supported open source code, there are important differences. Most upstream projects deliver fixes by updating the code in the most recent version (or branch) of the project. Patching a vulnerability therefore requires moving to that most recent version, which can pose its own risks: the newer version may contain changes that are incompatible with the organization’s use of the previous version, or it may contain other issues that have not yet been discovered, simply because the code is newer.
Vendors that manage and support open source software often backport vulnerability fixes to older versions, essentially isolating the upstream change that fixes a particular problem and applying it to an earlier release. This provides a more stable foundation for applications using that software while still addressing the newly discovered vulnerability. Backporting reduces the risk of introducing undiscovered vulnerabilities, and older software that is actively patched for security issues becomes more secure over time. Conversely, as new code is introduced in new versions of software, the risk of new security vulnerabilities grows.
That’s not to say you shouldn’t use upstream open source. Organizations can, and do, use software directly from upstream projects. There are many reasons to use upstream open source in production environments, including cost savings and access to the latest features. And no enterprise vendor can deliver all the open source that consumers can use. GitHub alone hosts millions of projects, making it impossible for any vendor to support them all.
There will likely be some upstream open source that is consumed directly, and this, along with any code written by the organization itself, is where most of a security team’s time and effort will be focused. If that footprint is small enough, the cost and associated risk will be smaller, too. Nearly every organization will consume some open source directly from upstream, and it needs to know what that code is, how and where it is used, and how to properly monitor upstream development for potential security vulnerabilities. Ideally, organizations will end up sourcing the majority of their open source from an enterprise vendor, lowering both the total cost of consumption and the associated risk.
Securing the software supply chain
Knowing where your open source comes from is the first step in reducing exposure, but supply chain attacks are still rising sharply. According to Sonatype’s 2021 State of the Software Supply Chain report, 2021 saw a 650% increase in software supply chain attacks targeting vulnerabilities in upstream open source ecosystems. One of the most widely publicized attacks had nothing to do with open source code itself; it was an attack on the integrity of a company’s patch delivery process. And with the number of high-profile, costly security attacks on organizations in the news in recent years, there is (rightly) more focus on supply chain security.
Different actions are needed to prevent or mitigate different types of attacks. In all cases, the principle of “trust but verify” is relevant.
Organizations can address this in part by shifting security left in new ways. Historically, shifting security left has focused on adding vulnerability scanning to the CI/CD pipeline. That is good “trust but verify” practice when using both vendor-supplied and upstream code, but vulnerability scanning alone is not sufficient. Beyond the binaries the pipeline produces, application deployments require additional configuration. For workloads deployed on Kubernetes platforms, that configuration can be provided through Kubernetes pod security contexts, ConfigMaps, deployments, operators, and/or Helm charts. Configuration data should also be scanned for potential risks, such as excessive privileges, including requests for access to host volumes and host networks.
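As a minimal sketch of what such a configuration check might look like (assuming manifests stored as plain YAML files; the checks and helper name are illustrative, and purpose-built scanners and admission controllers go much further), the following flags a few of the risky settings mentioned above:

```python
# A sketch of scanning Kubernetes manifests for excessive privileges.
# Requires PyYAML (pip install pyyaml); checks shown are illustrative.
import sys
import yaml

def risky_settings(pod_spec: dict) -> list[str]:
    """Flag a handful of high-risk settings in a pod spec."""
    findings = []
    if pod_spec.get("hostNetwork"):
        findings.append("pod requests access to the host network")
    for volume in pod_spec.get("volumes", []):
        if "hostPath" in volume:
            findings.append(f"volume {volume.get('name')!r} mounts a host path")
    for container in pod_spec.get("containers", []):
        ctx = container.get("securityContext", {})
        if ctx.get("privileged"):
            findings.append(f"container {container.get('name')!r} runs privileged")
        if ctx.get("allowPrivilegeEscalation", True):
            findings.append(
                f"container {container.get('name')!r} does not disable privilege escalation"
            )
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for doc in yaml.safe_load_all(f):
                if not doc:
                    continue
                # Deployments nest the pod spec under spec.template.spec;
                # bare Pods keep it directly under spec.
                spec = doc.get("spec", {}).get("template", {}).get("spec") or doc.get("spec", {})
                for finding in risky_settings(spec):
                    print(f"{path}: {finding}")
```

Run against manifest files before deployment, a check like this catches privilege requests that a binary vulnerability scan would never see.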
In addition, organizations must protect their supply chains from intrusion. To support this effort, organizations are adopting new technologies in their software pipelines, such as Tekton Chains, which attests to the steps in a CI/CD pipeline, as well as technologies such as Sigstore, which makes it easier to sign artifacts in the pipeline itself rather than after the fact.
Sigstore is an open source project that improves the security of software supply chains in an open, transparent and accessible way by making cryptographic signing easier. A digital signature effectively freezes an object in time, indicating that it has been verified to be what it claims to be in its current state and that it has not been altered in any way. By digitally signing the artifacts that make up applications, including software bills of materials, component manifests, configuration files and the like, users gain insight into the chain of custody.
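To illustrate the underlying idea (a conceptual sketch only, not Sigstore’s actual keyless signing flow, which is more involved), here is how signing and verification guarantee integrity, using an Ed25519 key pair with Python’s cryptography library:

```python
# A conceptual sketch of artifact signing and verification, assuming the
# `cryptography` package (pip install cryptography); Sigstore's keyless
# flow is more involved than this simple key-pair illustration.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

artifact = b"contents of a build artifact"

# The signer generates a key pair and signs the artifact.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(artifact)

# A consumer verifies the artifact against the published signature.
try:
    public_key.verify(signature, artifact)
    print("signature valid: artifact unchanged since signing")
except InvalidSignature:
    print("signature invalid: artifact was altered")

# Any modification after signing breaks verification.
try:
    public_key.verify(signature, artifact + b"tampered")
except InvalidSignature:
    print("tampered artifact correctly rejected")
```

The same property, that any change to the signed bytes invalidates the signature, is what lets signed pipeline artifacts establish a verifiable chain of custody.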
In addition, proposed standards for delivering software bills of materials (SBOMs) have been around for quite some time, and we have reached the point where every organization needs to figure out how to deliver one. Standards are needed not only for the static information in an SBOM, but also for the corresponding, separate, dynamic information such as vulnerability data, where the software package itself has not changed but the vulnerabilities associated with it have.
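As a small illustration of that split (assuming a CycloneDX-style SBOM in JSON form; the file name is a placeholder and only a few common fields are shown), the static component records live in the SBOM, while vulnerability status is looked up separately and joined against the component identifiers:

```python
# A sketch of reading the static component list from a CycloneDX-style
# SBOM in JSON form; the file name is a placeholder.
import json

with open("sbom.cyclonedx.json") as f:
    sbom = json.load(f)

# Static facts about the shipped software: name, version, identifier.
# These do not change unless the software itself changes.
for component in sbom.get("components", []):
    print(component.get("name"), component.get("version"), component.get("purl"))

# Vulnerability status is dynamic: it changes while the package does not,
# so it is published separately (for example, in VEX documents or a
# vulnerability database) and joined against these identifiers at query time.
```

Keeping the two apart means an SBOM signed at build time stays valid, while the vulnerability picture for its components can be refreshed continuously.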
While security may seem like a constantly moving target, the intense scrutiny of software security in recent years means more strategies and tools are being developed and deployed to mitigate risk. That said, addressing security effectively requires organizations to regularly review and iterate on their security policies and tool choices, and to ensure that everyone in the organization is involved in and trained on these processes.
Kirsten Newcomer is director of cloud and DevSecOps strategy at Red Hat.
Vincent Danen is VP of Product Security at Red Hat.