The National Telecommunications and Information Administration (NTIA) recently asked for wide-ranging feedback to define a minimum Software Bill of Materials (SBOM). It was framed with a single, simple question (“What is an SBOM?”), and constituted an incredibly important step towards software security and a significant moment for open standards.

From NTIA’s SBOM FAQ: “A Software Bill of Materials (SBOM) is a complete, formally structured list of components, libraries, and modules that are required to build (i.e. compile and link) a given piece of software and the supply chain relationships between them. These components can be open source or proprietary, free or paid, and widely available or restricted access.” SBOMs that can be shared without friction between teams and companies will be a core part of software management for critical industries and digital infrastructure in the coming decades.

The ISO International Standard for open source license compliance (ISO/IEC 5230:2020 – Information technology — OpenChain Specification) requires a process for managing a bill of materials for supplied software. This aligns with the NTIA goals for increased software transparency and illustrates how the global industry is addressing challenges in this space. For example, it has become a best practice to include an SBOM for all components in supplied software, rather than isolating these materials to open source.

The open source community identified the need for an SBOM “list of ingredients” and began addressing the challenge over a decade ago. The de facto industry standard, and most widely used approach today, is the Software Package Data Exchange (SPDX). All of the elements in the NTIA’s proposed minimum SBOM definition can be addressed by SPDX today, as can broader use cases.

SPDX has evolved organically over the last decade to suit the software industry, covering issues like license compliance, security, and more. The community consists of hundreds of people from hundreds of companies, and the standard itself is the most robust, mature, and widely adopted SBOM standard in the market today.

The full SPDX specification is only one part of the picture. Optional components such as SPDX Lite, developed by Pioneer, Sony, Hitachi, Renesas, and Fujitsu, among others, provide a focused SBOM subset for smaller suppliers. The community-driven approach behind SPDX allows practical use cases to be addressed as they arise.

In 2020, SPDX was submitted to ISO via the PAS Transposition process of Joint Technical Committee 1 (JTC1) in collaboration with the Joint Development Foundation. It is currently in the approval phase of the transposition process and can be reviewed on the ISO website as ISO/IEC PRF 5962.

The Linux Foundation has prepared a submission for the NTIA highlighting knowledge and experience gained from practical deployment and usage of SBOMs in the SPDX and OpenChain communities. These include observations on the utility of specific practices, such as tracking timestamps and including data licenses in metadata. With the backing of many parties across the worldwide technology industry, the SPDX and OpenChain specifications are constantly evolving to support all stakeholders.

Industry Comments

“The Sony team uses various approaches to managing open source compliance and governance… An example is using an OSS management template sheet based on SPDX Lite, a compact subset of the SPDX standard. Teams need to be able to review the type, version, and requirements of software quickly, and using a clear standard is a key part of this process.”

Hisashi Tamai, SVP, Sony Group Corporation, Representative of the Software Strategy Committee

“Intel has been an early participant in the development of the SPDX specification and utilizes SPDX, as well as other approaches, both internally and externally for a number of open source software use-cases.”

Melissa Evers, Vice President – Intel Architecture, Graphics, Software / General Manager – Software Business Strategy

Scania corporate standard 4589 (STD 4589) was just made available to our suppliers and defines the expectations we have when Open Source is part of a delivery to Scania. So what is it we ask for in a relationship with our suppliers when it comes to Open Source? 

1) That suppliers conform to ISO/IEC 5230:2020 (OpenChain). If a supplier conforms to this specification, we feel confident that they have a professional management program for Open Source.  

2) If in the process of developing a solution for Scania, a supplier makes modifications to Open Source components, we would like to see those modifications contributed to the Open Source project. 

3) Supply a Bill of materials in ISO/IEC DIS 5962 (SPDX) format, plus the source code where there’s an obligation to offer the source code directly, so we don’t need to ask for it.

Jonas Öberg, Open Source Officer – Scania (Volkswagen Group)

The SPDX format greatly facilitates the sharing of software component data across the supply chain. Wind River has provided a Software Bill of Materials (SBOM) to its customers using the SPDX format for the past eight years. Often customers will request SBOM data in a custom format. Standardizing on SPDX has enabled us to deliver a higher quality SBOM at a lower cost.

Mark Gisi, Wind River Open Source Program Office Director and OpenChain Specification Chair

The Black Duck team from Synopsys has been involved with SPDX since its inception, and I had the pleasure of coordinating the activities of the project’s leadership for more than a decade. In addition, representatives from scores of companies have contributed to the important work of developing a standard way of describing and communicating the content of a software package.

Phil Odence, General Manager, Black Duck Audits, Synopsys

With the rapidly increasing interest in the types of supply chain risk that a Software Bill of Materials helps address, SPDX is gaining broader attention and urgency. FossID (now part of Snyk) has been using SPDX from the start as part of both software component analysis and for open source license audits. Snyk is stepping up its involvement too, already contributing to efforts to expand the use cases for SPDX by building tools to test out the draft work on vulnerability profiles in SPDX v3.0.

Gareth Rushgrove, Vice President of Products, Snyk

For more information on OpenChain: https://www.openchainproject.org/

For more information on SPDX: https://spdx.dev/

Author: Kate Stewart, VP of Dependable Systems, The Linux Foundation

In a previous Linux Foundation blog, David A. Wheeler, director of LF Supply Chain Security, discussed how capabilities built by Linux Foundation communities can be used to address the software supply chain security requirements set by the US Executive Order on Cybersecurity. 

One of those capabilities, SPDX, completely addresses the Executive Order’s 4(e), 4(f), and 10(j) requirements for a Software Bill of Materials (SBOM). The SPDX specification is implemented as a file format that identifies the software components within a larger piece of computer software and metadata such as the licenses of those components.

SPDX is an open standard for communicating software bill of materials (SBOM) information, including components, licenses, copyrights, and security references. It has a rich ecosystem of existing tools and provides a common format for companies and communities to share important data, streamlining and improving the identification and monitoring of software.
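To make the format concrete, here is a minimal SPDX 2.2 document in the tag-value syntax, describing a single package. The package name, supplier, namespace, and license values are hypothetical, and a real document would typically carry many more packages, files, and relationships:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-1.0
DocumentNamespace: https://example.com/spdxdocs/example-app-1.0
Creator: Tool: example-sbom-generator
Created: 2021-06-01T12:00:00Z

PackageName: libwidget
SPDXID: SPDXRef-Package-libwidget
PackageVersion: 2.3.1
PackageSupplier: Organization: Example Corp
PackageDownloadLocation: https://example.com/libwidget-2.3.1.tar.gz
PackageLicenseConcluded: MIT
PackageLicenseDeclared: MIT
PackageCopyrightText: Copyright (c) Example Corp

Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-Package-libwidget
```

The `Relationship` tag is what lets an SBOM capture supply chain structure: DESCRIBES, CONTAINS, DEPENDS_ON, and similar relationship types link the document and its packages together.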

SBOMs have numerous use cases. They have frequently been used for license compliance but are equally useful in security, export control, and broader processes such as mergers and acquisitions (M&A) or venture capital investments. SPDX maintains an active community to support these various uses, modeling its governance and activity on the same format that has successfully supported open source software projects over the past three decades.

The LF has been developing and refining SPDX for over ten years and has seen extensive uptake by companies and projects in the software industry. Notable recent examples are the contributions by companies such as Hitachi, Fujitsu, and Toshiba in furthering the standard via optional profiles like “SPDX Lite” in the SPDX 2.2 specification release and in supporting SPDX SBOMs in proprietary and open source automation solutions.

This de facto standard has been submitted to ISO via the Joint Development Foundation using the PAS Transposition process of Joint Technical Committee 1 (JTC1). It is currently in the enquiry phase of the process and can be reviewed on the ISO website as ISO/IEC DIS 5962.

A wide range of open source tooling is available today, with commercial tool options emerging as well. Companies such as FossID and Synopsys have been working with the SPDX format for several years. Open source tools like FOSSology (source code analysis), OSS Review Toolkit (generation from CI and build infrastructure), Tern (container content analysis), Quartermaster (build extensions), and ScanCode (source code analysis), in addition to the SPDX-tools project, have standardized on SPDX for interchange and participate in the Automated Compliance Tooling (ACT) umbrella project. ACT was discussed as a community-driven solution for software supply chain security remediation in our synopsis of the findings of the Vulnerabilities in the Core study, published by the Linux Foundation and the Harvard LISH in February 2020.

One thing is clear: A software bill of materials that can be shared without friction between different teams and companies will be a core part of software development and deployment in this coming decade. The sharing of software metadata will take different forms, including manual and automated reviews, but the core structures will remain the same. 

Standardization in this field, as in others, is the key to success. This domain has an advantage in that we benefit from an entire decade of prior work in SPDX. The task therefore becomes applying this standard to the various domains rather than creating, expanding, or refining new or budding approaches.

Start using the SPDX specification here: https://spdx.github.io/spdx-spec/. Development of the next revision is underway, so if there’s a use case you can’t represent with the current specification, open an issue; this is the right window for input.

To learn more about the many facets of the SPDX project see: https://spdx.dev/

Our communities take security seriously and have been instrumental in creating the tools and standards that every organization needs to comply with the recent US Executive Order.

Overview

The US White House recently released its Executive Order (EO) on Improving the Nation’s Cybersecurity (along with a press call) to counter “persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy.”

In this post, we’ll show what the Linux Foundation’s communities have already built that support this EO and note some other ways to assist in the future. But first, let’s put things in context.

The Linux Foundation’s Open Source Security Initiatives In Context

We deeply care about security, including supply chain (SC) security. The Linux Foundation is home to some of the most important and widely-used OSS, including the Linux kernel and Kubernetes. The LF’s previous Core Infrastructure Initiative (CII) and its current Open Source Security Foundation (OpenSSF) have been working to secure OSS, both in general and in widely-used components. The OpenSSF, in particular, is a broad industry coalition “collaborating to secure the open source ecosystem.”

The Software Package Data Exchange (SPDX) project has been working for the last ten years to enable software transparency and the exchange of software bill of materials (SBOM) data necessary for security analysis. SPDX is in the final stages of review to be an ISO standard, is supported by global companies with massive supply chains, and has a large open and closed source tooling support ecosystem. SPDX already meets the requirements of the executive order for SBOMs.

Finally, several LF foundations have focused on the security of various verticals. For example, LF Public Health and LF Energy have worked on security in their respective sectors. The cloud computing industry, collaborating within the CNCF, has also produced a guide for supporting software supply chain best practices for cloud systems and applications.

Given that context, let’s look at some of the EO statements (in the order they are written) and how our communities have invested years in open collaboration to address these challenges.

Best Practices

EO sections 4(b) and 4(c) say that

The “Secretary of Commerce [acting through NIST] shall solicit input from the Federal Government, private sector, academia, and other appropriate actors to identify existing or develop new standards, tools, and best practices for complying with the standards, procedures, or criteria [including] criteria that can be used to evaluate software security, include criteria to evaluate the security practices of the developers and suppliers themselves, and identify innovative tools or methods to demonstrate conformance with secure practices [and guidelines] for enhancing software supply chain security.” Later in EO 4(e)(ix) it discusses “attesting to conformity with secure software development practices.”

The OpenSSF’s CII Best Practices badge project specifically identifies best practices for OSS, focusing on security and including criteria to evaluate the security practices of developers and suppliers (it has over 3,800 participating projects). LF is also working with SLSA (currently in development) as potential additional guidance focused on addressing supply chain issues further.

Best practices are only useful if developers understand them, yet most software developers have never received education or training in developing secure software. The LF has developed and released its Secure Software Development Fundamentals set of courses available on edX to anyone at no cost. The OpenSSF Best Practices Working Group (WG) actively works to identify and promulgate best practices. We also provide a number of specific standards, tools, and best practices, as discussed below.

Encryption and Data Confidentiality

The EO 3(d) requires agencies to adopt “encryption for data at rest and in transit.” Encryption in transit is implemented on the web using the TLS (“https://”) protocol, and Let’s Encrypt is the world’s largest certificate authority for TLS certificates.

In addition, the LF Confidential Computing Consortium is dedicated to defining and accelerating the adoption of confidential computing. Confidential computing protects data in use (not just at rest and in transit) by performing computation in a hardware-based Trusted Execution Environment. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use.

Supply Chain Integrity

The EO 4(e)(iii) states a requirement for

 “employing automated tools, or comparable processes, to maintain trusted source code supply chains, thereby ensuring the integrity of the code.” 

The LF has many projects that support SC integrity, in particular:

  • in-toto is a framework specifically designed to secure the integrity of software supply chains.
  • The Update Framework (TUF) helps developers maintain the security of software update systems, and is used in production by various tech companies and open source organizations.  
  • Uptane is a variant of TUF; it’s an open and secure software update system design which protects software delivered over-the-air to the computerized units of automobiles.
  • sigstore is a project to provide a public good / non-profit service to improve the open source software supply chain by easing the adoption of cryptographic software signing (of artifacts such as release files and container images) backed by transparency log technologies (which provide a tamper-resistant public log). 
  • OpenChain (ISO 5230) is the International Standard for open source license compliance. Application of OpenChain requires identification of OSS components. While OpenChain by itself focuses more on licenses, that identification is easily reused to analyze other aspects of those components once they’re identified (for example, to look for known vulnerabilities).

Software Bill of Materials (SBOMs) support supply chain integrity; our SBOM work is so extensive that we’ll discuss that separately.

Software Bill of Materials (SBOMs)

Many cyber risks come from using components with known vulnerabilities. Known vulnerabilities are especially concerning in key infrastructure industries, such as the national fuel pipelines,  telecommunications networks, utilities, and energy grids. The exploitation of those vulnerabilities could lead to interruption of supply lines and service, and in some cases, loss of life due to a cyberattack.

One-time reviews don’t help since these vulnerabilities are typically found after the component has been developed and incorporated. Instead, what is needed is visibility into the components of the software environments that run these key infrastructure systems, similar to how food ingredients are made visible.

A Software Bill of Materials (SBOM) is a nested inventory or a list of ingredients that make up the software components used in creating a device or system. This is especially critical as it relates to a national digital infrastructure used within government agencies and in key industries that present national security risks if penetrated. Use of SBOMs would improve understanding of the operational and cyber risks of those software components from their originating supply chain.

The EO has extensive text about requiring a software bill of materials (SBOM) and tasks that depend on SBOMs:

  • EO 4(e) requires providing a purchaser an SBOM “for each product directly or by publishing it on a public website” and “ensuring and attesting… the integrity and provenance of open source software used within any portion of a product.” 
  • It also requires tasks that typically require SBOMs, e.g., “employing automated tools, or comparable processes, that check for known and potential vulnerabilities and remediate them, which shall operate regularly….” and “maintaining accurate and up-to-date data, provenance (i.e., origin) of software code or components, and controls on internal and third-party software components, tools, and services present in software development processes, and performing audits and enforcement of these controls on a recurring basis.” 
  • EO 4(f) requires publishing “minimum elements for an SBOM,” and EO 10(j) formally defines an SBOM as a “formal record containing the details and supply chain relationships of various components used in building software…  The SBOM enumerates [assembled] components in a product… analogous to a list of ingredients on food packaging.”

The LF has been developing and refining SPDX for over ten years; SPDX is used worldwide and is in the process of being approved as ISO/IEC Draft International Standard (DIS) 5962. SPDX is a file format that identifies the software components within a larger piece of computer software and metadata such as the licenses of those components. SPDX 2.2 already supports the current guidance from the National Telecommunications and Information Administration (NTIA) for minimum SBOM elements. Some ecosystems have ecosystem-specific conventions for SBOM information, but SPDX can provide information across arbitrary ecosystems.
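As an informal illustration of how the NTIA minimum elements map onto SPDX 2.2 tag-value fields, the sketch below checks a document for the fields that typically carry each element. This is not an official validator, and the element-to-field mapping is an approximation for illustration; the sample document contents are hypothetical.

```python
# Rough sketch (not an official validator): check which SPDX 2.2
# tag-value fields carrying the NTIA minimum SBOM elements are present.
# The element-to-field mapping below is an informal approximation.

NTIA_FIELD_MAP = {
    "Author of SBOM Data": "Creator",
    "Timestamp": "Created",
    "Component Name": "PackageName",
    "Version": "PackageVersion",
    "Supplier Name": "PackageSupplier",
    "Unique Identifier": "SPDXID",
    "Dependency Relationship": "Relationship",
}

# Hypothetical minimal SPDX tag-value input.
SAMPLE = """\
SPDXVersion: SPDX-2.2
Creator: Tool: example-sbom-generator
Created: 2021-06-01T12:00:00Z
PackageName: libwidget
SPDXID: SPDXRef-Package-libwidget
PackageVersion: 2.3.1
PackageSupplier: Organization: Example Corp
Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-Package-libwidget
"""

def check_ntia_minimum(spdx_text):
    """Return a {NTIA element: present?} map for a tag-value SPDX document."""
    # Collect the tag names (text before the first colon) on each line.
    tags = {line.split(":", 1)[0].strip()
            for line in spdx_text.splitlines() if ":" in line}
    return {element: field in tags for element, field in NTIA_FIELD_MAP.items()}

if __name__ == "__main__":
    for element, ok in check_ntia_minimum(SAMPLE).items():
        print(f"{element}: {'present' if ok else 'MISSING'}")
```

Running the script against the sample reports every element as present; dropping, say, the `Created` line would flag the Timestamp element as missing.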

SPDX is real and in use today, with increased adoption expected in the future. For example:

  • An NTIA “plugfest” demonstrated ten different producers generating SPDX. SPDX supports acquiring data from different sources (e.g., source code analysis, executables from producers, and analysis from third parties). 
  • A corpus of some LF projects with SPDX source SBOMs is available. 
  • Various LF projects are working to generate binary SBOMs as part of their builds, including the Yocto Project and Zephyr.
  • To assist with further SPDX adoption, the LF is paying to write SPDX plugins for major package managers.

Vulnerability Disclosure

No matter what, some vulnerabilities will be found later and need to be fixed. EO 4(e)(viii) requires “participating in a vulnerability disclosure program that includes a reporting and disclosure process.” That way, vulnerabilities that are found can be reported to the organizations that can fix them. 

The CII Best Practices badge passing criteria requires that OSS projects specifically identify how to report vulnerabilities to them. More broadly, the OpenSSF Vulnerability Disclosures Working Group is working to help “mature and advocate well-managed vulnerability reporting and communication” for OSS. Most widely-used Linux distributions have a robust security response team, but the Alpine Linux distribution (widely used in container-based systems) did not. The Linux Foundation and Google funded various improvements to Alpine Linux, including a security response team.

We hope that the US will update its Vulnerabilities Equities Process (VEP) to work more cooperatively with commercial organizations, including OSS projects, to share more vulnerability information. Every vulnerability that the US fails to disclose is a vulnerability that can be found and exploited by attackers. We would welcome such discussions.

Critical Software

It’s especially important to focus on critical software — but what is critical software? EO 4(g) requires the executive branch to define “critical software,” and 4(h) requires the executive branch to “identify and make available to agencies a list of categories of software and software products… meeting the definition of critical software.”

The Linux Foundation and the Laboratory for Innovation Science at Harvard (LISH) developed the report Vulnerabilities in the Core: A Preliminary Report and Census II of Open Source Software, which analyzed the use of OSS to help identify critical software. The LF and LISH are in the process of updating that report. The CII identified many important projects and assisted them, including OpenSSL (after Heartbleed), OpenSSH, GnuPG, Frama-C, and the OWASP Zed Attack Proxy (ZAP). The OpenSSF Securing Critical Projects Working Group has been working to better identify critical OSS projects and to focus resources on those that need help. There is already a first-cut list of such projects, along with efforts to fund such aid.

Internet of Things (IoT)

Unfortunately, internet-of-things (IoT) devices often have notoriously bad security. It’s often been said that “the S in IoT stands for security.” 

EO 4(s) initiates a pilot program to “educate the public on the security capabilities of Internet-of-Things (IoT) devices and software development practices [based on existing consumer product labeling programs], and shall consider ways to incentivize manufacturers and developers to participate in these programs.” EO 4(t) states that such “IoT cybersecurity criteria” shall “reflect increasingly comprehensive levels of testing and assessment.”

The Linux Foundation develops and is home to many of the key components of IoT systems. These include:

  • The Linux kernel, used by many IoT devices. 
  • The Yocto Project, which creates custom Linux-based systems for IoT and embedded systems. Yocto supports fully reproducible builds. 
  • EdgeX Foundry, which is a flexible OSS framework that facilitates interoperability between devices and applications at the IoT edge, and has been downloaded millions of times. 
  • The Zephyr project, which provides a real-time operating system (RTOS) used in many resource-constrained IoT devices and is able to generate SBOMs automatically during builds. Zephyr is one of the few open source projects that is a CVE Numbering Authority.
  • The seL4 microkernel, which is the most assured operating system kernel in the world; it’s notable for its comprehensive formal verification.

Security Labeling

EO 4(u) focuses on identifying:

“secure software development practices or criteria for a consumer software labeling program [that reflects] a baseline level of secure practices, and if practicable, shall reflect increasingly comprehensive levels of testing and assessment that a product may have undergone [and] identify, modify, or develop a recommended label or, if practicable, a tiered software security rating system.”

The OpenSSF’s CII Best Practices badge project (noted earlier) specifically identifies best practices for OSS development, and is already tiered (passing, silver, and gold). Over 3,800 projects currently participate.

There are also a number of other projects that relate to measuring security and broader quality.

Conclusion

The Linux Foundation (LF) has long been working to help improve the security of open source software (OSS), which powers systems worldwide. We couldn’t do this without the many contributions of time, money, and other resources from numerous companies and individuals; we gratefully thank them all.  We are always delighted to work with anyone to improve the development and deployment of open source software, which is important to us all.

David A. Wheeler, Director of Open Source Supply Chain Security at the Linux Foundation


Every month there seems to be a new software vulnerability showing up on social media, which causes open source program offices and security teams to start querying their inventories to see how the FOSS components they use may impact their organizations. 

Frequently this information is not available in a consistent format within an organization for automatic querying and may result in a significant amount of email and manual effort. By exchanging software metadata in a standardized software bill of materials (SBOM) format between organizations, automation within an organization becomes simpler, accelerating the discovery process and uncovering risk so that mitigations can be considered quickly. 
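The kind of automation described above can be sketched very simply: given component name/version pairs extracted from per-product SBOMs and the set of affected versions from a security advisory, cross-referencing the two flags what needs review. All package names, versions, and product names below are hypothetical.

```python
# Minimal sketch of SBOM-driven vulnerability triage: cross-reference an
# inventory of (name, version) pairs extracted from SBOMs against the
# affected versions listed in a security advisory.
# All package, version, and product names are hypothetical.

def affected_components(inventory, advisory):
    """inventory maps (name, version) -> product using it;
    advisory is a set of affected (name, version) pairs."""
    return {comp: product for comp, product in inventory.items()
            if comp in advisory}

inventory = {
    ("libwidget", "2.3.1"): "payments-service",
    ("libwidget", "2.4.0"): "reporting-service",
    ("zlib", "1.2.11"): "payments-service",
}
advisory = {("libwidget", "2.3.1")}

for (name, version), product in affected_components(inventory, advisory).items():
    print(f"{product}: {name} {version} matches the advisory; review required")
```

With standardized SBOMs feeding the inventory, a check like this can run automatically for every new advisory instead of triggering rounds of email and manual spreadsheet work.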

In the last year, we’ve also seen standards like OpenChain (ISO/IEC 5230:2020) gain adoption in the supply chain. Customers have started asking for a bill of materials from their suppliers as part of negotiation and contract discussions to conform to the standard. OpenChain focuses on ensuring that there is sufficient information for license compliance and, as a result, expects metadata for the distributed components as well. A software bill of materials can be used to support the systematic review and approval of each component’s license terms, clarifying the obligations and restrictions that apply to the distribution of the supplied software and reducing risk. 

Kate Stewart, VP, Dependable Embedded Systems, The Linux Foundation, will host a complimentary mentorship webinar entitled Generating Software Bill Of Materials on Thursday, March 25 at 7:30 am PST. This session will work through the minimum elements included in a software bill of materials and detail the reasoning behind why those elements are included. To register, please click here

There are many ways this software metadata can be shared. The common SBOM document format options (SPDX, SWID, and CycloneDX) will be reviewed so that the participants can better understand what is available for those just starting. 

This mentorship session will work through some simple examples and then guide where to find the next level of details and further references. 

At the end of this session, participants will be on a secure footing and a path towards the automated generation of SBOMs as part of their build and release processes in the future. 

Jason Perlow, Director of Project Insights and Editorial Content at the Linux Foundation, had an opportunity to speak with Shuah Khan about her experiences as a woman in the technology industry. She discusses how mentorship can improve the overall diversity and makeup of open source projects, why software maintainers are important for the health of open source projects such as the Linux kernel, and how language inclusivity and codes of conduct can improve relationships and communication between software maintainers and individual contributors.

JP: So, Shuah, I know you wear many different hats at the Linux Foundation. What do you call yourself around here these days?

SK: &lt;laughs&gt; Well, I primarily call myself a Kernel Maintainer & Linux Fellow. In addition to that, I focus on two areas that are important to the continued health and sustainability of the open source projects in the Linux ecosystem. The first is bringing more women into the kernel community, and the second is leading the mentorship program efforts overall at the Linux Foundation. In that role, in addition to the Linux Kernel Mentorship, we are looking at how the Linux Foundation mentorship program is working overall and how it is scaling, and I make sure the LFX Mentorship platform scales and serves the needs of diverse mentees and mentors. 

The LF mentorship program includes several projects in the Linux kernel, LFN, Hyperledger, Open Mainframe, OpenHPC, and other technologies. The Linux Foundation’s Mentorship Programs are designed to help developers (many of whom are first-time open source contributors) gain the necessary skills to experiment, learn, and contribute effectively to open source communities. 

The mentorship program has been successful in its mission to train new developers and make these pools of talented, expert-trained prospective employees available to employers. Several graduated mentees have found jobs. New developers have improved the quality and security of various open source projects, including the Linux kernel. Several Linux kernel bugs were fixed, a new subsystem mentor was added, and a new driver maintainer is now part of the Linux kernel community. My sincere thanks to all our mentors for volunteering to share their expertise.

JP: How long have you been working on the Kernel?

SK: Since 2010 or 2011, when I got involved in the Android Mainlining project. My first patch removed the Android pmem driver.

JP: Wow! Is there any particular subsystem that you specialize in?

SK: I am a self-described generalist. I maintain the kernel self-test subsystem, the USB over IP driver, the usbip tool, and the cpupower tool. I contributed to the media subsystem, working on the Media Controller Device Allocator API to resolve shared device resource management problems across device drivers from different subsystems.

JP: Hey, I’ve actually used the USB over IP driver when I worked at Microsoft on Azure. And also, when I’ve used AWS and Google Compute. 

SK: It’s a small niche driver used in cloud computing. Docker and other containers use that driver heavily. That’s how they provide remote access to USB devices on a server, exporting devices to be imported by other systems for use.

JP: I initially used it for IoT kinds of stuff in the embedded systems space. Were you the original lead developer on it, or was it one of those things you fell into because nobody else was maintaining it?

SK: Well, twofold. I was looking at USB over IP because I like that technology. It just so happened the driver was brought from the staging tree into the mainline kernel, and I volunteered at the time to maintain it. Over the last few years, we discovered some security issues with it because it handles a lot of userspace data, so I had a lot of fun fixing all of those. &lt;laugh&gt;

JP: What drew you into the Linux operating system, and what drew you into the kernel development community in the first place?

SK: Well, I have been doing kernel development for a very long time. I worked on the LynxOS RTOS a while back, and then HP-UX when I was working at HP, after which I transitioned into doing open source development with the OpenHPI project, to support HP’s rack server hardware, and that allowed me to work much more closely with Linux on the back end. And at some point, I decided I wanted to work with the kernel and become part of the Linux kernel community. I started as an independent contributor.

JP: Maybe it just displays my own ignorance, but you are the first female, hardcore Linux kernel developer I have ever met. I mean, I had met female core OS developers before — such as when I was at Microsoft and IBM — but not for Linux. Why do you suppose we lack women and diversity in general when participating in open source and the technology industry overall?

SK: So I’ll answer this question from my perspective, from what I have seen and experienced over the years. You are right; you probably don’t come across that many hardcore women kernel developers. I’ve been working professionally in this industry since the early 1990s, and on every project I have been involved with, I am usually the only woman sitting at the table. Some of it, I think, is culture and society. There are some roles we are told are acceptable for women — I felt this even when I was thinking about going into engineering as a profession. Some of it has to do with where we are guided, as a natural path.

There’s a natural resistance to choosing certain professions that you have to overcome, first within yourself and then externally. This process is different for everybody based on their personality and their origin story. And once you go through the hurdle of getting your engineering degree and figuring out which industry you want to work in, there is a level of establishing credibility in those work environments that you have to endure and persevere through. Sometimes when I would walk into a room, I felt like people were looking at me and thinking, “why is she here?” You aren’t accepted right away, and you have to overcome that as well. You have to go in there and say, “I am here because I want to be here, and therefore, I belong here.” You have to have that mindset. Society sends you signals that “this profession is not for me” — and you have to be aware of that and resist it. I consider myself an engineer who happens to be a woman, as opposed to a woman engineer.

JP: Are you from India, originally?

SK: Yes.

JP: It’s funny; my wife really likes this Netflix show about matchmaking in India. Are you familiar with it?

SK: <laughs> Yes, I enjoyed the series, and also A Suitable Girl, a documentary film that follows three women as they navigate making decisions about their careers and family obligations.

JP: For many Americans, this is our first introduction to what home life is like for Indian people. But many of the women featured on this show are professionals, such as doctors, lawyers, and engineers. And they are very ambitious, but of course, the family tries to set them up in a marriage to find a husband for them that is compatible. As a result, you get to learn about the traditional values and roles they still want women to play there — while at the same time, many women are coming out of higher learning institutions in that country that are seeking technical careers. 

SK: India is a very fascinatingly complex place. But generally speaking, in a global sense, having an environment at home where your parents tell you that you may choose any profession you want to choose is very encouraging. I was extremely fortunate to have parents like that. They never said to me that there was a role or a mold that I needed to fit into. They have always told me, “do what you want to do.” Which is different; I don’t find that even here, in the US. Having that support system, beginning in the home to tell you, “you are open to whatever profession you want to choose,” is essential. That’s where a lot of the change has to come from. 

JP: Women in technical and STEM professions are becoming much more prominent in other countries, such as China, Japan, and Korea. For some reason, in the US, I tend to see more women enter the medical profession than hard technology — and it might be a level of effort and perceived reward thing. You can spend eight years becoming a medical doctor or eight years becoming a scientist or an engineer, and it can be equally difficult, but the compensation at the end may not be the same. It’s expensive to get an education, and it takes a long time and hard work, regardless of the professional discipline.

SK: I have also heard that women like to enter professions where they can make a difference in the world — a human touch, if you will. So that may translate to them choosing careers where they can make a larger impact on people — and they may view careers in technology as not having those same attributes. Maybe when we think about attracting women to technology fields, we have to promote the aspects of technology that make a difference. That may be changing now with projects such as LF Public Health (LFPH), which we kicked off last year. And with the LF AI & Data Foundation, we are also making a difference in people’s lives, such as detecting earthquakes or analyzing climate change. If we were to promote projects such as these, we might draw more women in.

JP: So clearly, one of the areas of technology where you can make a difference is in open source, as the LF is hosting some very high-concept and existential types of projects such as LF Energy, for example — I had no idea what was involved in it and what its goals were until I spoke to Shuli Goodman in-depth about it. With the mentorship program, I assume we need this to attract fresh talent — because as folks like us get older, retire, and exit the field, we need new people to replace them. So I assume mentorship, for the Linux Foundation, is an investment in our own technologies, correct?

SK: Correct. Bringing new developers into the fold is the primary purpose, of course — and at the same time, I view the LF taking on mentorship as providing a neutral, level playing field across the industry for all open source projects. Secondly, we offer a self-service platform, LFX Mentorship, where anyone can come in and start their project. So when the COVID-19 pandemic began, we expanded this program to help displaced people — students, et cetera — and less visible projects. Not all projects typically get as much funding or attention as a Kubernetes or the Linux kernel does, so these less visible projects are among the COVID mentorship program projects we are funding. I am particularly proud of supporting a climate change-related project, Using Machine Learning to Predict Deforestation.

The self-service approach allows us to fund and add new developers to projects where they are needed. The LF mentorships are remote work opportunities that are accessible to developers around the globe. We see people sign up for mentorship projects from places we haven’t seen before, such as Africa, and so on, thus creating a level playing field. 

The other thing we are trying to increase focus on is how to get maintainers. Getting new developers is a starting point, but how do we get them to continue working on the projects they are mentored on? As you said, someday, you and I and others working on these things are going to retire, maybe five or ten years from now. This is a harder problem to solve than training and adding new developers to the project itself.

JP: And that is core to our software supply chain security mission. It’s one thing to have this new, flashy project, and then all these developers say, “oh wow, this is cool, I want to join that,” but then, you have to have a certain number of people maintaining it for it to have long-term viability. As we learned in our FOSS study with Harvard, there are components in the Linux operating system that are like this, perhaps even modules within the kernel itself, where you might have only one or two people actively maintaining them for many years. And what happens if that person dies or can no longer work? What happens to that code? If no one else is familiar with that code, it might become abandoned. That’s a serious problem in open source right now, isn’t it?

SK: Right. We have seen that with SSH and other security-critical areas. What if you don’t have the bandwidth to fix it? Or the money to fix it? I ended up volunteering to maintain a tool for a similar reason when the maintainer could no longer contribute regularly. It is true; we have many drivers where maintainer bandwidth is an issue in the kernel. So the question is, how do we grow that talent pool?

JP: Do we need a job board or something? We need X number of maintainers. So should we say, “Hey, we know you want to join the kernel project as a contributor, and we have other people working on this thing, but we really need your help working on something else, and if you do a good job, we know tons of companies willing to hire developers just like you?” 

SK: With the kernel, we are talking about organic growth; it is just like any other open source project. It’s not a traditional hire-and-placement scenario. Organically, people have to have credibility, and they have to acquire it through experience and relationships with people on those projects. We just talked about this at the previous Linux Plumbers Conference: we do have areas where we really need maintainers, and the MAINTAINERS file does show areas where help is needed.

To answer your question, it’s not one of those things where we can seek people to fill that role, like LinkedIn or one of the other job sites. It has to be an organic fulfillment of that role, so the mentorship program is essential in creating those relationships. It is the double-edged sword of open source; it is both the strength and weakness. People need to have an interest in becoming a maintainer and also a commitment to being one, long term.

JP: So, what do you see as the future of your mentorship and diversity efforts at the Linux Foundation? What are you particularly excited about that is forthcoming that you are working on?

SK: I view Linux Foundation mentoring as a three-pronged approach that provides unstructured webinars, training courses, and structured mentoring programs. All of these efforts combine to advance a diverse, healthy, and vibrant open source community. So over the past several months, we have been morphing our speed-mentorship-style format into an expanded webinar format — the LF Live Mentorship series. This will have the function of growing our next level of expertise. As a complement to our traditional mentorship programs, these are webinars and courses, an hour and a half long, held a few times a month, that tackle specific technical areas in software development. So it might cover how to write great commit logs so that your patches are accepted, for example, or how to find bugs in C code. Commit logs are one of those things that are important to code maintenance, so promoting good documentation is a beneficial thing. Webinars provide a way for experts short on time to share their knowledge with a few hours of commitment, and they offer a self-paced learning opportunity to new developers.

Additionally, I have started the Linux Kernel Mentorship forum for developers and their mentors to connect and interact with others participating in the Linux Kernel Mentorship program, and for graduated mentees to mentor new developers. We kicked off the Linux Kernel Mentorship Spring 2021 program and are planning for Summer and Fall.

A big challenge is that we are short on mentors to be able to scale the structured program. Solving this requires help from LF member companies and others in encouraging their employees to mentor; as they say, “it takes a village.”

JP: So this webinar series and the expanded mentorship program will help developers cultivate both hard and soft skills, then.

SK: Correct. The thing about doing webinars, if we are talking about this from a diversity perspective, is that some developers might not have time for a full-length mentorship, typically a three-month or six-month commitment. Webinars might help them expand their resources for self-study. When we ask developers for feedback about what they need to learn new skill sets, we hear that they don’t have the resources or the time to do self-study and learn to become open source developers and software maintainers. This webinar series covers general open source software topics such as the Linux kernel and legal issues. It could also cover topics specific to other LF projects such as CNCF, Hyperledger, LF Networking, etc.

JP: Anything else we should know about the mentorship program in 2021?

SK: In my view, attracting diversity and new people is two-fold. One of the things we are working on is inclusive language. Now, we’re not talking about curbing harsh words, although that is a component of what we are looking at. The English you and I use in North America isn’t the same English used elsewhere. As an example, when we use North American-centric terms in our email communications, such as when a maintainer is communicating on a list with people from South Korea, something like “where the rubber meets the road” may not make sense to them at all. So we have to be aware of that.

JP: I know that you are serving on the Linux kernel Code of Conduct Committee and actively developing the handbook. When I first joined the Linux Foundation, I learned what the Community Managers do and our governance model. I didn’t realize that we even needed to have codes of conduct for open source projects. I have been covering open source for 25 years, but I come out of the corporate world, such as IBM and Microsoft. Codes of Conduct are typically things that the Human Resources officer shows you during your initial onboarding, as part of reviewing your employee manual. You are expected to follow those rules as a condition of employment. 

So why do we need Codes of Conduct in an open source project? Is it because these are people who are coming from all sorts of different backgrounds, companies, and ways of life, and may not have interacted in this form of organized and distributed project before? Or is it about personalities, people interacting with each other over long distance, and email, which creates situations that may arise due to that separation?

SK: Yes, I come out of the corporate world as well, and of course, we had to practice those codes of conduct in that setting. But conduct situations arise that you have to deal with in the corporate world. There are always interpersonal scenarios that can be difficult or challenging to work with — the corporate world isn’t better than the open source world in that respect. It is just that all of that happens behind a closed setting.

But there is no accountability in the open source world because everyone participates out of their own free will. So on a small, traditional closed project, inside the corporate world, where you might have 20 people involved, you might get one or two people that could be difficult to work with. The same thing happens and is multiplied many times in the open source community, where you have hundreds of thousands of developers working across many different open source projects. 

The biggest problem in these types of projects, when situations such as this arise, is that they play out in public forums. In the corporate world, this can be addressed in private. But on a public mailing list, if you are being put down or talked down to, it can be extremely humiliating.

These interactions are not always extreme cases; they could be as simple as a maintainer or a lead developer providing negative feedback — so how do you give it? It has to be done constructively. And that is true for all of us.

JP: Anything else?

SK: In addition to bringing our learnings and applying them to the kernel project, I am also doing this on the ELISA project, where I chair the Technical Steering Committee and bridge communication between experts from the kernel and safety communities, to make sure we can use the kernel in the best ways in safety-critical applications in the automotive and medical industries, and so on. Many lessons can be learned in terms of connecting the dots and defining clearly what is essential to make Linux run effectively in these environments, in terms of dependability. How can we think more proactively instead of being engaged in fire-fighting over security or kernel bugs? As a result, I am also working on any kernel changes needed to support these safety-critical usage scenarios.

JP: Before we go, what are you passionate about besides all this software stuff? If you have any free time left, what else do you enjoy doing?

SK: I read a lot. COVID quarantine has given me plenty of opportunities to read. I like to go hiking, snowshoeing, and other outdoor activities. Living in Colorado gives me ample opportunities to be in nature. I also like backpacking — while I wasn’t able to do it last year because of COVID — I like to take backpacking trips with my son. I also love to go to conferences and travel, so I am looking forward to doing that again as soon as we are able.

Talking about backpacking reminded me of the two-day, 22-mile backpacking trip during the summer of 2019 with my son. You can see me in the picture above at the end of the road, carrying a bearbox, sleeping bag, and hammock. It was worth injuring my foot and hurting in places I didn’t even know I had.

JP: Awesome. I enjoyed talking to you today. So happy I finally got to meet you virtually.

New survey reveals why contributors work on open source projects and how much time they spend on security

SAN FRANCISCO, Calif., December 8, 2020 – The Linux Foundation’s Open Source Security Foundation (OpenSSF) and the Laboratory for Innovation Science at Harvard (LISH) today announced the release of a new report, “Report on the 2020 FOSS Contributor Survey,” which details the findings of a contributor survey administered by the organizations and focused on how contributors engage with open source software. The research is part of an ongoing effort to study and identify ways to improve the security and sustainability of open source software.

The FOSS (Free and Open Source Software) contributor survey and report follow the Census II analysis released earlier this year. This combined pair of works represents important steps towards understanding and addressing structural and security complexities in the modern-day supply chain where open source is pervasive but not always understood. Census II identified the most commonly used free and open source software (FOSS) components in production applications, while the FOSS Contributor Survey and report shares findings directly from nearly 1,200 respondents working on them and other FOSS software.

“The modern economy – both digital and physical – is increasingly reliant on free and open source software,” said Frank Nagle, assistant professor at Harvard Business School. “Understanding FOSS contributor motivations and behavior is a key piece of ensuring the future security and sustainability of this critical infrastructure.”

Key findings from the FOSS Contributor Survey include:

  • The top three motivations for contributors are non-monetary. While the overwhelming majority of respondents (74.87 percent) are already employed full-time and more than half (51.65 percent) are specifically paid to develop FOSS, motivations to contribute focused on adding a needed feature or fix, enjoyment of learning, and fulfilling a need for creative or enjoyable work.
  • There is a clear need to dedicate more effort to the security of FOSS, but the burden should not fall solely on contributors. Respondents report spending, on average, just 2.27 percent of their total contribution time on security and express little desire to increase that time. The report authors suggest alternative methods of incentivizing security-related efforts.
  • As more contributors are paid by their employer to contribute, stakeholders need to balance corporate and project interests. The survey revealed that nearly half (48.7 percent) of respondents are paid by their employer to contribute to FOSS, suggesting strong support for the stability and sustainability of open source projects but drawing into question what happens if corporate interest in a project diminishes or ceases.
  • Companies should continue the positive trend of corporate support for employees’ contribution to FOSS. More than 45.45 percent of respondents stated they are free to contribute to FOSS without asking permission, compared to 35.84 percent ten years ago. However, 17.48 percent of respondents say their companies have unclear policies on whether they can contribute, and 5.59 percent were unaware of what policies – if any – their employer had.

“Understanding open source contributor behaviors, especially as they relate to security, can help us better apply resources and attention to the world’s most-used software,” said David A. Wheeler, director of open source supply chain security at the Linux Foundation. “It is clear from the 2020 findings that we need to take steps to improve security without overburdening contributors and the findings suggest several ways to do that.”

For an in-depth analysis of these findings, suggested actions and more, please access the full report here: https://www.linuxfoundation.org/blog/2020/12/download-the-report-on-the-2020-foss-contributor-survey

The report authors are Frank Nagle, Harvard Business School; David A. Wheeler, the Linux Foundation; Hila Lifshitz-Assaf, New York University; and Haylee Ham and Jennifer L. Hoffman, Laboratory for Innovation Science at Harvard. They will host a webinar tomorrow, December 9, at 10 am ET. Please register here: https://events.linuxfoundation.org/webinar-why-wont-developers-write-secure-os-software/

The FOSS Contributor Report & Survey is expected to take place again in 2021. For contributors who would like to participate, please sign up here: https://hbs.qualtrics.com/jfe/form/SV_erjkjzXJ2Eo0TDD

About the OpenSSF

Hosted by the Linux Foundation, the OpenSSF is a cross-industry organization that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. It combines the Linux Foundation’s Core Infrastructure Initiative (CII), founded in response to the 2014 Heartbleed bug, and the Open Source Security Coalition, founded by the GitHub Security Lab, to build a community to support open source security for decades to come. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all.

About LISH

As a university-wide initiative, the Laboratory for Innovation Science at Harvard (LISH) is spurring the development of a science of innovation through a systematic program of solving real-world innovation challenges while simultaneously conducting rigorous scientific research. To date, LISH has worked with key partners in aerospace and healthcare, such as NASA, the Harvard Medical School, the Broad Institute, and the Scripps Research Institute to solve complex problems and develop impactful solutions. More information can be found at https://lish.harvard.edu/

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Media Contact
Jennifer Cloer
Story Changes Culture
503-867-2304
jennifer@storychangesculture.com

We at The Linux Foundation (LF) work to develop secure software in our foundations and projects, and we also work to secure the infrastructure we use. But we’re all human, and mistakes can happen.

So if you discover a security vulnerability in something we do, please tell us!

If you find a security vulnerability in the software developed by one of our foundations or projects, please report the vulnerability directly to that foundation or project. For example, Linux kernel security vulnerabilities should be reported to <security@kernel.org> as described in security bugs. If the foundation/project doesn’t state how to report vulnerabilities, please ask them to do so. In many cases, one way to report vulnerabilities is to send an email to <security@DOMAIN>.

If you find a security vulnerability in the Linux Foundation’s infrastructure as a whole, please report it to <security@linuxfoundation.org>, as noted on our contact page.

For example, security researcher Hanno Böck recently alerted us that some of the retired linuxfoundation.org service subdomains were left delegated to some cloud services, making them potentially vulnerable to a subdomain takeover. Once we were alerted to that, the LF IT Ops Team quickly worked to eliminate the problem and will also be working on a way to monitor and alert about such problems in the future. We thank Hanno for alerting us!

We’re also working to make open source software (OSS) more secure in general. The Open Source Security Foundation (OpenSSF) is a broad initiative to secure the OSS that we all depend on. Please check out the OpenSSF if you’re interested in learning more.

David A. Wheeler

Director, Open Source Supply Chain Security, The Linux Foundation

Free training opportunities, new member investments, consolidation with Core Infrastructure Initiative and new opportunities for anyone to contribute accelerate work on open source security

 

SAN FRANCISCO, Calif., Oct 29, 2020 – OpenSSF, a cross-industry collaboration to secure the open source ecosystem, today announced free training for developing secure software, a new OpenSSF professional certificate program called Secure Software Development Fundamentals and additional program and technical initiatives. It is also announcing new contributors to the Foundation and newly elected advisory council and governing board members.

Open source software has become pervasive across industries, and ensuring its security is of primary importance. The OpenSSF, hosted at the Linux Foundation, provides a structured forum for a collaborative, cross-industry effort. The foundation is committed to working both upstream and with existing communities to advance open source security for all.

Open Source Security Training and Education

OpenSSF has developed a set of three free courses on how to develop secure software on the non-profit edX learning platform. These courses are intended for software developers (including DevOps professionals, software engineers, and web application developers) and others interested in learning how to develop secure software. The courses are specifically designed to teach professionals how to develop secure software while reducing damage and increasing the speed of the response when a vulnerability is found.

The OpenSSF training program includes a Professional Certificate program, Secure Software Development Fundamentals, which can allow individuals to demonstrate they’ve mastered this material. Public enrollment for the courses and certificate is open now. Course content and the Professional Certificate program tests will become available on November 5.

“The OpenSSF has already demonstrated incredible momentum which underscores the increasing priorities placed on open source security,” said Mike Dolan, Senior VP and GM of Projects at The Linux Foundation. “We’re excited to offer the Secure Software Development Fundamentals professional certificate program to support an informed talent pool about open source security best practices.”

New Member Investments

Seventeen new contributors have joined as members of OpenSSF since earlier this year: Arduino; AuriStor; Canonical; Debricked; Facebook; Huawei Technologies; iExec Blockchain Tech; Laboratory for Innovation Science at Harvard (LISH); Open Source Technology Improvement Fund; Polyverse Corporation; Renesas; Samsung; Spectral; SUSE; Tencent; Uber; and WhiteSource. For more information on founding and new members, please visit: https://openssf.org/about/members/

Core Infrastructure Initiative Projects Integrate with OpenSSF

The OpenSSF is also bringing together existing projects from the Core Infrastructure Initiative (CII), including the CII Census (a quantitative analysis to identify critical OSS projects) and CII FOSS Contributor Survey (a quantitative survey of FOSS developers). Both will become part of the OpenSSF Securing Critical Projects working group. These two efforts will continue to be implemented by the Laboratory for Innovation Science at Harvard (LISH). The CII Best Practices badge project is also being transitioned into the OpenSSF.

OpenSSF Leadership

The OpenSSF has elected Kay Williams from Microsoft as Governing Board Chair. Newly elected Governing Board members include:

  • Jeffrey Eric Altman, AuriStor, Inc.;
  • Lech Sandecki, Canonical;
  • Anand Pashupathy, Intel Corporation; and
  • Dan Lorenc from Google as Technical Advisory Committee (TAC) representative.

An election for a Security Community Individual Representative to the Governing Board is currently underway and results will be announced by OpenSSF in November. Ryan Haning from Microsoft has been elected Chair of the Technical Advisory Council (TAC).

There will be an OpenSSF Town Hall on Monday, November 9, 2020, 10:00 a.m. – 12:00 p.m. PT, to share updates and celebrate accomplishments during the first three months of the project. Attendees will hear from our Governing Board, Technical Advisory Council, and Working Group leads, have an opportunity for Q&A, and learn more about how to get involved in the project. Register here.

Membership is not required to participate in the OpenSSF. For more information and to learn how to get involved, including information about participating in working groups and advisory forums, please visit https://openssf.org/getinvolved.

 

New Member Comments

Arduino

“As an open-source company, Arduino always considered security as a top priority for us and for our community,” said Massimo Banzi, Arduino co-founder. “We are excited to join the Open Source Security Foundation and we look forward to collaborating with other members to improve the security of any open-source ecosystem.”

AuriStor

“One of the strengths of the open protocols and open source software ecosystems is the extensive reuse of code and APIs, which expands the spread of security vulnerabilities across software product boundaries. Tracking the impacted downstream software projects is a time-consuming and expensive process, often reaching into the tens of thousands of U.S. dollars. In Pixar’s Ratatouille, Auguste Gusteau was famous for his belief that “anyone can cook”. The same is true for software: “anyone can code”, but the vast majority of software developers have neither the resources nor the incentives to prioritize security-first development practices, nor to trace and notify impacted downstream projects. AuriStor joins the OpenSSF to voice the importance of providing resources to the independent developers responsible for so many critical software components.” – Jeffrey Altman, Founder and CEO of AuriStor.

Canonical Group

“It is our collective responsibility to constantly improve the security of the open source ecosystem, and we’re excited to join the Open Source Security Foundation,” said Lech Sandecki, Security Product Manager at Canonical. “As publishers of Ubuntu, the most popular Linux distribution, we deliver up to 10 years of security maintenance to millions of Ubuntu users worldwide. By sharing our knowledge and experience with the OpenSSF community, together, we can make the whole open source ecosystem more secure.”

Debricked

“The essence of open source is collaboration, and we strongly believe that the OSSF initiative will improve open source security at large. With all of the members bringing something different to the table, we can create a diverse community where knowledge, experience, and best practices can help shape this space for the better. Debricked has a strong background in research and extensive insight into tooling; knowledge which we hope will be a valuable contribution to the working groups,” said Daniel Wisenhoff, CEO and co-founder of Debricked.

Huawei

“With open source software becoming a crucial foundation in today’s world, how to ensure its security is the responsibility of every stakeholder. We believe the establishment of the Open Source Security Foundation will drive common understanding and best practices on the security of the open source supply chain and will benefit the whole industry,” said Peixin Hou, Chief Expert on Open System and Software, Huawei. “We look forward to making contributions to this collaboration and working with everybody in an open manner. This reaffirms Huawei’s long-standing commitment to make a better, connected and more secure and intelligent world.”

Laboratory for Innovation Science at Harvard

“We are excited to bring the Core Infrastructure Initiative’s research on the prevalence and current practices of open source into this broader network of industry and foundation partners,” said Frank Nagle, Assistant Professor at Harvard Business School and Co-Director of the Core Infrastructure Initiative at the Laboratory for Innovation Science at Harvard. “Only through coordinated, strategically targeted efforts – among competitors and collaborators alike – can we effectively address the challenges facing open source today.”

Open Source Technology Improvement Fund

“OSTIF is thrilled to collaborate with industry leaders and apply its methodology and broad expertise to securing open-source technology on a larger scale. The level of engagement across organizations and industries is inspiring, and we look forward to participating via the Securing Critical Projects Working Group,” said Chief Operating Officer Amir Montazery. “The Linux Foundation and OpenSSF have been instrumental in aligning efforts toward improving open-source software, and OSTIF is grateful to be involved in the process.”

Polyverse

“Polyverse is honored to be a member of OpenSSF. The popularity of open source as the ‘go-to’ option for mission critical data, systems and solutions has brought with it increased cyberattacks. Bringing together organizations to work on this problem collaboratively is exactly what open source is all about and we’re eager to accelerate progress in this area,” said Archis Gore, CTO, Polyverse.

Renesas

“Renesas provides embedded processors for various application segments, including automotive, industrial automation, and IoT. Renesas is committed to ensuring the integrity and confidentiality of systems and data while mitigating cybersecurity risks. To enable our customers to develop robust systems, it is essential to provide root-of-trust of the open source software that runs on our products,” said Shinichi Yoshioka, Senior Vice President and CTO of Renesas. “We are excited to join the Open Source Security Foundation and to collaborate with industry-leading security professionals to advance more secure computing environments for the society.”

Samsung

“Samsung is trying to provide best-in-class security with our technologies and activities. Not only are security risks reviewed and removed in all development phases of our products, but they are also monitored continuously and patched quickly,” said Yong Ho Hwang, Corporate Vice President and Head of Samsung Research Security Team, Samsung Electronics. “Open source is one of the best approaches to drive cross-industry effort in responding quickly and transparently to security threats. Samsung will continue to be a leader in providing high-level security by actively contributing and collaborating with the Open Source Security Foundation.”

Spectral

“Spectral’s mission is to enable developers to build and ship software at scale without worry. We feel that the OpenSSF initiative is the perfect venue to discuss and improve open source security and is a natural platform that empowers developers. The Spectral team is happy to participate in the working groups and share its expertise in security analysis and research of technology stacks at scale, developer experience (DX) and tooling, open source codebase analysis and trends, and developer behavioral analysis, toward the ultimate goal of improving open source security and developer happiness,” said Dotan Nahum, CEO and co-founder of Spectral.

SUSE

“At SUSE, we power innovation in data centers, cars, phones, satellites and other devices. It has never been more critical to deliver trustworthy security from the core all the way to the edge,” said Markus Noga, VP Solutions Technology at SUSE. “We are committed to OpenSSF as the forum for the open source community to collaborate on vulnerability disclosures, security tooling, and to create best practices to keep all users of open source solutions safe.”

Tencent

“Tencent believes in the power of open source technology and collaboration to deliver incredible solutions to today’s challenges. As open source has become the de facto way to build software, its security has become a critical component for building and maintaining the software and infrastructure,” said Mark Shan, Chair of Tencent Open Source Alliance and Board Chair of the TARS Foundation. “By bringing different organizations together, OpenSSF provides a platform where developers can collaboratively build solutions needed to protect the open source security supply chain. Tencent is very excited to join this collaborative effort as an OpenSSF member and contribute to its open source security initiatives and best practices.”

WhiteSource

“Software development teams simply cannot develop software at today’s pace without using open source. Our goal has always been to empower teams to harness the power of open source easily and securely. We’re honored to get the opportunity to join the Open Source Security Foundation, where we can join forces with others to contribute, together, toward open source security best practices and initiatives,” said David Habusha, VP Product, WhiteSource.

About the Open Source Security Foundation (OpenSSF)

Hosted by the Linux Foundation, the OpenSSF (launched in August 2020) is a cross-industry organization that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. It combines the Linux Foundation’s Core Infrastructure Initiative (CII), founded in response to the 2014 Heartbleed bug, and the Open Source Security Coalition, founded by the GitHub Security Lab, to build a community to support open source security for decades to come. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact
Jennifer Cloer
Story Changes Culture
503-867-2304
jennifer@storychangesculture.com

Introducing the Open Governance Network Model

Background

The Linux Foundation has long served as the home for many of the world’s most important open source software projects. We act as the vendor-neutral steward of the collaborative processes that developers engage in to create high quality and trustworthy code. We also work to build the developer and commercial communities around that code, whose members sponsor each project. We’ve learned that finding ways for all sorts of companies to benefit from using and contributing back to open source software development is key to a project’s sustainability.

Over the last few years, we have also added a series of projects focused on lightweight open standards efforts — recognizing the critical complementary role that standards play in building the open technology landscape. Linux would not have been relevant if not for POSIX, nor would the Apache HTTPD server have mattered were it not for the HTTP specification. And just as with our open source software projects, commercial participants’ involvement has been critical to driving adoption and sustainability.

On the horizon, we envision another category of collaboration, one which does not have a well-established term to define it, but which we are today calling “Open Governance Networks.” Before describing it, let’s talk about an example.

Consider ICANN, the organization that arose from demands to move the global domain name system (DNS) out of single-vendor control by Network Solutions. With ICANN, DNS became something more vendor-neutral, international, and accountable to the Internet community. It evolved to develop and manage the “root” of the domain name system, independent from any company or nation. ICANN’s control over the DNS comes primarily through its establishment of an operating agreement among domain name registrars that sets rules for registrations, guarantees that domain names are portable, and provides a uniform dispute resolution policy (the UDRP) for times when a domain name conflicts with an established trademark or causes other issues.

ICANN is not a standards body; they happily use the standards for DNS developed at the IETF. They also do not create software other than software incidental to their mission; they may fund some DNS software development, but that’s not their core. ICANN is not where all DNS requests go to get resolved to IP addresses, nor even where everyone goes to register their domain name — that is all pushed to registrars and distributed name servers. In this way, ICANN is not fully decentralized but practices something you might call “minimum viable centralization.” Its management of the DNS has not been without critics, but by pushing as much of the hard work to the edge and focusing on being a neutral core, they’ve helped the DNS and the Internet achieve a degree of consistency, operational success, and trust that would have been hard to imagine building any other way.

There are similar organizations that interface with open standards and software but perform governance functions. A prime example is the CA/Browser Forum, which governs the policies for the root certificates underpinning the SSL/TLS web security infrastructure.

Do we need such organizations? Can’t we go completely decentralized? While some cryptocurrency networks claim not to need formal human governance, it’s clear that there are governance roles performed by individuals and organizations within those communities. Quite a bit of governance can be automated via smart contracts, but repairing damage from exploits of those contracts, promoting the platform’s adoption to new users, onboarding new organizations, and even coordinating hard-fork upgrades still require humans in the mix. This is especially important in environments where competitors must participate in the network to succeed, but do not trust any one competitor to make the decisions.

Network governance is not a solved problem

Network governance is not just an issue for the technical layers. As one moves up the stack into more domain-specific applications, it turns out that there are network governance challenges up here as well, which look very familiar.

Consider a typical distributed application pattern: supply chain traceability, where participants in the network can view, on a distributed database or ledger, the history of the movement of an object from source to destination, and update the network when they receive or send an object. You might be a raw materials supplier, a manufacturer, a distributor, or a retailer. In any case, you have a vested interest in being able to trust this distributed ledger as an accurate and faithful representation of the truth. You also want assurance that the version you see is the same ledger everyone else sees, that you can write to it fairly, and that you understand what happens if things go wrong. Achieving all of these desired characteristics requires network governance!

You may be thinking that none of this is strictly needed if only everyone agreed to use one organization’s centralized database to serve as the system of record. Perhaps that is a company like eBay, or Amazon, Airbnb, or Uber. Or perhaps, a non-profit charity or government agency can run this database for us. There are some great examples of shared databases managed by non-profits, such as Wikipedia, run by the Wikimedia Foundation. This scenario might work for a distributed crowdsourced encyclopedia, but would it work for a supply chain? 

This participation model requires everyone engaging in the application ecosystem to trust that singular institution to perform a very critical role, and not be hacked, corrupted, or otherwise use that position of power to unfair ends. There is also the matter of trusting that the entity will not become insolvent or otherwise unable to meet the community’s needs. How many Wikipedia entries have been hijacked or subject to “edit wars” that go on forever? Could a company trust such an approach for its supply chain? Probably not.

Over the last ten years, we’ve seen the development of new tools that allow us to build better-distributed data networks without that critical need for a centralized database or institution holding all the keys and trust. Most of these new tools use distributed ledger technology (“DLT”, or “blockchain”) to build a single source of truth across a network of cooperating peers, and embed programmatic functionality as “smart contracts” or “chaincode” across the network. 

The Linux Foundation has been very active in DLT, first with the launch of Hyperledger in December of 2015. The Trust Over IP Foundation, launched earlier this year, focuses on the application of self-sovereign identity, in many cases using a DLT as the underlying utility network.

As these efforts have focused on software, they left the development, deployment, and management of these DLT networks to others. Hundreds of such networks built on top of Hyperledger’s family of different protocol frameworks have launched, some of which (like the Food Trust Network) have grown to hundreds of participating organizations. Many of these networks were never intended to extend beyond an initial set of stakeholders, and they are seeing very successful outcomes. 

However, many of these networks need a critical mass of industry participants and have had difficulty reaching it. A frequently cited reason is the lack of clear or vendor-neutral governance of the network. No business wants to place its data, or the data it depends upon, in the hands of a competitor; and many are wary even of non-competitors if the arrangement locks down competition or creates a dependency on a single market participant. For example, what if that company doesn’t do well and decides to exit this business segment? At the same time, most applications need a large percentage of any given market to make the network worthwhile, so addressing these business, risk, and political objections to the network structure is just as important as ensuring the software works as advertised.

In many ways, this resembles the evolution of successful open source projects, where developers working at a particular company realize that just posting their source code to a public repository isn’t sufficient. Nor even is putting their development processes online and saying “patches welcome.” 

To take an open source project to the point where it becomes the reference solution for the problem being solved and can be trusted for mission-critical purposes, you need to show how its governance and sustainability are not dependent upon a single vendor, corporate largess, or charity. That usually means a project looks for a neutral home at a place like the Linux Foundation, to provide not just that neutrality, but also competent stewarding of the community and commercial ecosystem.

Announcing LF Open Governance Networks

To address this need, today, we are announcing that the Linux Foundation is adding “Open Governance Networks” to the types of projects we host. We have several such projects in development that will be announced before the end of the year. These projects will operate very similarly to the Linux Foundation’s open source software projects, but with some additional key functions. Their core activities will include:

  • Hosting a technical steering committee to specify the software and standards used to build the network, to monitor the network’s health, and to coordinate upgrades, configurations, and critical bug fixes
  • Hosting a policy and legal committee to specify a network operating agreement that organizations must agree to before connecting their nodes to the network
  • Running an identity system for the network, so participants can trust that other participants are who they say they are, monitoring the network for health, and taking corrective action if required
  • Building out a set of vendors who can be hired to deploy peers-as-a-service on behalf of members, in addition to allowing members’ technical staff to run their own peers if preferred
  • Convening a Governing Board composed of sponsoring members who oversee the budget and priorities
  • Advocating for the network’s adoption by the relevant industry, including engaging relevant regulators and secondary users who don’t run their own peers
  • Potentially managing an open “app store” approach to offering vetted, reusable, deployable smart contracts or add-on apps for network users

These projects will be sustained through membership dues set by the Governing Board on each project, which will be kept to what’s needed for self-sufficiency. Some may also choose to establish transaction fees to compensate operators of peers if usage patterns suggest that would be beneficial. Projects will have complete autonomy regarding technical and software choices – there are no requirements to use other Linux Foundation technologies. 

To ensure that these efforts live up to the word “open” and the Linux Foundation’s pedigree, the vast majority of technical activity on these projects, and the development of all code and configurations required to run the software that is core to the network, will be done publicly. The source code and documentation will be published under suitable open source licenses, allowing for public engagement in the development process, leading to better long-term trust among participants, code quality, and successful outcomes. Hopefully, this will also result in less “bike-shedding” and thrash, better visibility into progress and activity, and an exit strategy should the cooperation efforts hit a snag.

Depending on the industry it serves, the ledger itself might or might not be public. It may contain information authorized for sharing only between the parties involved on the network, or it may need to account for GDPR or other regulatory compliance. However, we will certainly encourage long-term approaches that do not treat the ledger data as sensitive. Also, an organization must be a member of the network to run peers, to see the ledger, and especially to write to it or participate in consensus.

Across these Open Governance Network projects, there will be a shared operational, project management, marketing, and other logistical support provided by Linux Foundation personnel who will be well-versed in the platform issues and the unique legal and operational issues that arise, no matter which specific technology is chosen.

These networks will create substantial commercial opportunity:

  • For software companies building DLT-based applications, this will help you focus on the truly value-delivering apps on top of such a shared network, rather than the mechanics of forming these networks.
  • For systems integrators, DLT integration with back-office databases and ERP is expected to grow to be billions of dollars in annual activity.
  • For end-user organizations, the benefits of automating thankless, non-differentiating, perhaps even regulatorily-required functions could result in huge cost savings and resource optimization.

For those organizations acting as governing bodies on such networks today, we can help you evolve those projects to reach an even wider audience while taking off your hands the low margin, often politically challenging, grunt work of managing such networks.

And for those developers concerned before about whether such “private” permissioned networks would lead to dead cul-de-sacs of software and wasted effort or lost opportunity, having the Linux Foundation’s bedrock of open source principles and collaboration techniques behind the development of these networks should help ensure success.

We also recognize that not all networks should be under this model. We expect a diversity of approaches that will be sustainable long term, and we encourage these networks to find a model that works for them. Let’s talk to see if this model would be appropriate for your network.

LF Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit, nonstock corporation that will maximize utility rather than profit. Through agreements with the Linux Foundation, LF Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation.

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at governancenetworks@linuxfoundation.org.

Thanks!

Brian

Healthcare industry proof of concept successfully uses SPDX as a software bill of materials format for medical devices

Overview

Software Package Data Exchange (SPDX) is an open standard for communicating software bill of materials (SBOM) information that supports accurate identification of software components, explicit mapping of relationships between components, and the association of security and licensing information with each component. The SPDX format has recently been submitted by the Linux Foundation and the Joint Development Foundation to ISO/IEC JTC 1 for approval as an international standard.

A group of eight healthcare industry organizations, composed of five medical device manufacturers and three healthcare delivery organizations (hospital systems), recently participated in the first-ever proof of concept (POC) of the SPDX standard for healthcare use. This blog post summarizes the results of this initial trial.

Why do we care about SBOMs and the medical device industry?

A Software Bill of Materials (SBOM) is a nested inventory or a list of ingredients that make up the software components used in creating a device or system. This is especially critical in the medical device industry and within healthcare delivery organizations to adequately understand the operational and cyber risks of those software components from their originating supply chain.

Some cyber risks come from using components with known vulnerabilities. This is a widespread problem in the software industry, reflected in the Top 10 Web Application Security Risks from the Open Web Application Security Project (OWASP). Known vulnerabilities are especially concerning in medical devices, since their exploitation could lead to loss of life or maiming. One-time reviews don’t help, since these vulnerabilities are typically found after the component has been developed and incorporated. Instead, what is needed is visibility into the components of a medical device, similar to how food ingredients are made visible.

A measured path towards using SBOMs in the medical device industry

In June 2018, the National Telecommunications and Information Administration (NTIA) engaged stakeholders across multiple industries to discuss software transparency and to participate in a limited proof of concept (POC) to determine if SBOMs can be successfully produced by medical device manufacturers and consumed by healthcare delivery organizations. That initial POC was successfully concluded in the early fall of 2019. 

Despite the limited scope, the NTIA POC results demonstrated that industry-agnostic standard formats can be leveraged by the healthcare vertical and that industry-specific formats are unnecessary. 

Next, the participants in the NTIA POC explored whether a standardized SBOM format could be used for sharing information between medical device manufacturers and healthcare delivery organizations. For this next phase, the NTIA stakeholders engaged the Linux Foundation’s SPDX community to work with the NTIA Healthcare working group. The goal was to demonstrate through a proof of concept whether the open source SPDX SBOM format would be suitable for healthcare and medical device industry uses. The first phase of that trial was conducted in early 2020.

Objectives of the 2020 POC

The stated goal of this 2020 proof of concept (POC) was to prove the viability, for the medical device and healthcare industry, of the framing document created by the NTIA SBOM Working Group (to which the Linux Foundation was a contributor) in their earlier POC.

This NTIA framing document defines specific baseline data elements or fields that should be used to identify software components in any SBOM format, which can be mapped into corresponding field elements in SPDX:

NTIA Baseline        SPDX
-------------        ----
Supplier Name        (3.5) PackageSupplier:
Component Name       (3.1) PackageName:
Unique Identifier    (3.2) SPDXID:
Version String       (3.3) PackageVersion:
Component Hash       (3.10) PackageChecksum:
Relationship         (7.1) Relationship: CONTAINS
Author Name          (2.8) Creator:
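
Expressed in SPDX’s tag-value syntax, the baseline fields above might look like the following fragment. All names, versions, dates, and the (truncated) checksum are invented for illustration:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-device-sbom
Creator: Organization: Example Device Co.
Created: 2020-06-01T00:00:00Z

PackageName: example-firmware
SPDXID: SPDXRef-Package-firmware
PackageVersion: 1.4.2
PackageSupplier: Organization: Example Device Co.
PackageChecksum: SHA256: 0a1b2c...

PackageName: openssl
SPDXID: SPDXRef-Package-openssl
PackageVersion: 1.1.1g
PackageSupplier: Organization: OpenSSL Project

Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-Package-firmware
Relationship: SPDXRef-Package-firmware CONTAINS SPDXRef-Package-openssl
```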

The 2020 POC conducted by the NTIA working group had a stated objective: determine whether SBOMs generated by Medical Device Manufacturers (MDMs) using SPDX could be ingested into SIEM (Security Information and Event Management) solutions operated by the participating Healthcare Delivery Organizations (HDOs).

The MDMs in this POC were Abbott, Medtronic, Philips, Siemens, and Thermo Fisher. The HDOs were Cedars-Sinai, Christiana Care, Mayo Clinic, Cleveland Clinic, Johns Hopkins, New York-Presbyterian, Partners/Mass General, and Sutter Health.

Execution and implementation of the SPDX SBOMs

  • The participating HDOs provided an inventory of the deployed medical devices in use within their organizations.
  • A best-effort approach was used to determine software identity as the names that software packages are known by are “ambiguous” and could be misinterpreted.
  • An example SPDX was created along with a guidance document for the MDMs to follow for use with the medical devices identified by the HDO inventory exercise.
  • The MDMs produced 17 distinct SPDX-based SBOMs manually and with generator tooling.
  • The SBOMs were delivered via secure transfer using enterprise Box accounts, simulating delivery via secure customer portals offered by each MDM.

Consumption of the SBOMs in the SPDX POC

As a result of the 2020 POC, all participating HDOs successfully ingested the SPDX SBOM into their respective SIEM solutions, immediately making the data searchable to identify security vulnerabilities across a fleet of products. This information can also be converted into a human-readable, tabular format for other data analysis systems.

Multiple HDOs are already collaborating with vendor partners to explore direct ingestion into medical device asset/risk management solutions as part of their device procurement. One of the HDOs is working with one of their vendor partners to explore direct ingestion into a healthcare Vendor Risk Management (VRM) solution, and another has developed a “How-To Guide” focusing on how to correctly parse out the Packages fields using regular expressions (regex).
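
A minimal sketch of that kind of regex-based extraction of SPDX tag-value Package fields might look like the following (the HDO’s actual guide and expressions are not reproduced here; the package data is invented):

```python
import re

# Hypothetical excerpt of an SPDX 2.x tag-value SBOM; package names,
# versions, and suppliers are invented for illustration.
SPDX_TEXT = """\
PackageName: example-firmware
SPDXID: SPDXRef-Package-firmware
PackageVersion: 1.4.2
PackageSupplier: Organization: Example Device Co.
PackageName: openssl
SPDXID: SPDXRef-Package-openssl
PackageVersion: 1.1.1g
PackageSupplier: Organization: OpenSSL Project
"""

# One "Tag: value" pair per line; a PackageName line starts a new package
# record, and subsequent tags belong to it until the next PackageName.
TAG_RE = re.compile(r"^(?P<tag>[A-Za-z]+):\s*(?P<value>.+)$")

def parse_packages(text):
    packages = []
    current = None
    for line in text.splitlines():
        match = TAG_RE.match(line)
        if not match:
            continue
        tag, value = match.group("tag"), match.group("value")
        if tag == "PackageName":
            current = {"PackageName": value}
            packages.append(current)
        elif current is not None:
            current[tag] = value
    return packages

packages = parse_packages(SPDX_TEXT)
```

Parsed records like these can then be loaded into a SIEM or asset management system as structured, searchable data.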

As a positive indicator of SPDX’s suitability when used with asset management systems, two HDOs have begun configuring their respective internal tracking systems to track software dependencies and subcomponents. Additionally, multiple HDOs are collaborating with vendor partners to manage devices in medical device asset/risk management solutions throughout the device’s life by allowing for periodic updates and an audit trail.

Ongoing considerations for SPDX-based SBOMs for medical devices in healthcare organizations

Risk management, vulnerability management, and legal considerations are ongoing at the participating HDOs related to the use of SPDX-based SBOMs.

Risk management

All of the responding HDOs are exploring vulnerability identification upon procurement (i.e., SIEM through initial ingestion of the SBOM) and on an on-going basis (i.e., SIEM, CMDB/CMMS, VRM). The participating HDOs intend to explore mitigation plan / compensating control exercises that will be performed to identify vulnerable components, measure exploitability, implement risk reduction techniques, and document this data alongside the SBOM.
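
As a rough sketch of the vulnerability-identification step described above, components extracted from an ingested SBOM can be cross-referenced against a list of known-vulnerable versions. Everything below (component names, versions, and the advisory identifier) is a hypothetical placeholder; a real system would query a vulnerability feed such as the NVD:

```python
# Sketch of a vulnerability-identification pass over parsed SBOM components.
# KNOWN_VULNS is a hypothetical, hard-coded placeholder, not real advisory data.
KNOWN_VULNS = {
    ("openssl", "1.1.1g"): ["ADVISORY-0001"],  # illustrative identifier only
}

def find_vulnerable(components, vulns=KNOWN_VULNS):
    """Return (name, version, advisory) triples for components with known issues."""
    hits = []
    for component in components:
        key = (component.get("PackageName"), component.get("PackageVersion"))
        for advisory in vulns.get(key, []):
            hits.append((key[0], key[1], advisory))
    return hits

# Components as they might come out of an ingested SBOM (invented values).
sbom_components = [
    {"PackageName": "example-firmware", "PackageVersion": "1.4.2"},
    {"PackageName": "openssl", "PackageVersion": "1.1.1g"},
]
hits = find_vulnerable(sbom_components)
```

Exact name/version matching is the simplest possible approach; the ambiguity of software naming noted earlier is precisely why more robust identification remains an open problem.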

The SPDX community intends to learn from these exercises and improve future versions of the SPDX specification to include additional information determined to be needed to manage risk effectively.

Vulnerability management at HDOs

An HDO is already working with its Biomed team to manually perform vulnerability management processes on information extracted from SBOM data. 

Another is working with its Vulnerability Management team to evaluate correlated SBOM data against credentialed/non-credentialed scans of the same device, which may prove useful in an information audit use case. A third HDO is working with its Vulnerability Management team on leveraging SBOM data to supplement regular scanning results.

Participating HDOs have been developing SBOM product security language to add cybersecurity safeguards to the contract documentation.

Conclusion

The original POC was able to validate the conclusions of the NTIA Working Group that proprietary SBOM formats specific to healthcare industry verticals are not needed. This 2020 POC showed that the SPDX standard could be used as an open format for SBOMs for use by healthcare industry providers. Additionally, the ability to import the SPDX format into SIEM solutions will help HDOs adequately understand the operational and cyber risks of medical device software components from their originating supply chain. 

There is work ahead to improve automation of SPDX-based SBOMs, including the automated identification of software components and determining which component vulnerabilities are exploitable in a given system. Participating HDOs intend to perform compensating control exercises to identify and implement risk reduction techniques building on this information. HDOs are also evaluating how SPDX can support other improvements to vulnerability management. In summary, this POC showed that SPDX could be an essential part of addressing today’s operational and cyber risks.