The Open Source Security Foundation (OpenSSF) officially launched on August 3, 2020. In this article, we’ll look at why the OpenSSF was formed, what it’s accomplished in its first six months, and its plans for the future.

The world depends on open source software (OSS), so OSS security is vital. Various efforts have been created to help improve OSS security. These efforts include the Core Infrastructure Initiative (CII) in the Linux Foundation, the Open Source Security Coalition (OSSC) founded by the GitHub Security Lab, and the Joint Open Source Software Initiative (JOSSI) founded by Google and others.

It became apparent that progress would be easier if these efforts merged into a single effort. The OpenSSF was created in 2020 as a merging of these three groups into “a cross-industry collaboration that brings together leaders to improve the security of open source software (OSS).”

The OpenSSF has certainly achieved that “cross-industry collaboration”; its dozens of members include (alphabetically) Canonical, GitHub, Google, IBM, Intel, Microsoft, and Red Hat. Its governing board also includes a Security Community Individual Representative, specifically to represent those not otherwise represented. It has also created structures to help people work together: it has established active working groups, identified (and posted) its values, and agreed on its technical vision.

But none of that matters unless the foundation actually produces results. It’s still early, but the OpenSSF already has several accomplishments. It has released:

  • Secure Software Development Fundamentals courses. This set of three freely available courses on the edX platform teaches software developers how to develop secure software. It focuses on practical steps that any software developer can easily take, not on theory or on actions requiring unlimited resources. Developers can also pay a fee to take tests and earn certificates proving they understand the material.
  • Security Scorecards. This tool auto-generates a “security score” for open source projects to help users assess a project’s trust, risk, and security posture for their use case.
  • Criticality Score. This project auto-generates a criticality score for open source projects based on a number of parameters. The goal is to better understand the most critical open source projects the world depends on.  
  • Security metrics dashboard. This early-release work provides a dashboard of security and sustainment information about OSS projects by combining the Security Scorecards, CII Best Practices, and other data sources.
  • OpenSSF CVE Benchmark. This benchmark consists of vulnerable code and metadata for over 200 historical JavaScript/TypeScript vulnerabilities (CVEs). This will help security teams evaluate different security tools on the market by enabling teams to determine false positive and false negative rates with real codebases instead of synthetic test code.
  • OWASP Security Knowledge Framework (SKF). In collaboration with OWASP, this work is a knowledge base that includes projects with checklists and best practice code examples in multiple programming languages. It includes training materials for developers on how to write secure code in specific languages and security labs for hands-on work.
  • Report on the 2020 FOSS Contributor Survey. The OpenSSF and the Laboratory for Innovation Science at Harvard (LISH) released a report detailing the findings of a contributor survey that studied ways to improve OSS security and sustainability. There were nearly 1,200 respondents.
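The criticality score in the list above combines several project signals (such as contributor count and number of dependents) into a single number. As a rough illustration of how such weighted, log-scaled aggregation can work (the signal names, weights, and thresholds here are invented for the example, not the project’s exact parameters):

```python
import math

def criticality_score(signals, weights, thresholds):
    """Combine project signals into a 0..1 score. Each signal is
    log-scaled so that enormous values don't dominate, then weighted
    and normalized by the total weight."""
    total = 0.0
    for name, value in signals.items():
        alpha = weights[name]
        cap = thresholds[name]  # value at which this signal saturates
        total += alpha * math.log(1 + value) / math.log(1 + max(value, cap))
    return total / sum(weights.values())

# Illustrative inputs only: real signals come from repository data.
signals = {"contributors": 400, "dependents": 120_000, "commit_frequency": 30}
weights = {"contributors": 2.0, "dependents": 3.0, "commit_frequency": 1.0}
thresholds = {"contributors": 5000, "dependents": 500_000, "commit_frequency": 1000}

score = criticality_score(signals, weights, thresholds)  # a value between 0 and 1
```

Because each term is capped at its weight, the normalized result always lands between 0 and 1, which makes scores comparable across projects.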

The existing CII Best Practices badge project has also been folded into the OpenSSF and continues to be improved. The project now has more Chinese translators, a new ongoing Swahili translation, and various small refinements that clarify the badging requirements.

The November 2020 OpenSSF Town Hall discussed the OpenSSF’s ongoing work. The OpenSSF currently has the following working groups:

  • Vulnerability Disclosures
  • Security Tooling
  • Security Best Practices
  • Identifying Security Threats to Open Source Projects (focusing on a metrics dashboard)
  • Securing Critical Projects
  • Digital Identity Attestation

Future potential work, other than continuously improving work already released, includes:

  • Identifying overlapping and related security requirements in various specifications to reduce duplicate effort. This is to be developed in collaboration with OWASP as lead and is termed the Common Requirements Enumeration (CRE). The CRE is to “link sections of standard[s] and guidelines to each other, using a mutual topic identifier, enabling standard and scheme makers to work efficiently, enabling standard users to find the information they need, and attaining a shared understanding in the industry of what cyber security is.” [Source: “Common Requirements Enumeration”]
  • Establishing a website for no-install access to a security metrics OSS dashboard. Again, this will provide a single view of data from multiple data sources, including the Security Scorecards and CII Best Practices.
  • Developing improved identification of critical OSS projects. Harvard and the LF have previously worked to identify critical OSS projects. In the coming year, they will refine their approaches and add new data sources to identify critical OSS projects better.
  • Funding specific critical OSS projects to improve their security. The expectation is that this will focus on critical OSS projects that are not otherwise being adequately funded and will work to improve their overall sustainability.
  • Identifying and implementing improved, simplified techniques for digitally signing commits and verifying those identity attestations.

As with all Linux Foundation projects, the work by the OpenSSF is decided by its participants. If you are interested in the security of the OSS we all depend on, check out the OpenSSF and participate in some way. The best way to get involved is to attend the working group meetings — they are usually every other week and very casual. By working together we can make a difference. For more information, see

David A. Wheeler, Director of Open Source Supply Chain Security at the Linux Foundation

Bruce Schneier reconsiders the definition of trust in his keynote presentation from the recent Hyperledger Global Forum.

Blockchains have to be trusted in order for them to succeed, and public blockchains can cause problems you may not think about, according to Bruce Schneier, a fellow and lecturer at the Harvard Kennedy School, in his keynote address at December’s Hyperledger Global Forum on “Security, Trust and Blockchain.”

Schneier began his talk by citing a quote from Bitcoin’s anonymous developer, Satoshi Nakamoto, who said “We have proposed a system for electronic transactions without relying on trust.”

“That’s just not true,” Schneier said. “Bitcoin is not a system that doesn’t rely on trust.” It eliminates certain trust intermediaries, but you have to somehow trust Bitcoin, he noted. Generally speaking, the Bitcoin system changes the nature of trust.

Schneier called himself a big fan of “systems thinking,” which is what the issue boils down to, he said. Systems thinking is “in too short supply in the tech world right now,” he maintained, and “we need a lot more of it.”

Trust relationships

Schneier’s talk focused on the data structures and protocols that make up a public blockchain. He called private blockchains “100 percent uninteresting,” explaining that they’re easy to create and secure, they don’t need any special properties, and they’ve been around for years.

Public blockchains are what’s new, he noted. They have three elements that make them work:

  • The ledger, which is the record of what happened and in what order
  • The consensus algorithm, which ensures all copies of the ledger are the same
  • The token, which is the currency
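The first of those elements, the ledger, is an ordered, tamper-evident record, typically implemented as a hash chain. A minimal sketch of the idea (a toy, not any particular blockchain’s format):

```python
import hashlib
import json

def add_block(chain, transactions):
    """Append a block whose hash covers the previous block's hash,
    so altering any earlier block invalidates every later one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "prev_hash": prev_hash, "tx": transactions}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "prev_hash", "tx")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
ok = verify(chain)           # True for an untampered chain
chain[0]["tx"] = ["alice pays mallory 5"]
tampered_ok = verify(chain)  # False after rewriting history
```

The consensus algorithm and the token are what turn this simple data structure into a public blockchain: they decide who may append blocks and why anyone bothers to.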

All the pieces fit together as a single system, and whether they can achieve anything gets back to the issue of trust, he said.

When he reads some of the comments of blockchain enthusiasts, such as “in code we trust,” “in math we trust,” and “in crypto we trust,” Schneier believes they have “an unnaturally narrow definition of trust.”

Verification can support trust, but you cannot replace trust with verification, he stated. For example, Schneier recounted waking up in his hotel room and trusting that the keys worked, naturally trusting the people who prepared his breakfast, and trusting that none of the people he encountered on his way to the forum would attack him.

“Trust is essential to society,” he said. “Humans, as a species, are very trusting.” And, he continued, “The fact that we don’t think about it most of the time is a measure that trust works.”

Trust architectures

Schneier cited the book, Blockchain and The New Architecture of Trust, by Kevin Werbach, in which the author outlines the following four different trust architectures:

  • Peer-to-peer trust
  • Leviathan trust, which is institutional and involves contracts
  • Intermediary trust, like PayPal or credit cards that make a transaction work
  • Distributed trust, which is what blockchain enables — an emergent trust in the system without any individuals in the system trusting each other

“Blockchain shifts trust in people and institutions to trust in technology,” Schneier said. This means having to trust the cryptography, the software, the computers, the network, and the people who are making all of this work, he said. Along the way there are a lot of single points of failure, and if a blockchain gets hacked or you forget your credentials, you lose your money.

It comes down to the question of who you would rather trust: a human legal system or the details of computer code? Schneier said that, in a lot of ways, trusting technology is a lot harder than trusting people. Institutional trust is still needed, he said, because you still need people to be responsible for these systems.

Bitcoin might theoretically be based on distributed trust, “but practically, that’s just not true.” You have to trust the wallets and the exchanges, and there are not many of either, as well as the software, the operating systems, and the computers that the blockchain runs on, he said.

“If you think about the attacks on Bitcoin, this is where they are: they don’t go against the math, they go against the computer science.” There is always a need for governance outside the system, and a need to override the rules and make changes when necessary, he stressed.

Blockchain systems will always have to coexist with other, more conventional systems, and Bitcoin will always need to interoperate with the rest of the financial world, he said. “That interface, with its laws and norms, often requires breaking the trust architecture of the blockchain system.” For example, a Bitcoin system where transactions clear immediately cannot cleanly interoperate with a credit card system where transactions clear in three days, he said.

A key feature of trust is that if the transaction goes bad or if your credentials are stolen, you get your money back, Schneier said. At the same time, trust is expensive. The reason people don’t use Bitcoin is because they don’t trust it, not because of the cryptography or the protocols, he maintained.

Human element

“A currency that is volatile is not particularly trustworthy,” he said. “That’s the human way of looking at trust.” Ethereum is an interesting example of how trust works in practice: “The fact that we have hard forks means we still need trusted people. This trust is a lot more complicated than transaction verification.” People will choose a Bitcoin exchange or wallet based on reputation, he said, whether it’s something they read or a recommendation from a friend.

He concluded his talk by noting that trust is ultimately social: a human thing.

“So truly understanding this requires systems thinking. I really want everybody who designs and implements blockchains to understand the systems they’re working in,” Schneier said, meaning not just the technology aspect but the social parts and how they work. He suggested people start by asking whether they actually need a public blockchain.

“I think the answer is almost certainly no, and by this I’m answering the security question, not the marketing question,’’ he said. “Blockchains likely don’t solve the security problems you think they solve,” and they cause other problems you don’t think about, like inefficiencies, especially scaling. Schneier said there are almost always simpler and better ways to achieve the same security properties.

He advised the audience to look at the trust architecture and ask whether the blockchain “will change it in any meaningful way or does it just shift it around to no real effect?” He also asked them to consider whether the blockchain merely replaces trust with verification, and which aspects of trust it tries, and fails, to fix.

“Does it strengthen existing trust relationships, or does it go against them? Are the trust intermediaries of the new architecture better or worse than those of the old architecture? How can trust be abused in the new system?” he said. “Is it better or worse than the old system and, lastly, what would the same system look like if it didn’t use blockchain?”

In most cases, Schneier said, his guess is that people will choose solutions that don’t use public blockchains because of all the problems they bring. “I’m not saying that they’re useless,” he added, “but I have yet to find an example where the things they do are worth the problems they bring.”

Watch the entire presentation below:

Other session recordings can be found on the Hyperledger YouTube channel.

Open Source Summit

Greg Kroah-Hartman talks about the importance of community interaction, and the upcoming Open Source Summit.

People might not think about the Linux kernel all that much when talking about containers, serverless, and other hot technologies, but none of them would be possible without Linux as a solid base to build on, says Greg Kroah-Hartman.  He should know. Kroah-Hartman maintains the stable branch of the Linux kernel along with several subsystems.  He is also co-author of the Linux Kernel Development Report, a Fellow at The Linux Foundation, and he serves on the program committee for Open Source Summit.

Greg Kroah-Hartman (right) talks about the upcoming Open Source Summit. (Image copyright: Swapnil Bhartiya)

In this article, we talk with Kroah-Hartman about his long involvement with Linux, the importance of community interaction, and the upcoming Open Source Summit.

The Linux Foundation: New technologies (cloud, containers, machine learning, serverless) are popping up on a weekly basis. What’s the importance of Linux in this changing landscape?

Greg K-H: There’s the old joke, “What’s a cloud made of? Linux servers.” That is truer than most people realize. All of those things you mention rely on Linux as a base technology to build on top of.  So while people might not think about “Linux the kernel” all that much when talking about containers, serverless and the other “buzzwords of the day,” none of them would be possible without Linux being there to ensure that there is a rock-solid base for everyone to build on top of.  

The goal of an operating system is to provide a computing platform to userspace that looks the same no matter what hardware it runs on top of.  Because of this, people can build these other applications and not care if they are running it locally on a Raspberry Pi or in a cloud on a shared giant PowerPC cluster as everywhere the application API is the same.

So, Linux is essential for all of these new technologies to work properly and scale and move to different places as needed.  Without it, getting any of those things working would be a much more difficult task.

LF: You have been involved with Linux for a very long time. Has your role changed within the community? You seem to focus a lot on security these days.

Greg K-H: I originally started out as a driver writer, then helped write the security layer in the kernel many many years ago.  From there I started to maintain the USB subsystem and then co-created the driver model. From there I ended up taking over more driver subsystems and when the idea for the stable kernel releases happened back in 2005, I was one of the developers who volunteered for that.

So for the past 13 years, I’ve been doing pretty much the same thing; not much has changed since then except the increased number of stable trees I maintain at the same time to try to keep devices in the wild more secure.

I’ve been part of the kernel security team, I think since it was started back in the early 2000s, but that role is more of a “find who to point the bug at” type of thing.  The kernel security team is there to help take security problem reports and route them to the correct developer who maintains or knows that part of the kernel best.  The team has grown over the years as we have added the people that ended up getting called on the most to reduce the latency between reporting a bug and getting it fixed.

LF: We agree that Linux is being created by people all over the map, but once in a while it’s great to meet people in person. So, what role does Open Source Summit play in bringing these people together?

Greg K-H: Because open source projects are developed by people who work for different companies and live in different places, it’s important to get together whenever possible to actually meet the people behind the email.  Development is an interaction that depends on trust: if I accept patches from you, then I am now responsible for those changes as well. If you disappear, I am on the hook for them, so either I need to ensure they are correct, or, even better, I can know that you will be around to fix the code if there is a problem.  By meeting people directly, you can establish a face behind the email to help smooth over any potential disagreements that can easily happen due to the lack of “tone” in online communication.

It’s also great to meet developers of other projects to hear of ways they are abusing your project to bend it to their will, or to learn of problems they are having that you did not know about.  Or just to learn about new things being developed in totally different development groups.  The huge range of talks at a conference like this makes it easy to pick up on what is happening in many different developer communities.

LF: You obviously meet a lot of people during the event. Have you ever come across an incident where someone ended up becoming a contributor or maintainer because of the exposure such an event provided?

Greg K-H: At one of the OSS conferences last year, I met a college student who was attending the conference for the first time.  They mentioned that they were looking for any project ideas that someone with their skill level could help out with. At a talk later that day, a new idea came up for how to unify a specific subsystem of the kernel, and how it was going to “just take a bunch of grunt work” to accomplish.  Later that night, at the evening event, I saw the student again, mentioned the project to them, and pointed them at the developer who had asked for the help. They went off to talk in the corner about the specifics of what needed to be done.

A few weeks later, a lot of patches started coming from the student and, after a few rounds of review, were accepted by the maintainer.  More patches followed, and eventually the majority of the work was done. It was great to see; the kernel really benefited from their contribution.

This year, I ran into the student again at another OSS conference and asked them what they were doing now.  Turns out they had gotten a job offer and were working for a Linux kernel company doing development on new products during their summer break.  Without that first interaction, meeting the developers directly that worked on the subsystem that needed the help, getting a job like that would have been much more difficult.

So, while I’m not saying that everyone who attends one of these types of conferences will instantly get a job, you will interact with developers who know what needs to be done in different areas of their open source projects.  And from there it is almost an easy jump to getting solid employment with one of the hundreds of companies that rely on these projects for their business.

LF: Are you also giving any talks at Open Source Summit?

Greg K-H:  I’m giving a talk about the Spectre and Meltdown problems that have happened this year.  It is a very high-level overview, going into the basics of what they are and describing when the many different variants were announced and fixed in Linux.  This is a new type of security problem that is going to be with us for a very long time, and I give some good tips on how to stay on top of the problem and ensure that your machines are safe.

Sign up to receive updates on Open Source Summit North America:


A recent webinar, Get Involved: How to Get Started with Hyperledger Projects, focuses particularly on making Hyperledger projects more approachable.

Few technology trends have as much momentum as blockchain — which is now impacting industries from banking to healthcare. The Linux Foundation’s Hyperledger Project is helping drive this momentum as well as providing leadership around this complex technology, and many people are interested in getting involved. In fact, Hyperledger nearly doubled its membership in 2017 and recently added Deutsche Bank as a new member.  

A recent webinar, Get Involved: How to Get Started with Hyperledger Projects, focuses particularly on making Hyperledger projects more approachable. The free webinar is now available online and is hosted by David Boswell, Director of Ecosystem at Hyperledger and Tracy Kuhrt, Community Architect.

Hyperledger Fabric, Sawtooth, and Iroha

Hyperledger currently consists of 10 open source projects, seven that are in incubation and three that have graduated to active status.  “The three active projects are Hyperledger Fabric, Hyperledger Sawtooth, and Hyperledger Iroha,” said Boswell.

Fabric is a platform for distributed ledger solutions, underpinned by a modular architecture. “One of the major features that Hyperledger Fabric has is a concept called channels. Channels are a private sub-network of communication between two or more specific network members for the purpose of conducting private and confidential transactions.”

According to the website, Hyperledger Iroha is designed to be easy to incorporate into infrastructural projects requiring distributed ledger technology. It features simple construction, with emphasis on mobile application development.

Hyperledger Sawtooth is a modular platform for building, deploying, and running distributed ledgers, and you can find out more about it in this post.  One of the main attractions Sawtooth offers is “dynamic consensus.”

“This allows you to change the consensus mechanism that’s being used on the fly via a transaction, and this transaction, like other transactions, gets stored on the blockchain,” said Boswell. “With Hyperledger Sawtooth, there are ways to explicitly let the network know that you are making changes to the same piece of information across multiple transactions. By being able to provide this explicit knowledge, users are able to update the same piece of information within the same block.”
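The “dynamic consensus” idea Boswell describes, that a change to the consensus mechanism is itself recorded as an ordinary on-chain transaction, can be illustrated with a toy replay function (this is a conceptual sketch, not Sawtooth’s actual settings API):

```python
def apply_transactions(txs, default_consensus="poet"):
    """Replay a transaction log. A 'set-consensus' transaction switches
    the consensus engine from that point on, and because the switch is
    stored in the log itself, the change is part of the auditable history."""
    consensus_history = []
    current = default_consensus
    for tx in txs:
        if tx.get("type") == "set-consensus":
            current = tx["value"]
        consensus_history.append(current)
    return consensus_history

# Hypothetical transaction log for illustration.
log = [
    {"type": "payment", "amount": 5},
    {"type": "set-consensus", "value": "raft"},
    {"type": "payment", "amount": 2},
]
history = apply_transactions(log)  # ["poet", "raft", "raft"]
```

Because every node replays the same log, every node arrives at the same answer for which consensus engine was in force at each point, with no out-of-band coordination.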

Sawtooth can also facilitate smart contracts. “You can write your smart contract in a number of different languages, including C++, JavaScript, Go, Java, and Python,” said Boswell. Demonstrations and resources for Sawtooth are available here:

How to contribute to Hyperledger projects

In the webinar, Kuhrt and Boswell explain how you can contribute to Hyperledger projects. “All of our working groups are open to anyone that wants to participate, including the training and education working group,” said Kuhrt. “This particular working group meets on a biweekly basis and is currently working to determine where it can have the greatest impact. I think this is really a great place to get in at the start of something happening.”

What are the first steps if you want to make actual project contributions? “The first step is to explore the contributing guide for a project,” said Kuhrt. “All open source projects have a document at the root of their source directory called contributing, and these guides are really to help you find information about how you’d file a bug, what kind of coding standards are followed by the project, where to find the code, where to look for issues that you might start working with, and requirements for pull requests.”

Now is a great time to learn about Hyperledger and blockchain technology, and you can find out more in the next webinar coming up May 31:

Blockchain and the enterprise. But what about security?

Date: Thursday, May 31, 2018
Time: 10:00 AM Pacific Daylight Time

This talk will leave you with an understanding of how blockchain does, and does not, change the security requirements for your enterprise. Sign up now!

Submit to Speak at Hyperledger Global Forum

Hyperledger Global Forum will offer the unique opportunity for more than 1,200 users and contributors of Hyperledger projects from across the globe to meet, align, plan, and hack together in-person. Share your expertise and speak at Hyperledger Global Forum! We are accepting proposals through Sunday, July 1, 2018. Submit Now >>

Software Security

Software security requires discipline and diligence, said Mårten Mickos, speaking at the Open Source Leadership Summit.

Achieving effective security takes constant discipline and effort on everyone’s part – not just one team or group within a company. That was Mårten Mickos’s message in his keynote speech appropriately titled, “Security is Everyone’s Responsibility,” at The Linux Foundation’s recent Open Source Leadership Summit (OSLS).  

Mickos, CEO of HackerOne, which he described as a “hacker-powered security company,” told the audience that $100 billion has been spent on cybersecurity, yet, “Half of the money is wasted. We’ve been buying hardware and software and machines and walls and all kinds of stuff thinking that that technology and [those] products will make us secure. But that’s not true.”

Even if you ply your network with hardware to create a perimeter around it, it won’t make your organization any more secure, Mickos said. The answer is much simpler, he maintained, and the magic bullet is sharing.

“You share the defense, you share information, you work together,” he said. “You can’t have secure software if just some of your software engineers are in charge of security. You can’t just delegate it or relegate it to a security team. If you do that, it won’t happen.”

Mickos likened that approach to the 1990s, when companies had quality managers and people got ISO certifications. “It didn’t help. It reduced quality in the companies, because people felt that quality now was the job of somebody else, not of you.”


Software security, Mickos said, “only happens when we’re very disciplined.”

Mickos’ company has 160,000 contributors, including security researchers, ethical hackers, and “white hats”: people who have signed up to find flaws in software, he said.  Security vulnerabilities can exist even when there are no bugs, he noted, adding that HackerOne hacked the U.S. Air Force in eight minutes.

“We found 200 vulnerabilities in the Air Force’s systems, 20 of those were found by Jack Cable, a 17-year-old high school student from Chicago, Ill.,” he said.

HackerOne has fixed over 65,000 security vulnerabilities, Mickos claimed. “So that has removed a lot of holes where criminals could have entered. But there are still tens of millions of vulnerabilities; no one knows the exact number. But if we deploy 100 billion lines of code every year … there’s a lot of security to look after.”

Pooled Defense

In his speech, Mickos promoted the notion of a “pooled defense”: the idea that the number of defenders is far larger than the number of bad guys. He said there are far more white hats in the world than there are cyber criminals or “black hats.”

Cyber threats are often characterized as being asymmetric, he said, in the sense that one single criminal attacker can cause a lot of harm — so much so that a company needs 100 people to defend against it.

“If companies can get together and pool their defense … suddenly you have 10 times the power of the attackers,” he said. “If you share information, share the defense, share best practices, and share the act of responding to threats, then you overcome the asymmetry and you turn it around.”

It takes discipline and diligence, Mickos said, recalling how Equifax had “so many failures and acts of negligence or … omissions in the way they handle security,” and that “it was one single software vulnerability that led to the data breach in their systems.” Meanwhile, he added, “There’s nobody here who has a software system with just one vulnerability.”

While people often complain about long passwords or having to use multi-factor authentication because it is so time-consuming, they had better get used to it, he cautioned.

“Security doesn’t come for free. The only thing that … acts against these threats is the discipline and diligence [and] remembering long passwords,” Mickos said. “Even when somebody invents a method where we don’t need passwords anymore, you will be asked to do something else which is burdensome, every day, and where you’re not allowed to miss it one single time.”

Mickos also had a message for educational institutions: “Don’t call it computer science and software engineering unless there’s security in it. Today, you can graduate in CS without taking a single course in security.” He said he didn’t pay attention to the importance of security when he was in college, but different times call for a different approach. Today, security “has to become part of everything we do.”

We Can Turn the Ship

When everyone recognizes that security is a shared responsibility, he stressed, “the ship will turn. It’s a big ship, so it turns slowly, but it will turn, and we will get to a state that is similar to what we have with airline safety or hospital hygiene or … automotive safety, where today it all works. But it works because we do it together and we jointly take responsibility for it.”

Watch the complete presentation below:

Linux Kernel Development

Part of the ongoing Linux development work involves hardening the kernel against attack.

Security is paramount these days for any computer system, including those running on Linux. Thus, part of the ongoing Linux development work involves hardening the kernel against attack, according to the recent Linux Kernel Development Report.

This work, according to report authors Jonathan Corbet and Greg Kroah-Hartman, involves the addition of several new technologies, many of which have their origin in the grsecurity and PaX patch sets. “New hardening features include virtually mapped kernel stacks, the use of the GCC plugin mechanism for structure-layout randomization, the hardened usercopy mechanism, and a new reference-count mechanism that detects and defuses reference-count overflows. Each of these features makes the kernel more resistant to attack,” the report states.
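The reference-count mechanism mentioned in the report detects overflow and “defuses” it by saturating rather than wrapping around to zero; wrap-around is what attackers exploit to trigger a premature free and a use-after-free. The kernel’s refcount_t is written in C with atomic operations; the sketch below only illustrates the saturation behavior, with an invented ceiling:

```python
REFCOUNT_MAX = 2**31 - 1  # illustrative ceiling, mirroring a signed 32-bit counter

class RefCount:
    """Counter that saturates instead of wrapping on overflow, so a bug
    that takes too many references can never drive the count back to
    zero and cause the object to be freed while still in use."""
    def __init__(self):
        self.value = 1
        self.saturated = False

    def inc(self):
        if self.saturated:
            return  # once pinned, the count never moves again
        if self.value >= REFCOUNT_MAX:
            self.saturated = True  # overflow detected: pin instead of wrapping
        else:
            self.value += 1

    def dec_and_test(self):
        """Return True when the last reference is dropped and the
        object should be freed."""
        if self.saturated:
            return False  # a saturated object is deliberately leaked, never freed
        self.value -= 1
        return self.value == 0

r = RefCount()
r.inc()
freed_early = r.dec_and_test()   # False: one reference remains
freed_last = r.dec_and_test()    # True: last reference dropped

overflowed = RefCount()
overflowed.value = REFCOUNT_MAX
overflowed.inc()                 # triggers saturation instead of wrapping
```

Leaking the saturated object is the deliberate trade-off: a memory leak is far safer than the use-after-free that a wrapped counter would allow.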


Kees Cook

In this series, we are highlighting some of the hard-working developers who contribute to the Linux kernel. Here, Kees Cook, Software Engineer at Google, answers a few questions about his work.

Linux Foundation: What role do you play in the community and what subsystem(s) do you work on?

Kees Cook: Recently, I organized the Kernel Self-Protection Project (KSPP), which has helped focus lots of other developers to work together to harden the kernel against attack. I’m also the maintainer of seccomp, pstore, LKDTM, and gcc-plugin subsystems, and a co-maintainer of sysctl.

Linux Foundation: What have you been working on this year?

Cook: I’ve been focused on KSPP work. I’ve assisted many other developers by helping port, develop, test, and shepherd things like hardened usercopy, gcc plugins, KASLR improvements, PAN emulation, refcount_t conversion, and stack protector improvements.

Linux Foundation: What do you think the kernel community needs to work on in the upcoming year?

Cook: I think we’ve got a lot of work ahead in standardizing the definitions of syscalls (to help run-time checkers), and continuing to identify and eliminate error-prone code patterns (to avoid common flaws). Doing these kinds of tree-wide changes continues to be quite a challenge for contributors because the kernel development model tends to focus on per-subsystem development.

Linux Foundation: Why do you contribute to the Linux kernel?

Cook: I’ve always loved working with low-level software, close to the hardware boundary. I love the challenges it presents. Additionally, since Linux is used in all corners of the world, it’s hard to find a better project to contribute to that has such an impact on so many people’s lives.

You can learn more about the Linux kernel development process and read more developer profiles in the full report. Download the 2017 Linux Kernel Development Report now.

A guest blog post by Mike Goodwin.

What is threat modeling?

Application threat modeling is a structured approach to identifying ways that an adversary might try to attack an application and then designing mitigations to prevent, detect, or reduce the impact of those attacks. The description of an application’s threat model is one of the criteria for the CII Best Practices Silver badge.

Why threat modeling?

It is well established that defense-in-depth is a key principle for network security and the same is true for application security. But although most application developers will intuitively understand this as a concept, it can be hard to put it into practice. After many years and sleepless nights, worrying and fretting about application security, one thing I have learned is that threat modeling is an exceptionally powerful technique for building defense-in-depth into an application design. This is what first attracted me to threat modeling. It is also great for identifying security flaws at design time where they are cheap and easy to correct. These kinds of flaws are often subtle and hard to detect by traditional testing approaches, especially if they are buried in the innards of your application.

Three stages of threat modeling

There are several ways of doing threat modeling ranging from formal methodologies with nice acronyms (e.g. PASTA) through card games (e.g. OWASP Cornucopia) to informal whiteboard sessions. Generally though, the technique has three core stages:

Decompose your application – This is almost always done using some kind of diagram. I have seen successful threat modeling done using many types of diagrams, from UML sequence diagrams to informal architecture sketches. Whatever format you choose, it is important that the diagram shows how the different internal components of your application and external users/systems interact to deliver its functionality. My preferred type of diagram is a Data Flow Diagram with trust boundaries.

Identify threats – In this stage, the threat modeling team asks questions about the component parts of the application and (very importantly) the interactions or data flows between them to work out how someone might try to attack it. The answers to these questions are the threats. Typical questions and resulting threats are:

  • Question: What assumptions is this process making about incoming data? What if they are wrong?
    Threat: An attacker could send a request pretending to be another person and access that person’s data.
  • Question: What could an attacker do to this message queue?
    Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.
  • Question: Where might an attacker tamper with the data in the application?
    Threat: An attacker could modify an account number in the database to divert payment to their own account.

Design mitigations – Once some threats have been identified the team designs ways to block, avoid or minimize the threats. Some threats may have more than one mitigation. Some mitigations might be preventative and some might be detective. The team could choose to accept some low-risk threats without mitigations. Of course, some mitigations imply design changes, so the threat model diagram might have to be revisited.

  • Threat: An attacker could send a request pretending to be another person and access that person’s data.
    Mitigation: Identify the requestor using a session cookie and apply authorization logic.
  • Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.
    Mitigations: Digitally sign messages on the queue and validate their signatures before processing. Maintain a retry count on messages and discard them after three retries.
  • Threat: An attacker could modify an account number in the database to divert payment to their own account.
    Mitigations: Preventative – restrict access to the database using a firewall. Detective – log all changes to bank account numbers and audit the changes.
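As a rough sketch (class and field names here are illustrative, not part of any threat modeling tool’s API), the output of the identify/mitigate stages can be captured as simple data, which makes it easy to flag threats that were never addressed:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str
    mitigations: list = field(default_factory=list)  # e.g. "Preventative: ..."
    accepted: bool = False  # low-risk threats may be accepted unmitigated

def unmitigated(threats):
    """Return threats that are neither mitigated nor explicitly accepted."""
    return [t for t in threats if not t.mitigations and not t.accepted]

model = [
    Threat("Attacker impersonates another user to access their data",
           mitigations=["Identify the requestor via session cookie and "
                        "apply authorization logic"]),
    Threat("Poison message on the queue crashes the receiving process",
           mitigations=["Digitally sign messages and validate signatures",
                        "Discard messages after three retries"]),
    Threat("Verbose error page leaks framework version", accepted=True),
]

# Every threat is either mitigated or consciously accepted:
assert unmitigated(model) == []
```

A check like this is the kind of rule a threat/mitigation rule engine could apply automatically across model iterations.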

OWASP Threat Dragon

Threat modeling can be usefully done with a pen, whiteboard and one or more security-aware people who understand how their application is built, and this is MUCH better than not threat modeling at all. However, to do it effectively with multiple people and multiple project iterations you need a tool. Commercial tools are available, and Microsoft provides a free tool for Windows only, but established, free, open-source, cross-platform tools are non-existent. OWASP Threat Dragon aims to fill this gap. The aims of the project are:

  • Great UX – Using Threat Dragon should be simple, engaging and fun
  • A powerful threat/mitigation rule engine – This will lower the barrier to entry for teams and encourage non-specialists to contribute
  • Integration with other development lifecycle tools – This will ensure that models slot easily into the developer workflows and remain relevant as the project evolves
  • To always be free, open-source (like all OWASP projects) and cross-platform. The full source code is available on GitHub

The tool comes in two variants:

  • A web application
  • A desktop application

End-user documentation is available for both variants and, most importantly, it has a cute logo called Cupcakes…

Threat Dragon is an OWASP Incubator Project – so it is still at an early stage, but it can already support effective threat modeling. The near-term roadmap for the tool is to:

  • Achieve a Linux CII Best Practices badge for the project
  • Implement the threat/mitigation rule engine
  • Continue to evolve the usability of the tool based on real-world feedback from users
  • Establish a sustainable hosting model for the web application

If you want to harden your application designs you should definitely give threat modeling a try. If you want a tool to help you, try OWASP Threat Dragon! All feedback, comments, issue reports and pull requests are very welcome.

About the author: Mike Goodwin is a full-time security professional at the Sage Group where he leads the team responsible for product security. Most of his spare time is spent working on Threat Dragon or co-leading his local OWASP chapter.

This article originally appeared on the Core Infrastructure Initiative website.

OS Summit keynotes

If you weren’t able to attend Open Source Summit and Embedded Linux Conference (ELC) Europe last week, don’t worry! We’ve recorded keynote presentations from both events and all the technical sessions from ELC Europe to share with you here.

Check out the on-stage conversation with Linus Torvalds and VMware’s Dirk Hohndel, opening remarks from The Linux Foundation’s Executive Director Jim Zemlin, and a special presentation from 11-year-old CyberShaolin founder Reuben Paul. You can watch these and other ELC and OS Summit keynotes below for insight into open source collaboration, community and technical expertise on containers, cloud computing, embedded Linux, Linux kernel, networking, and much more.

And, you can watch all 55+ technical sessions from Embedded Linux Conference here.

Riyaz Faizullabhoy, Docker Security Engineer, announced today on stage at Open Source Summit Europe that the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) has voted Notary in as its 13th hosted project and TUF in as its 14th hosted project.

“With every project presented to the CNCF, the TOC evaluates what that project provides to the cloud native ecosystem,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “Notary and the TUF specification address a key challenge for enterprises working with containers by providing a solution for trusted, cross-platform delivery of content. We are excited to have these projects come in as one collective contribution to CNCF and look forward to cultivating their communities.”

Notary Based on The Update Framework (TUF) specification

The Docker Platform (including Enterprise Edition and Community Edition), the Moby Project, Huawei, Motorola Solutions, VMware, LinuxKit, Quay, and Kubernetes have all integrated Notary/TUF.

Originally created by Docker in June 2015, Notary is based on The Update Framework (TUF) specification, a secure general design for the problem of software distribution and updates. TUF helps developers to secure new or existing software update systems, which are often found to be vulnerable to many known attacks. TUF addresses this widespread problem by providing a comprehensive, flexible security framework that developers can integrate with any software update system.

Notary is one of the industry’s most mature implementations of the TUF specification, and its Go implementation is used today to provide robust security for container image updates, even in the face of a registry compromise. Notary takes care of the operations necessary to create, manage, and distribute the metadata needed to ensure the integrity and freshness of user content. Notary/TUF provides both a client and a pair of server applications to host signed metadata and perform limited online signing functions.

Image 1: Diagram illustrating the interactions between the Notary client, server, and signer

It is also beginning to gain traction outside the container ecosystem as platforms like Kolide use Notary to secure distribution of osquery through their auto-updater.
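As a conceptual sketch only, the freshness and integrity guarantees described above boil down to checks like the ones below. Real TUF adds role separation, threshold signatures, and key rotation; this toy function is not the python-tuf or Notary API:

```python
import hashlib
import time

def verify_update(metadata, payload, last_seen_version):
    """Reject stale, rolled-back, or tampered content (toy TUF-style checks)."""
    if metadata["expires"] < time.time():
        raise ValueError("metadata expired: possibly a frozen or stale mirror")
    if metadata["version"] <= last_seen_version:
        raise ValueError("rollback: version did not increase")
    if hashlib.sha256(payload).hexdigest() != metadata["sha256"]:
        raise ValueError("payload hash mismatch: content was tampered with")
    return metadata["version"]

payload = b"new release tarball bytes"
metadata = {
    "version": 7,
    "expires": time.time() + 3600,  # signed expiry limits freeze attacks
    "sha256": hashlib.sha256(payload).hexdigest(),
}

assert verify_update(metadata, payload, last_seen_version=6) == 7
```

Because the expiry and version are part of the signed metadata, a compromised registry cannot keep serving an old, vulnerable release indefinitely.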

“In a developer’s workflow, security can often be an afterthought; however, every piece of deployed code from the OS to the application should be signed. Notary establishes strong trust guarantees to prevent malicious content from being injected into the workflow processes,” said David Lawrence, Senior Software Engineer at Docker. “Notary is a widely used implementation in the container space. By joining CNCF, we hope Notary will be more widely adopted and different use cases will emerge.”

Notary joins the following CNCF projects: Kubernetes, Prometheus, OpenTracing, Fluentd, linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, and Jaeger.

Use Case Examples of Notary:

  • Docker uses Notary to implement Docker Content Trust and all of the docker trust subcommands.
  • Quay is using Notary as a library, wrapping it and extending it to suit their needs. For Quay, Notary is flexible rather than single-purpose.
  • CloudFlare’s PAL tool uses Notary for container identity, allowing one to associate metadata such as secrets to running containers in a verifiable manner.
  • LinuxKit is using Notary to distribute its kernels and system packages.

Notable Notary Milestones:

  • 865 GitHub stars, 156 forks
  • 45 contributors
  • 8 maintainers from 3 companies: Docker, CoreOS, and Huawei
  • 2600+ commits, 34 releases


TUF (The Update Framework) is an open source specification written in 2009 by Professor Justin Cappos and developed further by members of Professor Cappos’s Secure Systems Lab at NYU’s Tandon School of Engineering.

TUF is designed to work as part of a larger software distribution framework and provides resilience to key or server compromises. Using a variety of cryptographic keys for content signing and verification, TUF allows security to remain as strong as is practical against a variety of different classes of attacks.
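One source of that resilience is TUF’s threshold rule: metadata for a role is trusted only if enough distinct trusted keys have signed it, so stealing a single key is not enough to forge an update. The sketch below is purely illustrative (signature checking is reduced to a key-ID lookup; real TUF uses public-key cryptography):

```python
def meets_threshold(signing_key_ids, trusted_key_ids, threshold):
    """True if at least `threshold` distinct trusted keys signed the metadata."""
    valid = set(signing_key_ids) & set(trusted_key_ids)
    return len(valid) >= threshold

# A root role that trusts 5 keys and requires any 3 of them to sign:
trusted = {"k1", "k2", "k3", "k4", "k5"}
assert meets_threshold({"k1", "k3", "k5"}, trusted, threshold=3)

# A single stolen key cannot forge trusted metadata:
assert not meets_threshold({"k2"}, trusted, threshold=3)
```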

TUF is used in production by Docker, LEAP, App Container, Flynn, OTAInfo, ATS Solutions, and VMware.

“In addition to focusing on security, one of our primary goals has been to operate securely within the workflow that groups already use on their repositories,” said Professor Cappos. “We have learned a tremendous amount by working with Docker, CoreOS, OCaml, Python, Rust, and automotive vendors to tune TUF to work better in their environments.”

TUF has a variety of use cases beyond containers. For example, several different companies in the automotive industry have integrated a TUF variant called Uptane, with more integrations underway. As a result, Uptane was recently named one of Popular Science’s Top 100 Technologies of the Year. There is also a lot of momentum toward adoption by different programming language software repositories, including standardization by Python (PEPs 458 and 480). TUF has also been security audited by multiple groups.

Notable TUF Milestones:

  • Open source since 2010
  • 517 GitHub stars, 74 forks
  • 27+ contributors from CoreOS, Docker, OCaml, Python, Rust (ATS Solutions) and Tor
  • 2700+ commits

As CNCF hosted projects, Notary and TUF will be part of a neutral community aligned with technical interests. The CNCF will also assist Notary and TUF with marketing and documentation efforts as well as help grow their communities.

“The inclusion of Notary and TUF into the CNCF is an important milestone as it is the first project to address concerns regarding the trusted delivery of content for containerized applications,” said Solomon Hykes, Founder and CTO at Docker and CNCF TOC project sponsor. “Notary is already at the heart of several security initiatives throughout the container ecosystem and with this donation, it will be even more accessible as a building block for broader community collaboration.”

For more on Notary, check out the release blog for Notary and Docker Content Trust, as well as Docker’s Notary doc pages, and read Getting Started with Notary and Understand the Notary service architecture. For more on TUF, check out The Update Framework page and watch Professor Cappos in this video and this conference presentation video.

Stay up to date on all CNCF happenings by signing up for our monthly newsletter.

We’re excited that support for getting and managing TLS certificates via the ACME protocol is coming to the Apache HTTP Server Project (httpd). ACME is the protocol used by Let’s Encrypt, and hopefully other Certificate Authorities in the future. We anticipate this feature will significantly aid the adoption of HTTPS for new and existing websites.

We created Let’s Encrypt in order to make getting and managing TLS certificates as simple as possible. For Let’s Encrypt subscribers, this usually means obtaining an ACME client and executing some simple commands. Ultimately though, we’d like for most Let’s Encrypt subscribers to have ACME clients built in to their server software so that obtaining an additional piece of software is not necessary. The less work people have to do to deploy HTTPS the better!

ACME support being built in to one of the world’s most popular Web servers, Apache httpd, is great because it means that deploying HTTPS will be even easier for millions of websites. It’s a huge step towards delivering the ideal certificate issuance and management experience to as many people as possible.

The Apache httpd ACME module is called mod_md. It’s currently in the development version of httpd and a plan is being formulated to backport it to an httpd 2.4.x stable release. The mod_md code is also available on GitHub.

It’s also worth mentioning that the development version of Apache httpd now includes support for an SSLPolicy directive. Properly configuring TLS has traditionally involved making a large number of complex choices. With the SSLPolicy directive, admins simply select a modern, intermediate, or old TLS configuration, and sensible choices will be made for them.
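As a rough illustration of how little configuration this leaves for the administrator, a managed domain plus the new SSLPolicy directive might look like the fragment below. Directive names follow the mod_md documentation, but since the module is still in httpd’s development branch, details may change:

```apache
# Illustrative only: assumes an httpd build with mod_md and mod_ssl loaded.

# Let mod_md obtain and renew a certificate for these hostnames via ACME:
MDomain example.com www.example.com

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    # No SSLCertificateFile needed: the managed domain supplies one.
</VirtualHost>

# One directive instead of hand-tuned protocol and cipher settings:
SSLPolicy modern
```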

Development of mod_md and the SSLPolicy directive has been funded by Mozilla and carried out primarily by Stefan Eissing of greenbytes. Thank you Mozilla and Stefan!

Let’s Encrypt is currently providing certificates for more than 55 million websites. We look forward to being able to serve even more websites as efforts like this make deploying HTTPS with Let’s Encrypt even easier. If you’re as excited about the potential for a 100% HTTPS Web as we are, please consider getting involved, making a donation, or sponsoring Let’s Encrypt.