
Riyaz Faizullabhoy, Docker Security Engineer, today announced on stage at Open Source Summit Europe that the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) has voted Notary in as our 13th hosted project and TUF in as our 14th hosted project.

“With every project presented to the CNCF, the TOC evaluates what that project provides to the cloud native ecosystem,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “Notary and the TUF specification address a key challenge for enterprises working with containers by providing a solution for trusted, cross-platform delivery of content. We are excited to have these projects come in as one collective contribution to CNCF and look forward to cultivating their communities.”

Notary: Based on The Update Framework (TUF) Specification

The Docker Platform (including Enterprise Edition and Community Edition), the Moby Project, Huawei, Motorola Solutions, VMware, LinuxKit, Quay, and Kubernetes have all integrated Notary/TUF.

Originally created by Docker in June 2015, Notary is based on The Update Framework (TUF) specification, a secure general design for the problem of software distribution and updates. TUF helps developers to secure new or existing software update systems, which are often found to be vulnerable to many known attacks. TUF addresses this widespread problem by providing a comprehensive, flexible security framework that developers can integrate with any software update system.

Notary is one of the industry's most mature implementations of the TUF specification, and its Go implementation is used today to provide robust security for container image updates, even in the face of a registry compromise. Notary takes care of the operations necessary to create, manage, and distribute the metadata needed to ensure the integrity and freshness of user content. Notary/TUF provides both a client and a pair of server applications to host signed metadata and perform limited online signing functions.

Image 1: Diagram illustrating the interactions between the Notary client, server, and signer

It is also beginning to gain traction outside the container ecosystem as platforms like Kolide use Notary to secure distribution of osquery through their auto-updater.

“In a developer’s workflow, security can often be an afterthought; however, every piece of deployed code from the OS to the application should be signed. Notary establishes strong trust guarantees to prevent malicious content from being injected into the workflow processes,” said David Lawrence, Senior Software Engineer at Docker. “Notary is a widely used implementation in the container space. By joining CNCF, we hope Notary will be more widely adopted and different use cases will emerge.”

Notary joins the following CNCF projects: Kubernetes, Prometheus, OpenTracing, Fluentd, linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, and Jaeger.

Use Case Examples of Notary:

  • Docker uses Notary to implement Docker Content Trust and all of the docker trust subcommands (see the example after this list).
  • Quay is using Notary as a library, wrapping it and extending it to suit their needs. For Quay, Notary is flexible rather than single-purpose.
  • CloudFlare’s PAL tool uses Notary for container identity, allowing one to associate metadata such as secrets to running containers in a verifiable manner.
  • LinuxKit is using Notary to distribute its kernels and system packages.
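
As a rough illustration of the Docker integration, Docker Content Trust (which is backed by Notary) is switched on per shell session through an environment variable. The image name below is a placeholder and this is only a minimal sketch, not part of the announcement itself:

    # Enable Docker Content Trust (backed by Notary) for this shell session
    export DOCKER_CONTENT_TRUST=1

    # Pushing now signs the image's metadata; pulling verifies it against the
    # signed TUF metadata served by the Notary server
    docker push myorg/myimage:1.0        # hypothetical image name
    docker pull myorg/myimage:1.0

    # Inspect the signatures for an image (one of the docker trust subcommands)
    docker trust inspect --pretty myorg/myimage:1.0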

Notable Notary Milestones:

  • 865 GitHub stars, 156 forks
  • 45 contributors
  • 8 maintainers from 3 companies: Docker, CoreOS, and Huawei
  • 2600+ commits, 34 releases

TUF

TUF (The Update Framework) is an open source specification that was written in 2009 by Professor Justin Cappos and developed further by members of Professor Cappos's Secure Systems Lab at NYU's Tandon School of Engineering.

TUF is designed to work as part of a larger software distribution framework and provides resilience to key or server compromises. Using a variety of cryptographic keys for content signing and verification, TUF allows security to remain as strong as is practical against a variety of different classes of attacks.

TUF is used in production by Docker, LEAP, App Container, Flynn, OTAInfo, ATS Solutions, and VMware.

“In addition to focusing on security, one of our primary goals has been to operate securely within the workflow that groups already use on their repositories,” said Professor Cappos. “We have learned a tremendous amount by working with Docker, CoreOS, OCaml, Python, Rust, and automotive vendors to tune TUF to work better in their environments.”

TUF has a variety of use cases beyond containers. For example, several different companies in the automotive industry have integrated a TUF-variant called Uptane, with more integrations underway. As a result, Uptane was recently named one of Popular Science’s Top 100 Technologies of the Year. There is also a lot of momentum toward adoption by different programming language software repositories, including standardization by Python (PEP 458 and 480). TUF has also been security audited by multiple groups.

Notable TUF Milestones:

  • Open source since 2010
  • 517 GitHub stars, 74 forks
  • 27+ contributors from CoreOS, Docker, OCaml, Python, Rust (ATS Solutions) and Tor
  • 2700+ commits

As CNCF hosted projects, Notary and TUF will be part of a neutral community aligned with technical interests. The CNCF will also assist Notary and TUF with marketing and documentation efforts as well as help grow their communities.

“The inclusion of Notary and TUF into the CNCF is an important milestone as it is the first project to address concerns regarding the trusted delivery of content for containerized applications,” said Solomon Hykes, Founder and CTO at Docker and CNCF TOC project sponsor. “Notary is already at the heart of several security initiatives throughout the container ecosystem and with this donation, it will be even more accessible as a building block for broader community collaboration.”

For more on Notary, check out the release blog for Notary and Docker Content Trust, as well as Docker's Notary doc pages, and read Getting Started with Notary and Understand the Notary service architecture. For more on TUF, check out The Update Framework page and watch Professor Cappos in this video and this conference presentation video.

Stay up to date on all CNCF happenings by signing up for our monthly newsletter.

 

Open Source Summit livestream

The Linux Foundation is pleased to offer free live video streaming of all keynote sessions at Open Source Summit and Embedded Linux Conference Europe, Oct. 23 to Oct. 25, 2017.

Join 2,000 technologists and community members next week as they convene at Open Source Summit Europe and Embedded Linux Conference Europe in Prague. If you can't be there in person, you can still take part: the free livestream of all keynote sessions runs Monday, Oct. 23 through Wednesday, Oct. 25, 2017, so you can watch the event keynotes presented by Google, Intel, and VMware, among others.

The livestream will begin on Monday, Oct. 23 at 9 a.m. CEST (Central European Summer Time). Sign up now! You can also follow our live event updates on Twitter with #OSSummit.

All keynotes will be broadcast live, including talks by Keila Banks, 15-year-old Programmer, Web Designer, and Technologist, with her father Philip Banks; Mitchell Hashimoto, Founder of HashiCorp and Creator of Vagrant, Packer, Serf, Consul, Terraform, Vault, and Nomad; Jan Kiszka, Senior Key Expert, Siemens AG; Dirk Hohndel, VP & Chief Open Source Officer, VMware, in a conversation with Linux and Git creator Linus Torvalds; Michael Dolan, Vice President of Strategic Programs, The Linux Foundation; and Jono Bacon, Community/Developer Strategy Consultant and Author.

Other featured conference keynotes include:

  • Neha Narkhede — Co-Founder & CTO of Confluent, who will discuss Apache Kafka and the Rise of the Streaming Platform
  • Reuben Paul — 11-year-old Hacker, CyberShaolin Founder, and cybersecurity ambassador, who will talk about how Hacking is Child's Play
  • Arpit Joshipura — General Manager, Networking, The Linux Foundation, who will discuss Open Source Networking and a Vision of Fully Automated Networks
  • Imad Sousou — Vice President and General Manager, Software & Services Group, Intel
  • Sarah Novotny — Head of Open Source Strategy for GCP, Google
  • And more

View the full schedule of keynotes.

And sign up now for the free live video stream.

Once you sign up to watch the event keynotes, you'll be able to view the livestream on the same page. If you sign up prior to the livestream day/time, simply return to that page once the stream begins and you'll be able to watch.

Open Source Summit EU

Going to Open Source Summit? Check out some featured conference presentations and activities below.

Going to Open Source Summit EU in Prague? While you're there, be sure to stop by The Linux Foundation training booth for fun giveaways and a chance to win one of three Raspberry Pi kits.

Giveaways include The Linux Foundation branded webcam covers, The Linux Foundation projects’ stickers, Tux stickers, Linux.com stickers, as well as free ebooks: The SysAdmin’s Essential Guide to Linux Workstation Security, Practical GPL Compliance, and A Guide to Understanding OPNFV & NFV.

You can also enter the raffle for a chance to win a Raspberry Pi Kit. There will be 3 raffle winners: names will be drawn and prizes will be mailed on Nov. 2.

And, be sure to check out some featured conference presentations below, including how to deploy Kubernetes native applications, deploying and scaling microservices, opportunities for inclusion and collaboration, and how to build your open source career.

Session Highlights

  • Love What You Do, Everyday! – Zaheda Bhorat, Amazon Web Services
  • Detecting Performance Regressions In The Linux Kernel – Jan Kara, SUSE
  • Highway to Helm: Deploying Kubernetes Native Applications – Michelle Noorali, Microsoft
  • Deploying and Scaling Microservices with Docker and Kubernetes – Jérôme Petazzoni, Docker
  • printk() – The Most Useful Tool is Now Showing its Age – Steven Rostedt, VMware
  • Every Day Opportunities for Inclusion and Collaboration – Nithya Ruff, Comcast

Activities

  • Technical Showcase
  • Real-Time Summit
  • Free Day with Prague tour from local students
  • KVM Forum
  • FOSSology – Hands On Training
  • Tracing Summit

The Cloud Native Computing Foundation will also have a booth at OSSEU. Get your pass to Open Source Summit Europe and stop by to learn more! Use discount code OSSEULFM20 for 20% off your all-access attendee pass.

Check out the full list of co-located events on the website and register now.

This week in Linux and open source headlines, ONAP leads the way in the automation trend, Mozilla launches a new open source speech recognition project, and more! Get up to speed with the handy Linux.com weekly digest!

1) With automation being one of the top virtualization trends of 2017, The Linux Foundation’s ONAP is credited with moving the industry forward

Top Five Virtualization Trends of 2017 – RCRWireless

2) Mozilla has launched a new open source speech recognition project that relies on online volunteers to submit voice samples and validate them.

Common Voice: Mozilla Is Creating An Open Source Speech Recognition System – Fossbytes

3) In addition to membership growth, EdgeX Foundry has launched a series of technical training sessions to help developers get up to speed on the project.

Linux's EdgeX IoT Group Adds Members, Forms Governing Team – SDxCentral

4) The Multicore Association announces the availability of an enhanced implementation of its Multicore Task Management API (MTAPI).

Open Source Tools Set to Help Parallel Programming of Multicores – ElectronicsWeekly.com

5) “OCI 1.0 will ensure consistency at the lowest levels of infrastructure, and push the container wars battlefront up the stack.”

OCI 1.0 Container Image Spec Finds Common Ground Among Open Source Foes – TechTarget

This week in Linux and open source, the ‘Big 4’ accounting firms are becoming power players in blockchain, Oracle expands open source container efforts, and more in this weekly digest!

1) The four largest accounting firms in the world are active members of the blockchain revolution, including Deloitte, which has joined the Hyperledger Project.

‘Big 4’ Accounting Firms Are Experimenting With Blockchain And Bitcoin – Nasdaq

2) Oracle to expand container efforts with three new open-source utilities to help improve container security.

Oracle Debuts Three New Open-Source Container Tools – eWeek

3) Hyperledger's Indy “is all about giving identity owners independent control of their personal data and relationships,” explains Doc Searls in his op-ed about the availability of Linux for all users.

Linux for Everyone–All 7.5 Billion of Us – LinuxJournal

4) Regarding commits: “probably, it’s the second biggest kernel release.”

Linux Kernel 4.12 Released — These Are The 5 Biggest Features – Fossbytes

5) WatchGuard CTO Corey Nachreiner explains that Linux attacks and malware are on the rise.

IoT Fuels Growth of Linux Malware – IoTInside

In this preview of the Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we've covered Docker installation, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Volumes.

This final article in the series looks at Docker Compose, which is a tool you can use to create multi-container applications with just one command. If you are using Docker for Mac or Windows, or you have installed the Docker Toolbox, then Docker Compose is available by default. If not, you can download it manually.

To try out WordPress, for example, let's create a folder called wordpress and, in that folder, create a file called docker-compose.yaml. We will be exposing the wordpress container on port 8000 of the host system.
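
The course does not reproduce the file's contents here, but a minimal docker-compose.yaml along the following lines would publish WordPress on port 8000 of the host. The service names, image tags, and passwords are illustrative assumptions, not the course's exact file:

    mkdir wordpress && cd wordpress

    # Write a minimal Compose file (illustrative values only)
    cat > docker-compose.yaml <<'EOF'
    version: "3"
    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example      # demo-only credentials
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress
      wordpress:
        image: wordpress:latest
        depends_on:
          - db
        ports:
          - "8000:80"                       # host port 8000 -> container port 80
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: wordpress
    EOF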

When we start an application with Docker Compose, it creates a user-defined network to which it attaches the application's containers, and the containers communicate over that network. Because we have configured Docker Machine to connect to our dockerhost, Docker Compose will use that connection as well.

Now, with the docker-compose up command, we can deploy the application. With the docker-compose ps command, we can list the containers created by Docker Compose, and with docker-compose down, we can stop and remove the containers. This also removes the network associated with the application. To additionally delete the associated volumes, we need to pass the -v option to the docker-compose down command.
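
Assuming the docker-compose.yaml sketched above, the whole lifecycle looks roughly like this:

    docker-compose up -d      # create the network and start the containers in the background
    docker-compose ps         # list the containers managed by this Compose project
    docker-compose down       # stop and remove the containers and the application's network
    docker-compose down -v    # same, but also remove the associated volumes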

 Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we'll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command, and to list the volumes, we use the docker volume ls command.

To mount the volume inside a container, we need to use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location. After moving into the /data folder, we create two files there.

Next, we come out of the container and create a new container from the busybox image, but mount the same myvol volume. The files that we created in the earlier container are available under /data. This way, we can share content between containers using volumes. You can watch both of the videos below for details.
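
Put together, the volume workflow described above looks roughly like this; the file names are illustrative, and the course performs the steps interactively rather than as one-shot commands:

    docker volume create myvol      # create a named volume
    docker volume ls                # list the volumes on the dockerhost

    # Mount myvol at /data in a container and create two files there
    docker container run --rm -v myvol:/data busybox sh -c "touch /data/file1 /data/file2"

    # A second container mounting the same volume sees the same files
    docker container run --rm -v myvol:/data busybox ls /data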

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container's IP address, but that IP address is assigned by the docker0 bridge and is not accessible from the external world.

To access the container from the external world, we need to do port mapping between the host port and the container port. So, with the -p option added to the docker container run command, we can map the host port with the container port. For example, we can map Port 8080 of the host system with Port 80 of the container.

Once the port is mapped, we can access the container from the external world by connecting to the dockerhost on port 8080.
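
For example, a rough version of that flow, assuming the Docker Machine host is named dockerhost as in the course setup:

    # Run nginx in the background, mapping host port 8080 to container port 80
    docker container run -d --name web -p 8080:80 nginx

    # The bridge IP reported here is only reachable from the dockerhost itself
    docker container inspect --format '{{ .NetworkSettings.IPAddress }}' web

    # From outside, reach the container through the mapped host port instead
    curl "http://$(docker-machine ip dockerhost):8080/"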

 Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file that's generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, create a file called Dockerfile; in it, we use different instructions, like FROM, RUN, EXPOSE, and CMD.

To build an image, we'll need to use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we are asking Docker to look in the current folder to find the Dockerfile and then build the image.
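
The course's exact Dockerfile isn't reproduced here, but a minimal one for the nginx example might look like the following, together with the build command. The base image and package choices are assumptions for illustration:

    mkdir nginx && cd nginx

    # A minimal Dockerfile using the instructions mentioned above
    cat > Dockerfile <<'EOF'
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y nginx
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]
    EOF

    # -t names the image; "." tells Docker to look for the Dockerfile in the current folder
    docker build -t mynginx:1.0 .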

On Docker Hub, we also see the repositories — for example, for nginx, redis, and busybox. For a given repository, you can have different tags, which define the individual images. On the repository page, we can also see the corresponding Dockerfile from which an image is created — for example, you can see the Dockerfile of the nginx image.

If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.

To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to the Docker Registry, which, by default, is Docker Hub.
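
Assuming the mynginx:1.0 image built earlier and a hypothetical Docker Hub username, shown as the stand-in <username>, the push flow looks roughly like this:

    # Log in to Docker Hub
    docker login

    # Prefix the image name with your Docker Hub username
    docker image tag mynginx:1.0 <username>/mynginx:1.0

    # Push the image; by default this goes to Docker Hub
    docker image push <username>/mynginx:1.0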

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, which contains a Dockerfile from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.

To create the automated build, we need to first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and then click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process starts on our Docker Hub account.

Our image build is currently queued; it will be scheduled eventually, and our image will be created. After that, anybody will be able to download the image.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!