Posts

DevOps Expert Joins CNCF to Further Best Practices for Cloud Native Operations

SAN FRANCISCO – December 4, 2017 – The Cloud Native Computing Foundation (CNCF), which sustains and integrates open source technologies like Kubernetes and Prometheus, today announced that JFrog joined the Foundation as a Gold Member. A big proponent of open source and cloud native technologies, JFrog leverages technologies like Kubernetes to help its more than 4,000 customers build and release software in a fast, reliable, and secure manner.

“CNCF is excited to have JFrog on board as a Gold Member, further embracing their commitment to open source and the cloud native community,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “JFrog has been a part of the open source community for some time and has implemented many cloud native technologies. We appreciate JFrog investing engineering time and financial resources into CNCF projects and initiatives.”

With operations in California, Seattle, Israel, India, Spain, and France, the company helps organizations of all sizes improve their software releases. Created by open source developers for the open source community, JFrog’s product and engineering teams are dedicated to OSS technologies and work on cloud native projects. JFrog is a significant contributor to the developer community, offering an open source version of Artifactory, the universal binary repository, and fully sponsored cloud infrastructure and commercial accounts for OSS projects on Bintray, the universal binary distribution platform. With over 2 billion downloads per month on Bintray and 60,000 OSS Artifactory servers, JFrog provides the community with the entire lifecycle for effective binary management.

As it joins CNCF, the company will introduce support for Helm repositories in Artifactory, with the next version release scheduled for December. Consistent with its goal of providing the only universal artifact support, JFrog Artifactory will now enable developers to build with the Kubernetes open source system. JFrog has been using Kubernetes to develop its products and is actively migrating its hosted operations to Kubernetes; the addition of Helm support is the next logical step for JFrog and for the community.

“We know that ‘cloud native’ is more than a buzzword; it’s about better software design and implementation,” said Kit Merker, JFrog VP of Business Development and a supporter of the Kubernetes open source project since his days as Google product manager for Kubernetes. “For us, joining CNCF is more than just supporting the open source community; it also signals that we are committed to bringing real engineering power to these important projects. Our goal is to contribute significantly to Kubernetes and related projects using our practical experience of creating rapid-delivery software systems.”

As a CNCF member, JFrog plans to allocate resources to support documentation and maintenance of CNCF projects, as well as help promote best practices for cloud native operations.


About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of cloud native software stacks including Kubernetes, Fluentd, linkerd, Prometheus, OpenTracing, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary, and TUF. CNCF serves as the neutral home for collaboration and brings together the industry’s top developers, end users and vendors – including the six largest public cloud providers and many of the leading private cloud companies. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit: https://cncf.io/.

About JFrog:

With more than 4,000 customers and over 2 billion downloads per month on its binaries hub, JFrog is the leading universal solution for the management and distribution of software binaries. JFrog’s four products, JFrog Artifactory, the Universal Artifact Repository; JFrog Bintray, the Universal Distribution Platform; JFrog Mission Control, for Universal DevOps Flow Management; and JFrog Xray, Universal Component Analyzer, are used by Dev and DevOps engineers worldwide and are available as open-source, on-premise, and SaaS cloud solutions. Customers include some of the world’s top brands, such as Amazon, Google, Uber, Netflix, Twitter, Cisco, Oracle, Adobe, Salesforce, VMware, and Slack. The company is privately held and operated from California, Seattle, Israel, India, and France. More information can be found at jfrog.com.

###

“Cloud Native Computing Foundation”, “CNCF” and “Kubernetes” are registered trademarks of The Linux Foundation in the United States and other countries. “Certified Kubernetes” and the Certified Kubernetes design are trademarks of The Linux Foundation in the United States and other countries.

Media Contact

Natasha Woods

The Linux Foundation

(415) 312-5289

PR@CNCF.io

OS Summit keynotes

Watch keynotes and technical sessions from OS Summit and ELC Europe here.

If you weren’t able to attend Open Source Summit and Embedded Linux Conference (ELC) Europe last week, don’t worry! We’ve recorded keynote presentations from both events and all the technical sessions from ELC Europe to share with you here.

Check out the on-stage conversation with Linus Torvalds and VMware’s Dirk Hohndel, opening remarks from The Linux Foundation’s Executive Director Jim Zemlin, and a special presentation from 11-year-old CyberShaolin founder Reuben Paul. You can watch these and other ELC and OS Summit keynotes below for insight into open source collaboration, community and technical expertise on containers, cloud computing, embedded Linux, Linux kernel, networking, and much more.

And, you can watch all 55+ technical sessions from Embedded Linux Conference here.

By Fatih Degirmenci, Yolanda Robla Mota, Markos Chandras

The OPNFV Community will soon issue its fifth release, OPNFV Euphrates. Over the past four releases, the community has introduced different components from upstream projects, integrated them to compose different flavors of the stack, and put them through extensive testing to help establish a reference platform for Network Functions Virtualization (NFV). While doing this work, the OPNFV community strictly followed its founding principle: Upstream First. Bugs found or features identified as missing are implemented directly into upstream code; OPNFV has carried very little in its own source code repositories, reflecting the project’s true upstream nature. This was achieved by the use of stable release components from the upstream communities. In addition to the technical aspects of the work, OPNFV established good relationships with these upstream communities, such as OpenStack, OpenDaylight, FD.io, and others.

Building on previous experience integrating and testing different components of the stack, Euphrates brings applied learnings in Continuous Delivery (CD) and DevOps principles and practices into the fray via the Cross Community Continuous Integration (XCI) initiative. Read below for a quick summary of what it is, where we are now, what we are releasing as part of Euphrates, and a sneak peek into the future.

Upstream Development Model
The current development and release model employed by OPNFV provides value to the OPNFV community itself and the upstream communities it works with, but it is limited by its dependence on stable versions of upstream components. This essentially limits the speed at which new development and bugfixes can be contributed to upstream projects. It also loses the essence of CI (finding issues and providing fast, tailored feedback) and means that developers who contribute to upstream projects might not see results for several months, after everyone has moved on to the next item on their roadmap. The notion of constantly playing “catch up” with upstream projects is not sustainable.

In order for OPNFV to achieve true CI, we need to ensure that upstream communities implement a CD approach. One way to make this happen is to enable patch-level testing and consumption of components from the master branches of upstream projects, allowing for more timely feedback when it matters most. The XCI initiative establishes new feedback loops across communities and, with supporting tooling, makes it possible to:

  • shorten the time it takes to introduce new features
  • make it easier to identify and fix bugs
  • ease the effort to develop, integrate, and test the reference platform
  • establish additional feedback loops within OPNFV, towards the users and between the communities OPNFV works with
  • provide additional testing from a production-like environment
  • increase real-time visibility

Apart from providing feedback to upstream communities, we strive to frequently provide working software to our users, allowing them to be part of the feedback loop. This ensures that while OPNFV pushes upstream communities to CD, the platform itself also moves in the same direction.

Helping Developers Develop by Supporting Source-Based Deployments
One of the most important aspects of XCI is ensuring that developers do what they do best: develop. XCI achieves this by supporting source-based deployments. This means that developers can patch the source on their workstations and get their patch deployed quickly, cutting the feedback time from months to hours (or even minutes). The approach XCI employs to enable source-based deployments ensures that nothing comes between developers and the source code; developers can even override whatever XCI provides to make the deployment fit their needs. Users benefit as well, since they can adjust what they get from XCI to further fit their needs. This is also important for patch-level testing and feedback.

Choice
What we have summarized so far are firsts for OPNFV, and perhaps firsts for the entire open source ecosystem: bringing multiple open source components together from master. But XCI provides a few other firsts as part of the Euphrates release, such as:

  • multiple deployment flavors, ranging from all-in-one to full-blown HA deployments
  • multi-distro support: Ubuntu, CentOS, and openSUSE
  • extended CI pipelines for all projects that choose to take part in XCI

This is another focus area of XCI: giving choice. We believe that if we offer choices to developers and users, they will leverage these options to invent new things or use them in new and different ways. XCI empowers the community by removing barriers and constraints and providing freedom of choice.

XCI utilizes tools such as Bifrost and OpenStack-Ansible directly from upstream; what XCI does is use these tools in a way that enables CI.

Join the Party
Are we done yet? Of course not. We are working on bringing even more components together and are reaching out to additional communities, such as ONAP and Kubernetes.

If you would like to be part of this, check the documentation and try using the XCI Sandbox to bring up a mini OPNFV cluster on your laptop, as sketched below. You can find XCI developers on the #opnfv-pharos channel on Freenode; while you are there, join us to make things even better.
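
As a rough, hedged sketch, trying the sandbox at the time of writing looked something like the following; the repository URL, flavor names, and deploy script are taken from the XCI documentation of the time and may have changed since, so treat this as illustrative and check the documentation first:

```bash
# Clone the XCI tooling and deploy the smallest flavor of the stack.
git clone https://gerrit.opnfv.org/gerrit/releng-xci.git
cd releng-xci/xci
export XCI_FLAVOR=mini    # other flavors include aio, noha, and ha
./xci-deploy.sh
```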

Finally, we would like to thank everyone who has participated in the development of XCI, reviewed our patches, listened to our ideas, provided hardware resources, motivated us in different ways, and, most importantly, encouraged us. What we have now is just the beginning, and we are on our way to changing things.

Heading to Open Source Summit Europe? Don’t miss Fatih’s presentation, “Bringing Open Source Communities Together: Cross-Community CI,” Monday, October 23, 14:20 – 15:00.

Learn more about XCI by reading the Solutions Brief, watching the video, or signing up for the XCI-based webinar on November 29th.

This article originally appeared on the OPNFV website.

 

All Things Open

Join The Linux Foundation at All Things Open; check out conference highlights below. (Image: All Things Open)

Going to All Things Open in Raleigh? While you’re there, be sure to stop by The Linux Foundation training booth for fun giveaways and a chance to win one of two Raspberry Pi kits. Two winners will be chosen onsite on the last day of the conference, Oct. 24, at 3:05pm.

Other booth giveaways include The Linux Foundation branded webcam covers, The Linux Foundation projects’ stickers, Tux stickers, Linux.com stickers, as well as free ebooks: The SysAdmin’s Essential Guide to Linux Workstation Security, Practical GPL Compliance, A Guide to Understanding OPNFV & NFV, and the Open Source Guide Volume 1.

Be sure to check out these featured conference talks, including the Linux on the Mainframe session, where John Mertic and Len Santalucia discuss how they’ve worked to create an open source technical community where industry participants can collaborate on the use of Linux and open source in a mainframe computing environment. And don’t miss ODPi’s session on simplifying and standardizing the Big Data ecosystem with common reference specifications and test suites.

Session Highlights

  • Accelerating Big Data Implementations For the Connected World – John Mertic
  • Advancing the Next-Generation Open Networking Stack – Phil Robb
  • Flatpak: The Portable, Secure Distribution of Desktop Applications – Owen Taylor
  • Intel: Core Linux Enabling Case Study and Demo
  • Integrating Linux Systems With Active Directory Using Open Source Tools – Dmitri Pal
  • Linux On the Mainframe: Linux Foundation and The Open Mainframe Project – John Mertic & Len Santalucia
  • Polyglot System Administration AKA: Don’t Fear the Other Language – Jakob Lorberblatt
  • The Next Evolution of The Javascript Ecosystem – Kris Borchers
  • The Revolution Will Not Be Distributed – Michael Hall
  • You Think You’re Not A Target? A Tale Of Three Developers – Chris Lamb

ODPi and Open Mainframe will also have a booth at All Things Open. Get your pass to All Things Open and stop by to learn more!

 

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file that’s generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, we have created a file called Dockerfile, in which we have used different instructions, like FROM, RUN, EXPOSE, and CMD.
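
As a rough sketch, creating the folder and such a Dockerfile might look like the following; the exact base image and packages used in the course may differ:

```bash
mkdir nginx && cd nginx

# Dockerfile comments must start at the beginning of a line.
cat > Dockerfile <<'EOF'
# Base image to build upon
FROM ubuntu:16.04
# Install nginx inside the image
RUN apt-get update && apt-get install -y nginx
# Document the port the service listens on
EXPOSE 80
# Run nginx in the foreground so the container keeps running
CMD ["nginx", "-g", "daemon off;"]
EOF
```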

To build an image, we’ll need to use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we ask Docker to look in the current folder for the Dockerfile and then build the image.
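
For example, with “nginx-image” standing in as an illustrative name:

```bash
# Build an image from the Dockerfile in the current folder (".")
# and tag it with -t.
docker build -t nginx-image .

# Quick check: run a container from the new image, mapping
# container port 80 to port 8080 on the host.
docker run -d -p 8080:80 --name web nginx-image
```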

On Docker Hub, we also see the repositories — for example, for nginx, redis, and busybox. A given repository can have different tags, which define individual images. On the repository page, we can also see the respective Dockerfile from which an image is created — for example, you can see the Dockerfile of the nginx image.

If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.

To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to a Docker registry, which by default is Docker Hub.
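
A minimal sketch, with “myuser” standing in for your actual Docker Hub username:

```bash
# Log in to Docker Hub (prompts for username and password).
docker login

# Prefix the image name with the Docker Hub username, then push;
# by default the push goes to Docker Hub.
docker image tag nginx-image myuser/nginx-image:latest
docker image push myuser/nginx-image:latest
```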

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, which contains the Dockerfile from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.

To create the automated build, we first need to log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and then click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process starts on our Docker Hub account.

Our image build is currently queued; it will be scheduled eventually, and our image will be created. After that, anybody will be able to download the image.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In this series, we’re sharing a preview of the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we looked at installing Docker and setting up your environment, and we introduced Docker Machine. Now we’ll take a look at some basic commands for performing Docker container and image operations. Watch the videos below for more details.

To do container operations, we’ll first connect to our “dockerhost” with Docker Machine. Once connected, we can start the container in the interactive mode and explore processes inside the container.

For example, the “docker container ls” command lists the running containers. With the “docker container inspect” command, we can inspect an individual container. Or, with the “docker container exec” command, we can start a new process inside an already running container and perform some operations. We can use the “docker container stop” command to stop a container and then remove a stopped container with the “docker container rm” command.
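
Put together, a minimal session might look like the sketch below; alpine serves as a small example image here, and the course may use a different one:

```bash
# Point the local Docker client at the "dockerhost" created with Docker Machine.
eval $(docker-machine env dockerhost)

# Start a container in interactive mode (run the commands that
# follow from another terminal).
docker container run -it --name demo alpine sh

# List running containers, inspect one, and run an extra process inside it.
docker container ls
docker container inspect demo
docker container exec demo ps aux

# Stop the container, then remove it once stopped.
docker container stop demo
docker container rm demo
```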

To do Docker image operations, again, we first make sure we are connected to our “dockerhost” with Docker Machine, so that all the Docker commands are executed on the “dockerhost” running on the DigitalOcean cloud.

The basic commands you need here are similar to those above. With the “docker image ls” command, we can list the images available on our “dockerhost”. Using the “docker image pull” command, we can pull an image from the Docker registry. And, we can remove an image from the “dockerhost” using the “docker image rm” command.
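
A minimal sketch, using busybox as an example image:

```bash
# Make sure the commands run against the "dockerhost".
eval $(docker-machine env dockerhost)

# List the images available on the dockerhost.
docker image ls

# Pull an image from the registry, then remove it when no longer needed.
docker image pull busybox
docker image rm busybox
```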

Want to learn more? Access all the free sample chapter videos now! 

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Containers are becoming the de facto approach for deploying applications, because they are easy to use and cost-effective. With containers, you can significantly cut down the time to go to market if the entire team responsible for the application lifecycle is involved — whether they are developers, Quality Assurance engineers, or Ops engineers.

The new Containers for Developers and Quality Assurance (LFS254) self-paced course from The Linux Foundation is designed for developers and Quality Assurance engineers who are interested in learning the workflow of an application with Docker. In this self-paced course, we will quickly review some Docker basics, including installation, and then, with the help of a sample application, we will walk through the lifecycle of that application with Docker.

The online course is presented almost entirely on video and some of the topics covered in this course preview include:

  • Overview and Installation

  • Docker Machine

  • Docker Container and Image Operations

  • Dockerfiles and Docker Hub

  • Docker Volumes and Networking

  • Docker Compose

Access a free sample chapter

In the course, we focus on creating an end-to-end workflow for our application — from development to production. We’ll use Docker as our primary container environment and Jenkins as our primary CI/CD tool. All of the Docker hosts used in this course will be deployed on the cloud (DigitalOcean).

Install Docker

You’ll need to have Docker installed in order to work along with the course materials. All of Docker’s free products come under the Docker Community Edition. They’re offered in two variants: edge and stable. All of the enterprise and production-ready products come under the Docker Enterprise Edition umbrella.

You can download all of the Docker products from the Docker Store. For this course, we will be using the Community Edition, so click on “GET DOCKER CE” to proceed. If you select “Linux” in the “Operating Systems” section, you’ll see that Docker is available on all the major Linux distributions, like CentOS, Ubuntu, Fedora, and so on. It’s also available for Mac and Windows.
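
As one illustrative route (the course itself may install Docker differently), Docker’s convenience script sets up the Community Edition on most major Linux distributions:

```bash
# Download and run Docker's convenience install script
# (always inspect a script before running it).
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Verify the installation.
docker version
```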

This preview series is intended to give you a sample of the course format and quality of the content, which is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos to learn more:  

Want to learn more? Access all the free sample chapter videos now!

This article series previews the new Containers Fundamentals training course from The Linux Foundation, which is designed for those who are new to container technologies. In previous excerpts, we talked about what containers are and what they’re not and explained a little of their history. In this last post of the series, we will look at the building blocks for containers, specifically, namespaces, control groups, and UnionFS.

Namespaces are a feature of the Linux kernel that isolates and virtualizes system resources for a process, so that each process gets its own resources, like its own IP address, hostname, etc. The system resources that can be virtualized are: mount [mnt], process ID [PID], network [net], Interprocess Communication [IPC], hostname [UTS], and users [User IDs].
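
You can see namespaces in action without any container runtime at all, for example with the unshare utility from util-linux; this is an illustration, not part of the course material:

```bash
# Start a shell in new UTS and PID namespaces (requires root).
sudo unshare --uts --pid --fork /bin/bash

# Inside the new namespaces, change the hostname; the host is unaffected.
hostname container-demo
hostname    # prints "container-demo"
exit

# Back on the host, the hostname is unchanged; lsns lists the
# namespaces currently in use on the system.
hostname
lsns
```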

Using the namespace feature of the Linux kernel, we can isolate one process from another. The container is nothing but a process for the kernel, so we isolate each container using different namespaces.

Another important feature that enables containerization is control groups. With control groups, we can limit, account for, and isolate resource usage, such as CPU, memory, disk I/O, and network. And, with UnionFS, we can transparently overlay two or more directories and implement a layered approach for containers.
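
Both building blocks can be observed directly. In the hedged sketch below, the first command uses Docker flags that are enforced through control groups, and the rest creates a union mount with the kernel’s overlay filesystem; the paths and limits are arbitrary examples:

```bash
# Control groups via Docker: cap the container at 256 MB of RAM
# and half a CPU core.
docker run -it --memory 256m --cpus 0.5 busybox sh

# UnionFS-style layering: overlay an upper directory on top of a
# lower one at the "merged" mount point.
mkdir /tmp/lower /tmp/upper /tmp/work /tmp/merged
echo "from lower" > /tmp/lower/file.txt
sudo mount -t overlay overlay \
    -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged
cat /tmp/merged/file.txt                  # reads through from the lower layer
echo "changed" | sudo tee /tmp/merged/file.txt
cat /tmp/upper/file.txt                   # the write landed in the upper layer
sudo umount /tmp/merged
```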

You can get more details in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In previous excerpts of the new, self-paced Containers Fundamentals course from The Linux Foundation, we discussed what containers are and are not. Here, we’ll take a brief look at the history of containers, which includes chroot, FreeBSD jails, Solaris zones, and systemd-nspawn. 

Chroot was first introduced in 1979, during development of Seventh Edition Unix (also called Version 7), and was added to BSD in 1982. In 2000, FreeBSD extended chroot to FreeBSD Jails. Then, in the early 2000s, Solaris introduced the concept of zones, which virtualized the operating system services.

With chroot, you can change the apparent root directory for the currently running process and its children. After configuring chroot, subsequent commands will run with respect to the new root (/). With chroot, we can confine processes only at the filesystem level; they still share other resources, like users, hostname, and IP address. FreeBSD Jails extended the chroot model by virtualizing users, the network sub-system, and more.
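
A minimal sketch of chroot in action, assuming a statically linked busybox binary is available (for example, from the busybox-static package):

```bash
# Build a tiny root filesystem around the busybox binary.
mkdir -p /tmp/newroot/bin
cp /bin/busybox /tmp/newroot/bin/
for cmd in sh ls hostname; do ln -s busybox /tmp/newroot/bin/$cmd; done

# Change the apparent root for the new shell; only the files we
# copied in are visible under the new "/".
sudo chroot /tmp/newroot /bin/sh
ls /
exit
```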

systemd-nspawn has not been around as long as chroot and Jails, but it can be used to create containers, which would be managed by systemd. On modern Linux operating systems, systemd is used as an init system to bootstrap the user space and manage all the processes subsequently.
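
As a hedged sketch, you could create the directory tree with debootstrap and then start a container from it; without the -b (boot) option, systemd-nspawn simply runs a shell inside the new root:

```bash
# Create a minimal Debian root filesystem, then enter it as a container.
sudo debootstrap stable /var/lib/machines/demo
sudo systemd-nspawn -D /var/lib/machines/demo
```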

This training course, presented mainly in video format, is aimed at those who are new to containers and covers the basics of container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more.

You can learn more in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

This series provides a preview of the new, self-paced Containers Fundamentals course from The Linux Foundation, which is designed for those who are new to container technologies. The course covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In the first excerpt, we defined what containers are, and in this installment, we’ll explain a bit further. You can also sign up to access all the free sample chapter videos now.

Note that containers are not lightweight VMs. Both technologies provide isolation and run applications, but the underlying mechanisms are completely different, as is the process of managing them.

VMs are created on top of a hypervisor, which is installed on the host operating system. Containers run directly on the host operating system, without any guest OS of their own. The host operating system provides isolation and allocates resources to the individual containers.

Once you become familiar with containers and would like to deploy them in production, you might ask, “Where should I deploy my containers — on VMs, bare metal, in the cloud?” From the container’s perspective, it does not matter, as it can run anywhere. But in reality, many variables affect the decision, such as cost, performance, security, current skill set, and so on.

Find out more in these sample course videos below, taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!