By Chris Donley, Sr Director, Open Source Ecosystems, Huawei; Chair, OPNFV Certification & Compliance Committee

As we kick off 2018, the OPNFV Compliance & Certification Committee—the member-driven body within OPNFV that defines recommendations to the Board on policies and oversight for compliance and certification—is pleased to announce the launch of the OPNFV Verified Program (OVP). The program is designed to simplify adoption of NFV in commercial products by establishing an industry threshold based on OPNFV releases. Using an open source platform as the reference for measuring the compliance of commercial products—products not necessarily based on its source code—is a new and innovative step for the industry.

The OPNFV Verified Program facilitates both vendor self-testing and third-party lab testing using the Dovetail test suite. In our initial version, we will be testing NFV infrastructure components: NFVI and VIM. In the future, we may expand the program to cover VNFs and other components as well. In December, just ahead of the launch, we conducted a “beta program” with several vendors: Huawei, Nokia, Wind River, and ZTE. These companies provided valuable feedback while we refined and finalized the program. They also represent the first cohort to receive the privilege of using the OPNFV Verified mark and logo. Congratulations to these companies, and we welcome additional members of the open NFV ecosystem to join us!

The OPNFV Verified Program is designed to help operators establish entry criteria for their trials and RFPs. We have worked closely with our end-user operator advisors to establish a framework and an initial bar to support their requirements. The program will also reduce operator testing load by identifying a set of common tests and executing them just once under the auspices of the OPNFV Verified Program, rather than many times in many labs. As OPNFV and the industry at large continue to mature, we will steadily raise the bar for what qualifies as verified in future versions. We expect two OPNFV Verified versions per year, denoted with the month and year to make it easy to identify the compliance level of submitted products.

Under the auspices of The Linux Foundation, we are well positioned to expand the program to support other projects in the future. Prior to the official launch, we initiated discussions with related projects on leveraging the program to support the wider open source community. OPNFV’s C&C, the group responsible for chartering the OPNFV Verified Program, is also exploring additional operator use cases that can be incorporated into the compliance test suite.

I am excited about the launch of the OPNFV Verified Program and I hope you will join us in 2018! To operators, I invite you to share your use cases and functional requirements, and please consider incorporating OPNFV Verified into your RFP process or lab trials. To vendors, I hope you’ll download the Dovetail tool and test your commercial offerings. If you’re looking for assistance, several third-party labs are eager to help. Learn more about the OPNFV Verified Program and get started today!

Please direct any questions you may have to verified@opnfv.org.

This article originally appeared at OPNFV.

LinuxBoot Enables Server Setup and Boot with a Linux Kernel

The Linux Foundation is pleased to welcome LinuxBoot to our family of open source projects and to support the growth of the project community. LinuxBoot looks to improve system boot performance and reliability by replacing some firmware functionality with a Linux kernel and runtime.

Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to the increasing complexity of both hardware and deployments. Firmware often must set up many components in the system, interface with a growing variety of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features.

LinuxBoot replaces the often slow, error-prone, and opaque firmware code that executes these steps with a Linux kernel. The result is a system that boots in a fraction of the time of a typical system, and with greater reliability.

This matters in data centers providing cloud services. A data center might have tens of thousands of servers, and even a small failure rate adds up to expensive repairs. LinuxBoot enables organizations to improve operational aspects such as debugging and remediation, as well as functional aspects like powering machines on or off rapidly for elastic loads.

The speed and reliability of the boot process can also be a problem in consumer and industrial devices. For IoT, devices in the field may be tough to reach, and a boot failure can render a device useless for the customer and even cause safety issues in critical systems.

The LinuxBoot model brings key advantages for users across the broad spectrum of embedded, mobile, and server platforms. Leveraging the massive scale of development of Linux in the boot process gives the user control and support that can’t be achieved any other way.

The technique of using Linux to boot Linux has been common since the early 2000s in supercomputers, consumer electronics, military applications, and many other systems. The LinuxBoot initiative will further refine it so it can be more easily developed and deployed by a broader range of users, from individuals to data center-scale companies.

Organizations involved in LinuxBoot include Google, Facebook, Horizon Computing Solutions, and Two Sigma. The LinuxBoot community welcomes newcomers and invites people to get involved with the project at any level.

To learn more, visit https://www.linuxboot.org/.

The Linux Foundation currently hosts 9 of the 10 largest open source networking projects — a set of thriving global communities, such as ONAP, OPNFV, OpenDaylight, FD.io, and others, which together form the new networking stack. As a foundation, we believe in harmonization between open source and open standards, with an eye toward supporting a range of emerging, network-dependent initiatives. As such, we are proactively working to bring communities with shared goals together to offer more value to those communities as well as to our members participating in multiple projects.

In the four years since OpenDaylight kicked off the open source networking revolution, innovative groups of developers from a range of backgrounds have developed open source offerings at every layer of the stack. It is now time to provide avenues for greater collaboration between those projects, as well as related projects and communities across the ecosystem. Therefore, we are creating a combined administrative structure, The LF Networking Fund (“LFN”), a platform for cross-project collaboration.

LFN will form the basis of collaboration across the network stack, from the data plane into the control plane, to orchestration, automation, end-to-end testing, and more. With 83 member organizations, it has the participation of:

  • 9 of the top 10 open source networking projects
  • Companies that enable more than 60 percent of global mobile subscribers
  • Most of the top 10 networking and enterprise vendors
  • Top systems integrators
  • Top cloud providers

*As of January 22, 2018. Subject to change.

Integrated governance, technical independence

Participation in LFN is voluntary; each networking project decides for itself whether and when to join. Under this new initiative, each of the projects will continue to operate under existing meritocratic charters, maintaining their technical independence, community affinities, release roadmaps, and web presence, while staff and financial resources are shared across member projects, via a unified governing board.

The six founding projects of LFN are:

  • FD.io
  • ONAP
  • OpenDaylight
  • OPNFV
  • PNDA
  • SNAS

What we can expect to see under this shared governance model is increased community collaboration focused on building a shared technical investment (without risk of fragmentation), while also providing space for inter-project architectural dependencies to flourish (e.g., multi-VIM collaboration, VNF onboarding, etc.). In addition, LFN enhances operational efficiency among existing communities by enabling projects to share development and deployment best practices and resources such as test infrastructure, and to collaborate on everything from architectural integration to industry event participation.

Following the example of the Linux Foundation’s Cloud Native Computing Foundation, LFN will bring similar cohesion to networking communities that in many cases are already working together. Over the past five years, LFN projects have dramatically accelerated networking innovations; together, they will enable data networking advancements at an unprecedented rate for decades to come.

For more information on The LF Networking Fund (“LFN”), please visit our new website, which includes information on governance, membership, the new charter, and more.

Information related to specific LFN projects — including FD.io, OpenDaylight, OPNFV, ONAP, PNDA, and SNAS — remains available on each project’s individual website.

Join us at the largest open networking & orchestration event of 2018

We also invite you to join the open networking community at Open Networking Summit North America, March 26-29 in Los Angeles, where we will highlight the collaboration and innovation from LFN’s technical projects that is breaking new ground for end users on their journey toward adoption and deployment of open source networking. ONS will also feature the LFN Developer Forum, a 1.5-day developer-focused forum that takes place prior to the ONS conference. There will be a cross-project plenary and a mix of presentation sessions, with opportunities for breakout meetings and hacking in several rooms. Tracks are being programmed through the LFN project technical communities.

LFN members receive an additional 20% discount off current registration pricing. Please email events@linuxfoundation.org to receive your discount code.

For more information, join Arpit Joshipura, General Manager, Networking & Orchestration, at The Linux Foundation in a free webinar, “Open Source Networking: Harmonization 2.0,” Tuesday, Feb. 13, 10:00 a.m. Pacific.

While some companies are looking at blockchain’s future impact, the technology is changing our world right now.

Influencers from around the world will gather for the World Economic Forum in Davos, Switzerland next week, where leaders are encouraged “to develop a shared perspective on political, economic, and social topics to embrace positive change globally.” Talks will explore free and open source tools and practices as well as the underlying technologies, and one of the hotly debated subjects will certainly be blockchain.

Blockchain technology, which encompasses smart contracts and distributed ledgers, can be used to record promises, trades, and transactions. It allows everyone in an ecosystem to keep a copy of the common system of record, and nothing can ever be erased or edited. When transactions are processed in blocks according to the ordering of a blockchain, the result is a distributed ledger.
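
To make the chaining idea concrete, here is a minimal, illustrative Python sketch – not Hyperledger code, just invented names under simple assumptions – showing how each block commits to its predecessor through a hash, which is why a shared copy of the record is effectively append-only:

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents deterministically (illustrative only).
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

# Every participant can rebuild and verify the same ledger:
ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])

# Tampering with an earlier block breaks every later prev_hash link.
for i in range(1, len(ledger)):
    assert ledger[i]["prev_hash"] == block_hash(ledger[i - 1])
```

Real systems such as Hyperledger Fabric add consensus, identity, and smart contract layers on top, but the hash linkage above is the core property that prevents silent edits.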

Open source collaboration

Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. Member organizations from within finance, banking, manufacturing, and technology are helping steer the project, which aims to provide the modular components that will allow enterprises to build the solutions they need.

Headlines frequently herald how blockchain technology will revolutionize financial services markets, but blockchain will also have a transformative impact on everything from the food industry to healthcare. While some companies are looking at blockchain’s future impact, the technology is changing our world right now. According to a Forbes article, blockchain is revolutionizing contracts, payment processing, asset protection, and supply chain management. And a market intelligence report by BIS Research projects that blockchain-driven cost savings of $30 to $40 billion per year will be achieved in trade finance.

“Blockchain has the potential to be highly transformative to any company that processes payments,” Forbes noted. “It can eliminate the need for intermediaries that are common in payment processing today.”

The future is now

Meanwhile, blockchain technology is already impacting various industries. In the area of global food supply chain management, for example, Intel is collaborating with the Hyperledger community to implement a modern approach to seafood traceability. Using the Hyperledger Sawtooth framework, the seafood journey can now be recorded from ocean to table.

Dot Blockchain Media (dotBC) is using Hyperledger Sawtooth to build a music content rights registry that will help musicians express their rights and commercialize their art in an interoperable file format. And, as reported by HealthCareITNews, Change Healthcare just launched an enterprise-scale blockchain network using distributed ledger technology. This Intelligent Healthcare Network, built on Hyperledger Fabric, allows hospitals, physicians, and payers to track the real-time status of healthcare claims, thereby providing greater transparency and efficiency.

Given the potential impact of these and other efforts, Hyperledger is likely to feature prominently in talks at Davos. According to a recent Hyperledger post: “Companies large and small, IT vendors and end-user organizations, consortiums and NGOs — everyone took notice of Hyperledger in 2017 and made moves to get involved. This was evident in the ever increasing Hyperledger membership, which nearly doubled in size.” Hyperledger now has support from 197 organizations, which will allow the project to double the resources it can apply toward building and supporting the community in 2018.

Now is a great time to find out more about blockchain and Hyperledger. Case studies, a webinar, and training resources are available from Hyperledger.org. Additionally, Hyperledger incubates and promotes a range of business blockchain technologies, including distributed ledger frameworks, smart contract engines, client libraries, utility libraries, graphical interfaces, and sample applications. You can find out more about these projects here.

Arpit Joshipura, Networking General Manager at The Linux Foundation, discussed open source networking trends at Open Source Summit Europe.

Ever since the birth of local area networks, open source tools and components have driven faster and more capable network technologies forward. At the recent Open Source Summit event in Europe, Arpit Joshipura, Networking General Manager at The Linux Foundation, discussed his vision of open source networks and how they are being driven by full automation.

“Networking is cool again,” he said, opening his keynote address with observations on software-defined networks, virtualization, and more. Joshipura is no stranger to network trends. He has led major technology deployments across enterprises, carriers, and cloud architectures, and has been a steady proponent of open source.

“This is an extremely important time for our industry,” he said. “There are more than 23 million open source developers, and we are in an environment where everyone is asking for faster and more reliable services.”

Transforming telecom

As an example of transformative change that is now underway, Joshipura pointed to the telecom industry. “For the past 137 years, we saw proprietary solutions,” he said. “But in the past several years, disaggregation has arrived, where hardware is separated from software. If you are a hardware engineer you build things like software developers do, with APIs and reusable modules.  In the telecom industry, all of this is helping to scale networking deployments in brand new, automated ways.”

Joshipura especially emphasized that automating cloud, network and IoT services will be imperative going forward. He noted that enterprise data centers are working with software-defined networking models, but stressed that too much fragmented and disjointed manual tooling is required to optimize modern networks.

Automating services

“In a 5G world, it is mandatory that we automate services,” he said. “You can’t have an IoT device sitting on the phone and waiting for a service.” In order to automate network services, Joshipura foresees data rates increasing by 100x over the next several years, bandwidth increasing by 10x, and latencies decreasing to one-fifth of what we tolerate now.

The Linux Foundation hosts several open source projects that are key to driving networking automation. For example, Joshipura noted EdgeX Foundry and its work on IoT automation, and Cloud Foundry’s work with cloud-native applications and platforms. He also pointed to broad classes of open source networking tools driving automation, including:

  • Application layer/app server technologies
  • Network data analytics
  • Orchestration and management
  • Cloud and virtual management
  • Network control
  • Operating systems
  • IO abstraction & data path tools
  • Disaggregated hardware

Tools and platforms

Joshipura also discussed emerging open network automation tools. In particular, he described ONAP (Open Network Automation Platform), a Linux Foundation project that provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. ONAP enables software, network, IT, and cloud providers and developers to rapidly automate new services and support complete lifecycle management. Joshipura noted that ONAP is ushering in faster services on demand, including 4G, 5G, and business/enterprise solutions.

“ONAP is one of the fastest growing networking projects at The Linux Foundation,” he said, pointing to companies working with ONAP ranging from AT&T to VMware.

Additionally, Joshipura highlighted OPNFV, a project that facilitates the development and evolution of NFV components across open source ecosystems. Through system level integration, deployment and testing, OPNFV creates a reference NFV platform to accelerate the transformation of enterprise and service provider networks. He noted that OPNFV now offers container support and that organizations are leveraging it in conjunction with Kubernetes and OpenStack.

To learn more about the open source tools and trends that are driving network automation, watch Joshipura’s entire keynote address.

Additionally, registration is open for the Open Networking Summit North America. Taking place March 26-29 in Los Angeles, it’s the industry’s premier open networking event, bringing together enterprises, carriers, and cloud service providers across the ecosystem to share learnings, highlight innovation, and discuss the future of open source networking.

Learn more and register now!

From the outside, open source projects may seem somewhat messy and chaotic with no obvious central leadership or reporting structure. But most successful open source projects actually have a great deal of structure and method behind their technical decision-making.

If you explore the Wiki pages of EdgeX Foundry, you will see several references to the project’s architectural “tenets”.  These are the principles that guide how the project’s contributors and technical steering committee decide what changes are accepted into the project, what features will be pursued, and ultimately what technology the group will advance together.

The tenets of EdgeX did not get established overnight. They are not some sort of religious doctrine or commandments (although some of us would like to see them carved in stone someday). And we didn’t blindly establish them because they fit within today’s software mantras.

No, the EdgeX tenets have evolved from industry-wide collaboration that addresses the use cases and challenges of edge computing. More specifically, they evolved through trial and practice in Project Fuse, which Dell started more than two years ago and donated to The Linux Foundation earlier this year to seed EdgeX Foundry. These tenets embody the lessons learned while building EdgeX Foundry, and they are the bedrock that will allow the EdgeX community and the commercial ecosystem around the project to continue to grow and thrive.

If you are considering joining the EdgeX community as a developer, these tenets allow you to continue living those principles in your contributions, or to work with the community to suggest other means of achieving the same results.

And if you aim to see widespread adoption of your own open source platform, you will find many of the same principles of openness, flexibility, and portability will apply.

I. EdgeX Foundry must be platform, protocol, and stack agnostic.

This first principle is key to attracting a diverse user base and developer community.

Dell built Fuse, which became the source code base for EdgeX, to support the software needs on our own gateway.  So why would we want to build a software platform that runs anywhere and can be created with any programming language or tool set? Simple – one size does not fit all edge/Internet of Things (IoT) needs. EdgeX must be agnostic with regard to hardware, operating system (Linux, Windows, etc.), distribution (allowing for the distribution of functionality through microservices at the edge, on a gateway, in the fog, on cloud, etc.), and protocols.

IoT solutions are being implemented with Raspberry Pi and Arduino as well as industrial-grade hardware. These will often be distributed from cloud systems to the sensor edge (often referred to as the fog), where the platform choices expand even more.

An IoT architect should be concerned with “what runs best where?” based on use case and other requirements, instead of “will it even run there?” – that question defeats the interoperability goal EdgeX Foundry is trying to address. The nature of IoT/fog deployments is heterogeneous platforms, not homogeneous ones.

EdgeX also needs to be protocol agnostic. Today, IoT developers face a “protocol soup.” Legacy equipment isn’t going to go away anytime soon, and newer protocols like BLE and Zigbee are being used by more modern sensors/devices.  All of these devices and sensors, regardless of protocol, need to talk to each other.

EdgeX has to serve as the United Nations in the protocol soup – offering a universal translator to all the devices/sensors as well as the north side enterprise and cloud applications.
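
As a toy illustration of that “universal translator” role, here is a Python sketch – with invented names, not actual EdgeX code – in which per-protocol adapters normalize readings into one common event shape for northbound consumers:

```python
import time

def normalize_modbus(register, value):
    # Hypothetical adapter: wrap a raw Modbus register read.
    return {"device": f"modbus-{register}", "reading": value, "ts": time.time()}

def normalize_ble(mac, payload):
    # Hypothetical adapter: unpack a BLE advertisement payload.
    return {"device": f"ble-{mac}", "reading": payload.get("temp"), "ts": time.time()}

# Northbound applications see one event shape, regardless of source protocol.
events = [
    normalize_modbus(register=40001, value=72),
    normalize_ble(mac="aa:bb:cc", payload={"temp": 71}),
]
for event in events:
    print(event["device"], event["reading"])
```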

People often ask, “Why doesn’t EdgeX just use X?” for communicating and representing edge data/commands, where X is their favorite protocol, format, or object model for IoT communications. My answer is always the same: as soon as the entire edge/IoT/fog community has adopted X, EdgeX will adopt X as its only means of handling and providing sensor/device data and commands. Until then, EdgeX will try to adopt X, with community help, as an option at the south edge, the north edge, or both – but it cannot offer only that.

EdgeX is even agnostic with regard to the development environments and tools used to create microservices. Being inclusive and supportive of all development communities accomplishes two things. First, it allows existing solutions, new best-of-breed solutions, and alternative solutions for certain use cases – embodied as microservices – to be incorporated into EdgeX with the lowest barrier to entry and without impacting other parts of the system. Second, it allows the EdgeX community to grow without requiring potential community members to learn and adopt a particular set of technologies that will most certainly change over time.

Our attitude with EdgeX must be BYOT – “bring your own technology” – and abide by the EdgeX API set.  It is the microservice APIs that serve as the thing we must agree on – not the technology stack.
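
To show what “agree on the API, not the stack” can look like, here is a deliberately tiny Python microservice honoring a hypothetical REST contract (the endpoint name is invented for illustration; the real EdgeX APIs live in the project documentation). Any contributor could reimplement the same contract in Go, Java, or anything else:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    # The contract is the URL and response shape, not this implementation.
    def do_GET(self):
        if self.path == "/api/v1/ping":  # hypothetical endpoint
            body = json.dumps({"status": "pong"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Serve the contract on localhost; swap in any stack that honors it.
    HTTPServer(("localhost", 8080), PingHandler).serve_forever()
```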

II. EdgeX Foundry must be extremely flexible.  

This second principle allows rapid iteration and technology development. It also provides the basis for commercial interests to realize ROI (return on investment) which incentivizes them to contribute back to the project.

Any part of the platform may be upgraded, replaced, or augmented by other microservices or software components. The platform must allow services to scale up and down based on device capability and use case. EdgeX Foundry should provide “reference implementation” services but encourage best-of-breed solutions.

EdgeX is essentially a Lego set of microservices.  It became very evident to those on the Dell Fuse Project that IoT solutions are going to be different for everyone.  Your analytics are not my analytics.  The connectors you need to your devices are not the connectors I need for my sensors.  How you log issues may not be how I log issues.  In his book Building the Internet of Things, Maciej Kranz states that no single company will be able to build and provide a complete IoT solution for your organization.  It’s too big.

We needed to build a system that allowed each service to exist, but to be implemented or extended in unique ways for the use case or the circumstances of the environment in which the system is used. As I have said to several people joining EdgeX – don’t look at the existing code as the crown jewel. The crown jewel is the flexible architecture, which welcomes and absorbs rapid change by having well-defined APIs and a strong craving for interoperability. Microservices are at the heart of trying to satisfy that craving.

Each EdgeX microservice must perform certain duties. How it does the work may change based on use cases and other requirements. Let’s face it, you have different resource capabilities when you are running on a cloud server versus running on a Raspberry Pi. Microservices can be written to take advantage of the resources they have. The analytics or rules engine microservice might be a simple if/then event processor on a smaller platform, but could be IBM Watson in much more expansive resource settings. A logging service may persist log entries for long-term search, or an alternate logging implementation may use simple circular files in a file system to keep only the latest entries and reduce resource needs.
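
As a sketch of that interchangeability (invented class names, not EdgeX code), here are two logging implementations with very different resource profiles behind one contract, so a deployment can pick whichever fits its platform:

```python
from collections import deque

class PersistentLogger:
    # Full-featured variant: append every entry to a file for later search.
    def __init__(self, path):
        self.path = path

    def log(self, entry):
        with open(self.path, "a") as f:
            f.write(entry + "\n")

class RingLogger:
    # Constrained-device variant: keep only the newest N entries in memory.
    def __init__(self, capacity=100):
        self.entries = deque(maxlen=capacity)

    def log(self, entry):
        self.entries.append(entry)

def run_service(logger):
    # The rest of the system depends only on the shared log() contract.
    logger.log("service started")

run_service(RingLogger(capacity=10))       # e.g., on a Raspberry Pi
run_service(PersistentLogger("edge.log"))  # e.g., on a cloud server
```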

While building EdgeX, pesky business questions continued to emerge.  How do we pay for all this?  Where is the “ROI”?

We at Dell call the implementation of the EdgeX microservices provided in the community project today the “reference implementation” of the services.  They provide an implementation to be used by the community and users to get an understanding of what the service needs to do and how it fits into the broader system.

However, it is anticipated that, over time, others in the community are going to build better implementations. They may be faster, smaller, or provide additional capability. These other implementations may address the needs of specific use cases or of a specific vertical market. They may be offered back into the community as open source editions, or they may be proprietary products whose creators get a return on their participation investment in EdgeX.

Going forward, I see a place for commercial value add – organizations that develop EdgeX replacement services that do more (or less, with some intelligence to know when to apply each) for an EdgeX deployment based on cost and resource availability. That which is “table stakes” – just providing the base platform – will probably remain part of the open source product. But even table-stakes propositions will change over time as better technologies, approaches, and processes find better ways to tackle common and standard needs. We need to allow for better reference implementations going forward – that is, we need to accept the better mousetraps.

EdgeX is released under an Apache 2.0 license. One way ROI can be achieved through EdgeX is through these replacement services. EdgeX serves as a platform that enables interoperability and establishes a common base, while simultaneously allowing the ecosystem to grow and better solutions to emerge.

The organizations participating in EdgeX are not in the project as an academic exercise.  We want to grow the IoT marketplace and generate more revenue.  EdgeX provides a base to start, but allows for commercial endeavors based on value add.

III. EdgeX Foundry must provide for store and forward capability (to support disconnected/remote edge systems).  

The next three principles are aimed at solving specific technical problems that the EdgeX project aims to address. But they also help to demonstrate the flexibility needed to grow user adoption and developer contributions.

Systems at the edge are going to operate in variable environments. Continual connectivity to the enterprise and cloud cannot be taken for granted – in fact, in some environments, it is almost guaranteed that systems will be disconnected for extended periods of time. In transportation use cases, for example, a shipping container or box car where EdgeX is deployed moves across geographies and will often drop in and out of connectivity.

Therefore, EdgeX must provide the means to be self-sufficient for extended periods – collecting its data and doing what it can to use local analytics/intelligence and actuation as needed until connectivity is re-established.  When reconnected, the data collected can then be pushed northward and any new instructions pushed southward.
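
A minimal Python sketch of that store-and-forward behavior (illustrative only – the names and structure are invented, not the EdgeX implementation):

```python
import collections

class StoreAndForward:
    # Buffer readings locally; flush northbound only when connected.
    def __init__(self, send, capacity=10_000):
        self.send = send  # callable that pushes one reading north
        self.buffer = collections.deque(maxlen=capacity)  # oldest dropped if full

    def record(self, reading, connected):
        self.buffer.append(reading)
        if connected:
            self.flush()

    def flush(self):
        while self.buffer:
            self.send(self.buffer.popleft())

# Simulate a box car dropping in and out of coverage:
uplink = StoreAndForward(send=print)
uplink.record({"temp": 72}, connected=False)  # buffered
uplink.record({"temp": 73}, connected=False)  # buffered
uplink.record({"temp": 74}, connected=True)   # all three pushed northward
```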

Some IoT solutions emphasize “streaming.” But streaming everything all the time is not viable in many deployments.

IV. EdgeX must support and facilitate “intelligence” moving closer to the edge.

Have you priced the cost to ship all the data from your hundreds of sensors to the cloud? Have you priced out the size of the pipe you need to deliver that data to the cloud? Once it gets to the cloud, have you priced out what it is going to cost to store all that data, or to comb through it to extract the gems of intelligence? Early IoT solutions suggested a “sensor-to-cloud” model. It became evident that this model was often not going to scale, given the volumes of data being produced.

At the edge, something has to do a better job of filtering the information – that is, separating real news from run-of-the-mill readings. A sensor that reports “it’s 72 degrees” every second isn’t typically important until it detects a significant change in temperature. The platform at the edge has to have more intelligence in order to lower the cost to ship, store, and crunch through all that data.
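
A simple deadband filter is one concrete form of that edge intelligence. The Python sketch below (illustrative, with an arbitrary threshold) forwards a reading only when it differs meaningfully from the last value reported:

```python
def deadband(readings, threshold=2.0):
    # Yield only readings that differ from the last reported value
    # by at least `threshold`; routine samples never leave the edge.
    last = None
    for value in readings:
        if last is None or abs(value - last) >= threshold:
            last = value
            yield value

# A sensor repeating "it's 72 degrees" produces almost no northbound traffic:
samples = [72, 72, 72.4, 71.8, 72, 75.1, 75, 74.9, 70]
print(list(deadband(samples)))  # -> [72, 75.1, 70]
```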

In a similar fashion, what is the timeframe your systems have to read the data and make conclusions about how to act on that data?  In a car, there is but a millisecond or two to determine there is a crash and to actuate the airbag.  Do you want the system in your car to make a long-distance call to the cloud to ship the crash data, await a cloud process to determine what to do, and then have your car get a response call back to pop the airbag?

There are plenty of these types of edge use cases.  In fact, the leading premise within IoT circles is that intelligence moving closer to the edge is what will reduce costs and improve ROI (be it safety, cost, maintenance, etc.).  EdgeX was built to operate as close to edge as we can get it and to facilitate the southward tendency of intelligence – whatever level and sophistication of intelligence that may be.  And, per tenet III, it was made to operate independently and disconnected when/where it needs to as well.

V. EdgeX Foundry must support brownfield and greenfield device/sensor deployments.

It is said that IoT is the convergence of IT and OT – the convergence of the information technology community and the operational technology community.  IT moves pretty quickly.  If we don’t get a new laptop with new software every few years, we get frustrated at how “slow” things are moving.  Better productivity comes from better, stronger, faster tools.

In OT, the cycle of change works much more slowly.  Factory floors, once running smoothly, don’t like to make a lot of change.  Change is costly and prone to creating new issues which create more costs.  It’s a vicious cycle.  The argument can be made that in OT, better productivity comes by minimizing change once something works.

Some IoT solutions have suggested that we need to upgrade the old OT environments to better facilitate today’s connect-everything world.

In building EdgeX, we did not take that approach.  The life cycle within the IT and OT worlds was taken into account, which is why we created the device service layer and device service SDK.  EdgeX must support the communication with devices (and their associated protocol) across a wide spectrum – from the very old to today’s more modern sensors and protocols.  Modbus was invented in 1979.  It is still used heavily in industrial environments and PLCs.  OT knows it, relies on it in particularly noisy environments, and would be reluctant to replace it.  It’s not going anywhere.  There are many “Modbus” type protocols in the OT world.  Asking OT to move it or lose it is not an effective strategy if IoT is going to scale quickly.

We must be accepting of these old protocols, yet provide a path forward into new protocols when the time comes.  At the same time, we must support newer protocols and even those that are proprietary.  True interoperability is dealing with the full spectrum of devices and sensors not just those found in IT or OT worlds.

VI. EdgeX Foundry must be secure and easily managed.

And finally, anyone in IoT knows that one of the top concerns of organizations implementing IoT solutions (and open source technologies, in general) is security and management of the IoT solution.  We know that in order to succeed in the marketplace, these concerns must be addressed.  It’s a non-starter if EdgeX or any IoT platform cannot be trusted and well managed in today’s IT and OT environments.

Admittedly, this is the tenet on which EdgeX has the farthest to go. We quickly realized that however we end up accomplishing this, the approach needs to be adopted by the community. Community participation and influence are the only way to build this trust.

Conclusion

Hopefully, this post has helped you understand the tenets of EdgeX architecture, how we arrived at them, and how they will help us achieve our goals for the technology as the community and commercial ecosystem grow.

You may have other tenets you think we should adhere to.  You may have an opinion as to how we might want to adjust some that we are using to guide EdgeX design and implementation today. We invite you to join the conversation at https://wiki.edgexfoundry.org/.

We know the implementation has a way to go, but the beauty of the EdgeX architecture is that evolution and improvement can happen (and is happening) microservice by microservice. Join us and bring your opinions into the fold. Help us make EdgeX even better! Help us change lives through a better IoT platform.

[vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][image_with_animation image_url=”20613″ alignment=”center” animation=”Fade In” box_shadow=”none” max_width=”100%”][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]From the outside, open source projects may seem somewhat messy and chaotic with no obvious central leadership or reporting structure. But most successful open source projects actually have a great deal of structure and method behind their technical decision-making.

If you explore the Wiki pages of EdgeX Foundry, you will see several references to the project’s architectural “tenets”.  These are the principles that guide how the project’s contributors and technical steering committee decide what changes are accepted into the project, what features will be pursued, and ultimately what technology the group will advance together.

The tenets of EdgeX did not get established overnight.  They are not some sort of religious doctrine or commandments (although some of us would like to see them carved in stone someday). We didn’t blindly establish them because it fit within today’s software mantra.  

No, the EdgeX tenets have evolved from industry-wide collaboration that addresses the use cases and challenges of edge computing.  More specifically, they evolved through trial and practice in Project Fuse, which Dell started more than two years ago and donated to The Linux Foundation earlier this year to seed EdgeX Foundry.  These tenets represent the imbued lessons learned while building EdgeX Foundry, and they are the bedrock that will allow the EdgeX community and the commercial ecosystem around the project to continue to grow and thrive.  

If you are considering joining the EdgeX community as a developer, these tenets allow you continue to live those principles in your contributions, or work with the community to suggest other means to achieve the same results.

And if you aim to see widespread adoption of your own open source platform, you will find many of the same principles of openness, flexibility, and portability will apply.

I. EdgeX Foundry must be platform, protocol, and stack agnostic.

This first principle is key to attracting a diverse user base and developer community.

Dell built Fuse, which became the source code base for EdgeX, to support the software needs on our own gateway.  So why would we want to build a software platform that runs anywhere and can be created with any programming language or tool set? Simple – one size does not fit all edge/Internet of Things (IoT) needs. EdgeX must be agnostic with regard to hardware, operating system (Linux, Windows, etc.), distribution (allowing for the distribution of functionality through microservices at the edge, on a gateway, in the fog, on cloud, etc.), and protocols.

IoT solutions are being implemented with Raspberry Pi and Arduino as well as industrial grade hardware. These will often be distributed from cloud systems to the sensor edge (often referred to as the fog), where the platform choices expand even more.  [/vc_column_text][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][image_with_animation image_url=”20625″ alignment=”center” animation=”Fade In” box_shadow=”none” max_width=”100%”][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]An IoT architect should be concerned with “what runs best where?” based on use case and other requirements, instead of“will it even run there?” questions. – that defeats the interoperability concern that EdgeX Foundry is trying to address.  The nature of IoT / fog deployments is heterogeneous platforms not homogeneous platforms.

EdgeX also needs to be protocol agnostic. Today, IoT developers face a “protocol soup.” Legacy equipment isn’t going to go away anytime soon, and newer protocols like BLE and Zigbee are being used by more modern sensors/devices.  All of these devices and sensors, regardless of protocol, need to talk to each other.

EdgeX has to serve as the United Nations in the protocol soup – offering a universal translator to all the devices/sensors as well as the north side enterprise and cloud applications.

[/vc_column_text][/vc_column][/vc_row][vc_row type=”in_container” full_screen_row_position=”middle” scene_position=”center” text_color=”dark” text_align=”left” overlay_strength=”0.3″][vc_column column_padding=”no-extra-padding” column_padding_position=”all” background_color_opacity=”1″ background_hover_color_opacity=”1″ column_shadow=”none” width=”1/1″ tablet_text_alignment=”default” phone_text_alignment=”default” column_border_width=”none” column_border_style=”solid”][vc_column_text]When people ask “why doesn’t EdgeX just use X?” in communicating and representing edge data/commands, where X is their favorite protocol, format, or object model for IoT communications.  My answer is always the same…as soon as the entire edge/IoT/fog community has adopted X then EdgeX will adopt X as its only means to deal with and provide the sensor/device data and commands.  Until then, EdgeX will try to adopt X with community help as an option at the south, north or both edges, but it cannot only offer that.

EdgeX is even agnostic with regard to development environment/tools used to create microservices. Two things are accomplished by being inclusive and supportive of all development communities.  First, it allows existing solutions, new best-of-breed solutions and alternative solutions for certain use cases – embodied through microservices – to be incorporated into EdgeX with the lowest barrier of entry and without impacting the other parts of the system.  Second, it allows the EdgeX community to grow without requiring potential community members to learn and adopt a certain set of technologies that will most certainly change over time.

Our attitude with EdgeX must be BYOT – “bring your own technology” – and abide by the EdgeX API set.  It is the microservice APIs that serve as the thing we must agree on – not the technology stack.

II. EdgeX Foundry must be extremely flexible.  

This second principle allows rapid iteration and technology development. It also provides the basis for commercial interests to realize ROI (return on investment) which incentivizes them to contribute back to the project.

Any part of the platform may be upgraded, replaced or augmented by other microservices or software components.  It must allow services to scale up and down based on device capability and use case.  EdgeX Foundry should provide “reference implementation” services but encourages best-of-breed solutions.

EdgeX is essentially a Lego set of microservices.  It became very evident to those on the Dell Fuse Project that IoT solutions are going to be different for everyone.  Your analytics are not my analytics.  The connectors you need to your devices are not the connectors I need for my sensors.  How you log issues may not be how I log issues.  In his book Building the Internet of Things, Maciej Kranz states that no single company will be able to build and provide a complete IoT solution for your organization.  It’s too big.

We needed to build a system that allowed for the service to exist, but to be implemented or extended in unique ways for the use case or circumstances of the environment the system was used in.  As I have said to several people joining EdgeX – don’t look at the existing code as the crown jewel.  It is the flexible architecture that welcomes and adopts rapid change by having well defined APIs and a strong craving for interoperability that is the crown jewel.  Microservices are at the heart of trying to satisfy that craving.

Each EdgeX microservice must perform certain duties.  How it does the work may change based on use cases and other requirements.  Let’s face it, you have different resource capabilities when you are running on a cloud server versus running on a Raspberry Pi.  Microservices can be written to take advantage of the resources it has.  The analytics or rules engine microservice might be a simple if/then event processor on a smaller platform, but could be IBM Watson in much more expansive resource settings.  Logging services may persist the log entries for long term search, or an alternate logging implementation may use simple circular files in a file system to keep only the latest entries and reduce resource needs.

While building EdgeX, pesky business questions continued to emerge.  How do we pay for all this?  Where is the “ROI”?

We at Dell call the implementation of the EdgeX microservices provided in the community project today the “reference implementation” of the services.  They provide an implementation to be used by the community and users to get an understanding of what the service needs to do and how it fits into the broader system.

However, it is anticipated that over time, others in the community are going to build better implementations.  They may be faster, smaller, provide additional capability, etc.  These other implementations may address needs of specific use cases or needs of a specific vertical market.  These other implementations may be offered back into the community as open source editions or may be proprietary products where creators can get a return on its participation investment in EdgeX.

Going forward, I see a place for commercial value-add: organizations that develop EdgeX replacement services that do more (or do less, with the intelligence to know when to apply each) for an EdgeX deployment based on cost and resource availability.  Whatever is “table stakes” – just providing the base platform – will probably remain part of the open source product.  But even table-stakes propositions will change over time as better technologies, approaches, and processes find better ways to tackle common and standard needs.  We need to allow for better reference implementations going forward; that is, we need to accept the better mousetraps.

EdgeX is licensed under Apache 2.0.  One way ROI can be achieved through EdgeX is via these replacement services.  EdgeX establishes an interoperable base platform while simultaneously allowing the ecosystem to grow and better solutions to emerge.

The organizations participating in EdgeX are not in the project as an academic exercise.  We want to grow the IoT marketplace and generate more revenue.  EdgeX provides a base to start, but allows for commercial endeavors based on value add.

III. EdgeX Foundry must provide store-and-forward capability (to support disconnected/remote edge systems).

The next three principles are aimed at solving specific technical problems that the EdgeX project aims to address. But they also help to demonstrate the flexibility needed to grow user adoption and developer contributions.

Systems at the edge are going to operate in variable environments.  Continual connectivity to the enterprise and cloud cannot be taken for granted – in fact, in some environments, it is almost guaranteed that systems will be disconnected for extended periods of time.  In transportation use cases, the shipping container or box car where EdgeX is deployed moves across geographies and will often drop in and out of connectivity.

Therefore, EdgeX must provide the means to be self-sufficient for extended periods – collecting its data and doing what it can to use local analytics/intelligence and actuation as needed until connectivity is re-established.  When reconnected, the data collected can then be pushed northward and any new instructions pushed southward.
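A minimal sketch of that store-and-forward behavior, in Go, assuming a hypothetical connectivity probe and northbound export function (neither is an EdgeX API): readings are always stored locally first, and the buffer is drained northward whenever the link is up.

```go
// Sketch: buffer readings while disconnected, drain when reconnected.
package main

import "fmt"

type Reading struct {
	Device string
	Value  float64
}

// Forwarder buffers readings until the northbound link is available.
type Forwarder struct {
	buffer    []Reading
	connected func() bool         // hypothetical connectivity probe
	send      func(Reading) error // hypothetical northbound export
}

func (f *Forwarder) Collect(r Reading) {
	f.buffer = append(f.buffer, r) // always store locally first
	f.drain()
}

// drain pushes buffered readings northward while connected.
func (f *Forwarder) drain() {
	for len(f.buffer) > 0 && f.connected() {
		if err := f.send(f.buffer[0]); err != nil {
			return // keep the reading; retry on the next drain
		}
		f.buffer = f.buffer[1:]
	}
}

func main() {
	up := false
	f := &Forwarder{
		connected: func() bool { return up },
		send:      func(r Reading) error { fmt.Println("exported", r); return nil },
	}
	f.Collect(Reading{"boxcar-sensor", 72}) // disconnected: buffered locally
	up = true
	f.Collect(Reading{"boxcar-sensor", 73}) // reconnected: both readings drain north
}
```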

Some IoT solutions emphasize “streaming.”  But streaming everything all the time is simply not sustainable in many deployments.

IV. EdgeX must support and facilitate “intelligence” moving closer to the edge.

Have you priced the cost to ship all the data from the hundreds of sensors you are going to have to the cloud?  Have you priced out the size of the pipe you need to deliver that data to the cloud?  Once it gets there, have you priced out what it will cost to store all that data, or to comb through it for the gems of intelligence?  Early IoT solutions suggested a “sensor-to-cloud” model.  It became evident that this model was often not going to scale, given the volumes of data being produced.

At the edge, something has to do a better job of filtering the information – that is, separating real news from run-of-the-mill readings.  A sensor that reports “it’s 72 degrees” every second typically isn’t important until it detects a significant change in temperature.  The platform at the edge has to have more intelligence in order to lower the cost to ship, store, and munch through all that data.
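One common way to separate real news from routine readings is a deadband filter: only report when a value moves more than some threshold from the last reported value.  A sketch in Go, with an illustrative 0.5-degree band (not an EdgeX default):

```go
// Sketch of edge filtering: suppress readings that haven't moved enough.
package main

import (
	"fmt"
	"math"
)

// DeadbandFilter suppresses readings within `band` of the last report.
type DeadbandFilter struct {
	band     float64
	last     float64
	reported bool
}

// Pass returns true when the reading is "real news" worth sending north.
func (f *DeadbandFilter) Pass(v float64) bool {
	if !f.reported || math.Abs(v-f.last) > f.band {
		f.last = v
		f.reported = true
		return true
	}
	return false
}

func main() {
	f := &DeadbandFilter{band: 0.5}
	for _, v := range []float64{72.0, 72.1, 72.0, 75.3, 75.2} {
		if f.Pass(v) {
			fmt.Printf("report %.1f\n", v) // only 72.0 and 75.3 get through
		}
	}
}
```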

In a similar fashion, what is the timeframe your systems have to read the data and make conclusions about how to act on that data?  In a car, there is but a millisecond or two to determine there is a crash and to actuate the airbag.  Do you want the system in your car to make a long-distance call to the cloud to ship the crash data, await a cloud process to determine what to do, and then have your car get a response call back to pop the airbag?

There are plenty of these types of edge use cases.  In fact, the leading premise within IoT circles is that moving intelligence closer to the edge is what will reduce costs and improve ROI (be it in safety, cost, maintenance, etc.).  EdgeX was built to operate as close to the edge as we can get it and to facilitate the southward tendency of intelligence – whatever level and sophistication of intelligence that may be.  And, per tenet III, it was made to operate independently and disconnected when and where it needs to as well.

V. EdgeX Foundry must support brownfield and greenfield device/sensor deployments.

It is said that IoT is the convergence of IT and OT – the convergence of the information technology community and the operational technology community.  IT moves pretty quickly.  If we don’t get a new laptop with new software every few years, we get frustrated at how “slow” things are moving.  Better productivity comes from better, stronger, faster tools.

In OT, the cycle of change works much more slowly.  Factory floors, once running smoothly, don’t like to make a lot of change.  Change is costly and prone to creating new issues which create more costs.  It’s a vicious cycle.  The argument can be made that in OT, better productivity comes by minimizing change once something works.

Some IoT solutions have suggested that we need to upgrade the old OT environments to better facilitate today’s connect-everything world.

In building EdgeX, we did not take that approach.  We took the differing life cycles of the IT and OT worlds into account, which is why we created the device service layer and device service SDK.  EdgeX must support communication with devices (and their associated protocols) across a wide spectrum – from the very old to today’s more modern sensors and protocols.  Modbus was invented in 1979.  It is still used heavily in industrial environments and PLCs.  OT knows it, relies on it in particularly noisy environments, and would be reluctant to replace it.  It’s not going anywhere.  There are many Modbus-type protocols in the OT world.  Asking OT to move it or lose it is not an effective strategy if IoT is going to scale quickly.

We must accept these old protocols, yet provide a path forward when the time comes.  We must equally support newer protocols, even proprietary ones.  True interoperability means dealing with the full spectrum of devices and sensors, not just those found in the IT or OT worlds.
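This is the idea behind the device service layer: each protocol, whether 1979-era Modbus or modern MQTT, hides behind one small, common contract so the rest of the platform sees devices uniformly.  The sketch below uses an assumed interface and stubbed drivers; it mirrors the intent of the device service SDK but is not its actual contract.

```go
// Sketch: old and new protocols behind one assumed driver interface.
package main

import "fmt"

// Device names a sensor/device and how to reach it.
type Device struct {
	Name    string
	Address string
}

// ProtocolDriver is the assumed common contract a device service implements.
type ProtocolDriver interface {
	Read(d Device, resource string) (float64, error)
}

// modbusDriver would wrap register reads over serial/TCP (stubbed here).
type modbusDriver struct{}

func (modbusDriver) Read(d Device, resource string) (float64, error) {
	return 72.0, nil // stub: a real driver would read a holding register
}

// mqttDriver would subscribe to a topic and return the latest value (stubbed).
type mqttDriver struct{}

func (mqttDriver) Read(d Device, resource string) (float64, error) {
	return 71.5, nil // stub: a real driver would consume a retained message
}

func main() {
	drivers := map[string]ProtocolDriver{
		"modbus": modbusDriver{},
		"mqtt":   mqttDriver{},
	}
	plc := Device{Name: "plc-1", Address: "10.0.0.5:502"}
	v, _ := drivers["modbus"].Read(plc, "temperature")
	fmt.Println("plc-1 temperature:", v) // same call shape for any protocol
}
```

The same Read call works regardless of whether the driver speaks Modbus registers or MQTT topics; supporting a proprietary protocol means writing one more driver, not changing the platform.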

VI. EdgeX Foundry must be secure and easily managed.

And finally, anyone in IoT knows that one of the top concerns of organizations implementing IoT solutions (and open source technologies, in general) is security and management of the IoT solution.  We know that in order to succeed in the marketplace, these concerns must be addressed.  It’s a non-starter if EdgeX or any IoT platform cannot be trusted and well managed in today’s IT and OT environments.

Admittedly, this is the tenet on which EdgeX has the farthest to go to fulfill its mission.  We quickly realized that, however we end up accomplishing this, it needs to be adopted by the community.  Community participation and influence are the only way to build that trust.

Conclusion

Hopefully, this post has helped you understand the tenets of EdgeX architecture, how we arrived at them, and how they will help us achieve our goals for the technology as the community and commercial ecosystem grow.

You may have other tenets you think we should adhere to.  You may have an opinion as to how we might want to adjust some that we are using to guide EdgeX design and implementation today. We invite you to join the conversation at https://wiki.edgexfoundry.org/.

We know the implementation has a way to go, but the beauty of the EdgeX architecture is that evolution and improvement can happen (is happening) microservice by microservice.  Join us and bring that opinion into the fold.  Help us make EdgeX even better!  Help us change lives through a better IoT platform.