Sign up for this interactive workshop that examines networking and cloud-native technologies side by side.
ONAP and Kubernetes – two of the fastest-growing Linux Foundation projects – are coming together in the next generation of telecom architecture.
ONAP provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions, and Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. Telcos are now examining how these virtual network functions (VNFs) could evolve into cloud-native network functions (CNFs) running on Kubernetes.
“As the next generation of telco architecture evolves, CSPs are exploring how their Virtual Network Functions (VNFs) can evolve into Cloud-native Network Functions (CNFs),” said Arpit Joshipura, General Manager of Networking at The Linux Foundation. “This seminar will explore what’s involved in migrating from VNFs to CNFs, with a specific focus on the roles played by ONAP and Kubernetes. We hope to see a broad swath of community members from both the container and networking spaces join us for an engaging and informative discussion in Vancouver.”
Session highlights will include:
Migrating and automating network functions, from physical network functions to VNFs to CNFs
Overview of sub-projects focusing on this migration, including cross-cloud CI, ONAP/OVP, FD.io/VPP, etc.
The role for a service mesh, such as Envoy, Istio, or Linkerd, in connecting CNFs with load balancing, canary deployments, policy enforcement, and more.
What’s involved in telcos adopting modern continuous integration/continuous deployment (CI/CD) tools so they can rapidly innovate and improve their CNFs while retaining confidence in their reliability.
Differing security needs of trusted (open source and vendor-provided) code vs. running untrusted code
The role for security isolation technologies like gVisor or Kata
Requirements of the underlying operating system
Strengths and weaknesses of different network architectures such as multi-interface pods and Network Service Mesh
Status of IPv6 and dual-stack support in Kubernetes
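Several of these topics lend themselves to small illustrations. The canary deployments mentioned above, for example, boil down to weighted routing between two versions of a backend. The sketch below is plain Python rather than a real mesh configuration, and all names (`pick_backend`, `cnf-v1`, `cnf-v2`) are hypothetical; a mesh such as Istio or Linkerd applies the same weighted split per request in the data plane, driven by declarative routing rules.

```python
import random

def pick_backend(stable, canary, canary_weight=0.1, rng=random.random):
    """Weighted routing: send roughly canary_weight of requests to the canary.

    This only illustrates the split behind a canary rollout; a service mesh
    also handles retries, policy enforcement, and telemetry transparently.
    """
    return canary if rng() < canary_weight else stable

# Shift 10% of traffic to a new CNF version and count where requests land.
rng = random.Random(42)  # fixed seed so the split is reproducible
counts = {"cnf-v1": 0, "cnf-v2": 0}
for _ in range(10_000):
    counts[pick_backend("cnf-v1", "cnf-v2", 0.1, rng.random)] += 1
```

If the canary’s error rate stays flat, the weight is raised stepwise toward 1.0; if not, traffic is shifted back to the stable version without redeploying anything.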
Additional registration is required for this session, but there is no extra fee. Space is limited in the workshop, so reserve your spot soon. And, if you plan to attend, please be willing to participate. Learn more and sign up now!
Join Interactive Workshop on Cloud-Native Network Functions at Open Source Summit (The Linux Foundation, 2018-07-31)
Submit your proposal to speak by June 24 for Open Networking Summit Europe, coming up in Amsterdam.
Join us in Amsterdam September 25 – 27 at Open Networking Summit Europe for over 75 sessions on the latest technologies and topics in open networking, and hear from industry experts, including:
Dr. Paul Doany, CEO, Türk Telekom, talking about redefining the competitive landscape
Dr. Catherine Mulligan, Co-Director, Centre for Cryptocurrency Research and Engineering, Imperial College London, discussing the intersection of networking and blockchain
Demo: Virtualizing the Central Office for Mobile Services: A collaboration across companies (China Mobile, Red Hat, Cumulus, Quortus, Ettus Research, NetScout, F5, EXFO, and Nokia) and multiple open source projects including OPNFV, OpenDaylight, OpenAirInterface, and Open Compute Project.
The full schedule of sessions will be announced in late July along with additional keynote speakers.
There’s still time to submit a speaking proposal! Learn more about the CFP process and submit by Sunday, June 24.
First Keynotes Announced for Open Networking Summit Europe! Submit Your Proposal to Speak by June 24 (The Linux Foundation, 2018-06-22)
A lot of the interactions between the LF Networking and cloud native communities focus on how these technologies work together and on connecting people from different projects.
As highlighted in the recent Open Source Jobs Report, cloud and networking skills are in high demand. And, if you want to hear about the latest networking developments, there is no one better to talk with than Heather Kirksey, VP, Community and Ecosystem Development, Networking at The Linux Foundation. Kirksey was the Director of OPNFV before the recent consolidation of several networking-related projects under the new LF Networking umbrella, and I spoke with her to learn more about LF Networking (LFN) and how the initiative is working closely with cloud native technologies.
Kirksey explained the reasoning behind the move and expansion of her role. “At OPNFV, we were focused on integration and end-to-end testing across the LFN projects. We had interaction with all of those communities. At the same time, we were separate legal entities, and things like that created more barriers to collaboration. Now, it’s easy to look at them more strategically as a portfolio to facilitate member engagement and deliver solutions to service providers.”
Bringing these six networking projects together lowers barriers, reduces friction, and enables the communities to interact with each other.
Networking Meets Cloud Native
Kirksey said that at the recent KubeCon + CloudNativeCon Europe 2018, there was a lot of discussion around what cloud native network function virtualization (NFV) looks like with Kubernetes and other technologies. She said that the NFV community has already begun integration around cloud native technologies including Kubernetes, Prometheus, Fluentd, and FD.io. And, LF Networking has been working on Container Network Interface (CNI) plugins.
A lot of these interactions between the LF Networking and Kubernetes communities focus on education — how these technologies work together — and connecting with people from different projects including Istio, CNI networking SIG, and others.
“We are just trying to figure out the answers that arise as these projects work together,” she said. In her new role, Kirksey looks at things from an outwardly facing perspective. “We are looking at communities that are outside LF Networking — communities like CNCF — and figuring out what our engagement model should be. We are trying to identify projects that are of interest to us. We are trying to set up some programs that bring value to the ecosystem; a good example would be a compliance program.”
Community is also part of Kirksey’s new role, and she is working to find out what’s needed to help the community create opportunities for interaction and involvement. “We have set up end-user advisory groups, member engagement programs, compliance and certification programs,” she said. The goal is to serve the entire ecosystem around these projects.
Looking at some of the cloud native paradigms of how networking works, it’s simpler for an application developer than it used to be. Initially, these developers took things like interfaces, ports, and subnets and put ‘v’ in front of them and created virtual interfaces, virtual ports, and virtual subnets. But these constructs are not tied to physical ideas anymore, so the approach is different.
“There is a lot of stuff at layer two and layer three that is still complicated, but you don’t want Kubernetes to have to worry about that; you certainly don’t want a Kubernetes-based application to have to worry about that,” Kirksey said. “We are trying to figure out how we deal with some of the complexities of networking without bringing the physical baggage with it.”
It’s not just technical challenges that these communities need to solve; there are also people challenges. So many new technologies are emerging that it’s becoming increasingly difficult to find experienced developers, and networking is no exception. According to Kirksey, “People who understand and can do deep network level programming are fairly rare.” And, she said, “The number of people who can program for or contribute to VPP or DPDK is relatively small. They now need to also extend their knowledge to these new technologies.”
Additionally, you can’t just create training programs and train people. “The number of people contributing to these projects is relatively small as it’s new and is still being defined,” she said. “That’s one reality of living at the bleeding edge.”
Understanding what’s going on is the first step in solving a problem. That’s where events like KubeCon + CloudNativeCon become critical as they bring together people from different communities to learn and solve problems. “I learned a lot and started to wrap my head around some of these concepts a little bit more,” Kirksey said.
A lot of cross-pollination happens at events, too. When you meet people with bright ideas, you can adopt those good ideas and good marketing practices and apply them to your own work.
“To be quite blunt, when you see good ideas, you try to harvest them for yourself because, you know, that’s the point of open source,” Kirksey said.
Share your expertise! Submit your proposal to speak at ELC + OpenIoT Summit Europe by July 1.
For the past 13 years, Embedded Linux Conference (ELC) has been the premier vendor-neutral technical conference for companies and developers using Linux in embedded products. ELC has become the preeminent space for product vendors as well as kernel and systems developers to collaborate with user-space developers – the people building applications on embedded Linux.
OpenIoT Summit brings together the technical experts paving the way for the new industrial transformation, Industry 4.0, with those looking to develop the skills needed to succeed, offering education, collaboration, and deep-dive learning opportunities. Share your expertise and present the information needed to lead successful IoT deployments, advance the development of IoT solutions, use Linux in IoT devices and automotive, and more.
Speak at ELC + OpenIoT Summit EU – Proposals due by Sunday, July 1 (The Linux Foundation, 2018-06-12)
Networking Futures: Innovative ideas on the disruption and change of the landscape of networking and networking-enabled markets in the next 3-5 years across: AI, ML, and deep learning impact on networking, SD-WAN, IIOT, Data Insights, Business Intelligence, Blockchain & Telecom, and more.
General Network: Common business, architecture, process or people issues that are important to move the Networking agenda forward in the next 1-2 years.
Service Provider & Cloud Networking (Technical): The containerization of service provider workloads, multi-cloud, 5G, fog, and edge access cloud networking.
Service Provider & Cloud Networking (Business & Architecture): Software-defined packet-optical, mobile edge computing, 4G video/CDN, 5G networking, and incorporating legacy systems (legacy enterprise workload migration, role of networking in cloud migration, and interworking of carrier OSS/BSS/FCAPS systems).
Enterprise IT DevOps (Technical): Scale and performance in SDN deployments, expanding container networking, maintaining stability in migration, networking needs of a hybrid cloud/virtualized environment, and figuring out the roadmap from a cost perspective.
Enterprise IT (Business & Architecture): Use cases on IoT and networking from the retail, transportation, utility, healthcare, or government sectors, specifically on cost modeling for hybrid environments, automation (network and beyond), analytics, security and risk management/modeling with ML, and NFV for the enterprise.
Watch presentations from Open Networking Summit North America 2018
Last Chance! Speak at Open Networking Summit Europe – Submit by June 24 (The Linux Foundation, 2018-06-07)
Wendy Cartee, Nick McKeown, Guru Parulkar, and Chris Wright discuss the first 10 years of software defined networking at Open Networking Summit North America.
In 2008, if you wanted to build a network, you had to build it from the same switch and router equipment that everyone else had, according to Nick McKeown, co-founder of Barefoot Networks, speaking as part of a panel of networking experts at Open Networking Summit North America.
Equipment was closed, proprietary, and vertically integrated with features already baked in, McKeown noted. And, “network management was a dirty word. If you wanted to manage a network of switches, you had to write your own scripts over a lousy, cruddy CLI, and everybody had their own way of doing it in order to try to make their network different from everybody else’s.”
All this changed when Stanford University Ph.D. student Martin Casado had the bold idea to rebuild the Stanford network out of custom-built switches and access points, he said.
“Martin just simply showed that if you lift the control up and out of the switches, up into servers, you could replace the 2,000 CPUs with one CPU centrally managed and it would perform exactly how you wanted, could be administered by about 10 people instead of 200. And you could implement the policies of a large institution directly in one place, centrally administered,” said McKeown.
That led to the birth of The Clean Slate program and, shortly afterward, Kate Green from MIT Technology Review coined the term Software Defined Networking (SDN), he said.
“What seemed like a very simple idea, to just separate the control plane from the forwarding plane, define a protocol that is OpenFlow, and enable the research community to build new capabilities and functionality on top of that control plane … caught the attention of the research community and made it very, very easy for them to innovate,’’ said Guru Parulkar, executive director of the Open Networking Foundation.
On the heels of that came the idea of slicing a production network using OpenFlow and a simple piece of software, he said. In one slice you could run a production network, and in another slice you could run an experimental network and show the new capabilities.
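The separation Parulkar describes can be reduced to a toy model: a central controller computes forwarding rules and pushes them down into switches, which do nothing but table lookups. The sketch below is illustrative Python, not OpenFlow; the class names and the `punt-to-controller` marker are invented for the example, though the comments note the OpenFlow messages they loosely stand in for.

```python
class Switch:
    """Forwarding plane: no local intelligence, just a flow-table lookup."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination prefix -> output port

    def forward(self, dst):
        # A real switch would send unmatched packets up to the controller
        # (OpenFlow's PACKET_IN); here we just return a marker.
        return self.flow_table.get(dst, "punt-to-controller")

class Controller:
    """Control plane: one central brain that programs every switch."""
    def install_flow(self, switch, dst, out_port):
        # Stands in for an OpenFlow FLOW_MOD message to the switch.
        switch.flow_table[dst] = out_port

ctrl = Controller()
sw = Switch("edge-1")
ctrl.install_flow(sw, "10.0.1.0/24", 3)

print(sw.forward("10.0.1.0/24"))  # matched: forwarded out port 3
print(sw.forward("10.9.9.0/24"))  # unmatched: punted to the controller
```

Slicing, in this model, is just running two controllers that are each allowed to install rules for a disjoint share of the traffic: one for the production slice, one for the experimental slice.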
Separating the control plane from the data plane, along with the intersection of open source and SDN, brought about a whole new, open way of doing networking, noted moderator Wendy Cartee, senior director of marketing, Cloud Native Applications, at VMware.
“Building all of this new virtualization technology and bringing it into enterprises and to the world at large, created a need for a type of network programmability” that was happening at the same time as the research, noted Chris Wright, vice president and CTO at Red Hat. That brought about open source tools like Open vSwitch, “so we could build a type of network topology that we needed in virtualization.”
Confluence of Events
In the beginning, there was much hype about SDN and disaggregation and OpenFlow, Wright said. But, he continued, it’s not about a particular tool or a protocol, “it’s about a concept, and the concept is about programmability of the network, and open source is a great way to help develop skills and advance the industry with a lot of collaborative effort.”
There was a confluence of events: taking some core tenets from research, creating open source projects for people to collaborate around and solve real engineering problems for themselves, Wright said. “To me it’s a little bit of the virtualization, a little bit of academic research coming together at just the right time and then accelerated with open source code that we can collaborate on.”
Today, many service providers are deploying CORD (Central Office Re-architected as a Datacenter) because they want to rebuild the network edge ahead of 5G, Parulkar observed.
“Many operators want to [offer] gigabit-plus broadband access to their residential customers,” he said. “The central offices are very old and so building the new network edge is almost mandatory.” Ideally, they want to do it with new software defined networking, open source, disaggregation, and white boxes, he added.
The Next 10 Years
Looking ahead, the networking community “risks a bit of fragmentation as we will go off in different directions,’’ said McKeown. So, he said, it’s important to find a balance; the common interest is in creating production-quality software from ODL, ONOS, CORD, and P4.
The overall picture is that “we’re trying to build next-generation networks,’’ said Wright. “What’s challenging for us as a broad industry is finding the best-of-breed ways to do that … so that we don’t create fragmentation. Part of that fragmentation is a lack of interoperability, but part of that fragmentation is just focus.”
There is still a way to go to realize the full potential of SDN, said Parulkar. But in 10 years’ time, predicted Wright, “SDN20 will be really an open source movement. I think SDN is about unlocking the potential of the network in the context of applications and users, not just the operators trying to connect … two different, separate end points.”
Wright suggested that audience members change their mindset and grow their skills, “because many of the operational practices that we see today in networks don’t translate into a software world where things move rapidly. We [need to] look at being able to make small, consistent, incremental changes rather than big-bang rollout changes. Getting involved and really being open to new techniques, new tools and new technologies … is how, together, we can create the next generation. The new Internet.”
The First 10 Years of Software Defined Networking (Esther Shein, 2018-05-14)
Vint Cerf, a “Father of the Internet,” spoke at the recent Open Networking Summit. Watch the complete presentation below.
The secret behind Internet protocol is that it has no idea what it’s carrying – it’s just a bag of bits going from point A to point B. So said Vint Cerf, vice president and chief internet evangelist at Google, speaking at the recent Open Networking Summit.
Cerf, who is generally acknowledged as a “Father of the Internet” said that one of the objectives of this project, which was turned on in 1983, was to explore the implications of open networking, including “open source, open standards and the process for which the standards were developed, open protocol architectures, which allowed for new protocols to be invented and inserted into this layered architecture.” This was important, he said, because people who wanted to do new things with the network were not constrained to its original design but could add functionality.
When he and Bob Kahn (co-creator of the TCP/IP protocol) were doing the original design, Cerf said, they hoped that this approach would lead to a kind of organic growth of the Internet, which is exactly what has been seen.
They also envisioned another kind of openness, that of open access to the resources of the network, where people were free both to access information or services and to inject their own information into the system. Cerf said they hoped that, by lowering the barriers to access this technology, they would open the floodgates for the sharing of content, and, again, that is exactly what happened.
There is, however, a side effect of reducing these barriers, which Cerf said we are living through today: the proliferation of fake news, malware, and other malicious content. It has also created a set of interesting socioeconomic problems, one of which is dealing with content in a way that allows you to decide which content to accept and which to reject, Cerf said. “This practice is called critical thinking, and we don’t do enough of it. It’s hard work, and it’s the price we pay for the open environment that we have collectively created.”
Cerf then shifted gears to talk about the properties of Internet design. “One of the most interesting things about the Internet architecture is the layering structure and the tremendous amount of attention being paid to interfaces between the layers,’’ he noted. There are two kinds: vertical interfaces and the end-to-end interactions that take place. Adoption of standardized protocols essentially creates a kind of interoperability among various components in the system, he said. “One interesting factor in the early Internet design is that each of the networks that made up the Internet, the mobile packet radio net, the packet satellite net, and the ARPANET, were very different inside,” with different addressing structures, data rates and latencies. Cerf said when he and Bob Kahn were trying to figure out how to make this look uniform, they concluded that “we should not try to change the networks themselves to know anything about the Internet.”
Instead, Cerf said, they decided the hosts would create Internet packets to say where things were supposed to go. They had the hosts take the Internet packets (which Cerf likened to postcards) and put them inside an envelope, which the network would understand how to route. The postcard inside the envelope would be routed through the networks and would eventually reach a gateway or destination host; there, the envelope would be opened and the postcard would be sent up a layer of protocol to the recipient or put into a new envelope and sent on.
“This encapsulation and decapsulation isolated the networks from each other, but the standard, the IP layer in particular, created compatibility, and it made these networks effectively interoperable, even though you couldn’t directly connect them together,’’ Cerf explained. Every time an interface or a boundary was created, the byproduct was “an opportunity for standardization, for the possibility of creating compatibility and interoperability among the components.” Now, routers can be disaggregated, such as in the example of creating a data plane and a control plane that are distinct and separate and then creating interfaces to those functions. Once we standardize those things, Cerf said, devices that exhibit the same interfaces can be used in a mix. He said we should “be looking now to other ways in which disaggregation and interface creation creates opportunities for us to build equipment” that can be deployed in a variety of ways.
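Cerf’s postcard-in-envelope description maps directly onto code. The sketch below uses plain Python dictionaries as stand-ins for real packet formats, and every name in it is hypothetical; it only shows the shape of the idea: a packet crossing two very different networks, re-enveloped at a gateway, while the inner IP “postcard” is never modified.

```python
def encapsulate(ip_packet, network_header):
    """Put the IP 'postcard' inside a network-specific 'envelope'."""
    return {"envelope": network_header, "postcard": ip_packet}

def decapsulate(frame):
    """At a gateway or host, open the envelope and recover the postcard."""
    return frame["postcard"]

packet = {"src": "host-A", "dst": "host-B", "payload": "hello"}

# Hop 1: cross the packet radio net inside a radio-net envelope.
frame = encapsulate(packet, {"net": "packet-radio", "addr": "pr-17"})

# At the gateway: open the envelope and re-wrap for the ARPANET.
frame = encapsulate(decapsulate(frame), {"net": "ARPANET", "addr": "imp-4"})

# The inner packet arrives unchanged, whatever networks it crossed.
assert decapsulate(frame) == packet
```

Each network only ever reads its own envelope, which is why the networks never needed to know anything about the Internet layered above them.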
Cerf said he likes the types of switches being built today – bare hardware with switching capabilities inside – that don’t do anything until they are told what to do. “I have to admit to you that when I heard the term ‘software-defined network,’ my first reaction was ‘It’s a buzzword, it’s marketing; it’s always been about software.’”
But, he continued, “I think that was an unfair and too shallow assessment.” His main interest in basic switching engines is that “they don’t do anything until we tell them what to do with the packets.”
Being able to describe the functionality of the switching system and how it should treat packets, if standardized, creates an opportunity to mix different switching systems in a common network, he said. As a result, “I think as you explore the possibilities of open networking and switching platforms, basic hardware switching platforms, you are creating some new opportunities for standardization.”
Some people feel that standards are stifling and rigid, Cerf noted. He said he could imagine situations where an over-dependence on standards creates an inability to move on, but standards also create commonality. “In some sense, by adopting standards, you avoid the need for hundreds, if not thousands of bilateral agreements of how you will make things work.” In the early days, as the Internet Engineering Task Force (IETF) was formed, Cerf said one of the philosophies they tried to adopt “was not to do the same thing” two or three different ways.
Openness of design allows for deep knowledge of how things work, Cerf said, which creates a lot of educated engineers and will be very helpful going forward. The ability to describe the functionality of a switching device, for example, “removes ambiguity from the functionality of the system. If you can literally compile the same program to run on multiple platforms, then you will have unambiguously described the functionality of each of those devices.”
This creates a uniformity that is very helpful when you’re trying to build a large and growing and complex system, Cerf said.
“There’s lots of competition in this field right now, and I think that’s healthy, but I hope that those of you who are feeling these competitive juices also keep in mind that by finding standards that create this commonality, that you will actually enrich the environment in which you’re selling into. You’ll be able to make products and services that will scale better than they might otherwise.”
Hear more insights from Vint Cerf in the complete presentation below:
Vint Cerf on Open Networking and Design of the Internet (Esther Shein, 2018-05-07)
Based on feedback we received at Open Networking Summit North America 2018, our restructured agenda will include project-based technical sessions as well.
Share your knowledge with over 700 architects, developers, and thought leaders paving the future of network integration, acceleration and deployment. Proposals are due Sunday, June 24, 2018.
Networking Futures: Share innovative ideas and submissions that will disrupt and change the landscape of networking, as well as networking enabled markets, in the next 3-5 years. Submissions can be for Enterprise IT, Service Providers or Cloud Markets.
Network General Sessions: Common business, architecture, process or people issues that are important to move the Networking agenda forward in the next 1-2 years.
(Technical) Service Provider & Cloud Networking: We want to hear what you have to say about the containerization of service provider workloads, multi-cloud, 5G, fog, and edge access cloud networking.
(Business & Architecture) Service Provider & Cloud Networking: We’re seeking proposals on software-defined packet-optical, mobile edge computing, 4G video/CDN, 5G networking, and incorporating legacy systems (legacy enterprise workload migration, role of networking in cloud migration, and interworking of carrier OSS/BSS/FCAPS systems).
Speak at Open Networking Summit Europe – Submit by June 24 (The Linux Foundation, 2018-04-30)
By 2020, 50 billion devices will be online. That projection was made by researchers at Cisco, and it was a key point in Amber Case’s Embedded Linux Conference keynote address, titled “Calm Technology: Design for the Next 50 Years,” which is now available for replay.
Case, an author and fellow at Harvard University’s Berkman Klein Center, referred to the “Dystopian Kitchen of the Future” as she discussed so-called smart devices that are invading our homes and lives, when the way they are implemented is not always so smart. “Half of it is hackable,” she said. “I can imagine your teapot getting hacked and someone gets away with your password. All of this just increases the surface area for attack. I don’t know about you, but I don’t want to have to be a system administrator just to live in my own home.”
Support and Recede
Case also discussed the era of “interruptive technology.” “It’s not just that we are getting text messages and robotic notifications all the time, but we are dealing with bad battery life, disconnected networks and servers that go down,” she said. “How do we design technology for sub-optimal situations instead of the perfect situations that we design for in the lab?”
“What we need is calm technology,” she noted, “where the tech recedes into the background and supports us, amplifying our humanness. The only time a technology understands you the first time is in Star Trek or in films, where they can do 40 takes. Films have helped give us unrealistic expectations about how our technology understands us. We don’t even understand ourselves, not to mention the person standing next to us. How can technology understand us better than that?”
Case noted that the age of calm technology was referenced long ago at Xerox PARC, by early ubiquitous computing researchers, who paved the way for the Internet of Things (IoT). “What matters is not technology itself, but its relationship to us,” they wrote.
She cited this quote from Xerox researcher Mark Weiser: “A good tool is an invisible tool. By invisible, we mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.”
Case supplied some ordered axioms for developing calm technology:
Technology shouldn’t require all of our attention, just some of it, and only when necessary.
Technology should empower the periphery.
Technology should inform and calm.
Technology should amplify the best of technology and the best of humanity.
Technology can communicate, but it doesn’t need to speak.
Technology should consider social norms.
The right amount of technology is the minimum amount to solve the problem.
In summing up, Case said that calm technology allows people to “accomplish the same goal with the least amount of mental cost.” In addition to her presentation at the Embedded Linux Conference, Case also maintains a website on calm technology, which offers related papers, exercises and more.
Building an open ecosystem and accelerating operational transformation is key to the open networking industry, says Huawei’s Bill Ren.
The 2018 Open Networking Summit (ONS) is almost here. We recently spoke with Bill Ren, Vice President of Network Industry & Ecosystem Development at Huawei, to glean some insights on ONAP, since Huawei is a founding member of and top contributor to the project.
“SDN/NFV solutions have been in the market for many years but we did not see massive deployment due to lack of working standards and automation,” Bill said.
Bill Ren, VP, Network Industry & Ecosystem Development, Huawei Technologies Co., Ltd.
“We believe open source will help produce de facto standards faster. We need to bring automation and intelligence into networking; we need a full end-to-end automation platform, and that is why ONAP is particularly important for networking.”
Here is what Bill had to say about ONAP’s growing role in open networking.
Linux.com: How does adopting ONAP as a standard help all operators and vendors to innovate?
Bill: ONAP can help set up a common framework that all operators can use to onboard resources, design and deploy services, manage and control the network, collect data from networks, and manage policy. Adopting ONAP as a standard means that operators can focus on service innovation rather than on the software platform itself. And vendors can focus on innovation, as ONAP removes the difficulty of OSS integration and brings an open, unified marketplace for all vendors.
Linux.com: Huawei leads five of 28 ONAP projects, including SO, VNF SDK, Modeling, Integration and ONAP CLI. Why did Huawei choose those projects? What benefits do you see in those projects?
Bill: Huawei treats open source as a strategic tool to build a healthy telecom industry, and we have set up a dedicated management team for networking open source projects like ONAP. We chose to lead some of these projects because they are key elements in building a healthy ecosystem. Take Modeling, for example. Modeling aims to build a common information model for network resources and services across the whole industry, which will make resource onboarding and OSS/BSS integration simple and quick. VNF SDK aims to build a common VNF package format and marketplace. Integration aims to support multi-cloud and multi-vendor environments. SO is the core component in ONAP that links the other components so that they work together.
Huawei also chose to lead these key projects because we, as an end-to-end telecom solution leader, have the necessary resources, expertise, and experience to contribute significantly. For example, we can bring our global expertise in SDOs to the Modeling project, and we can involve our key customers to discuss use cases, requirements, and POCs/trials. Huawei believes an open, healthy ecosystem will enlarge the total market and ultimately benefit Huawei’s business.
Linux.com: What benefits do you see in being involved in the ONAP community?
Bill: We learned a lot. ONAP brings a really good architecture for network automation, and this will benefit our related products. ONAP brings operators and vendors together, and this will help us understand requirements much better. ONAP will even bring a chance to try some new business models in certain areas, like services or cloudification. I believe we will see more and more benefits over time.
Linux.com: Your keynote at Open Networking Summit is “Make Infrastructure Relevant to a Better Future.” Explain that please. What has Huawei done along these lines and how well is it working?
Bill: Yes. Building an open ecosystem and accelerating operational transformation is our industry strategy. Infrastructure operators need operational transformation to be more deeply relevant to a better digital, intelligent society, and open source is the strategic tool for that. My keynote at ONS will address this point.
Basically, we believe all partners in our industry, including SDOs and open source projects, operators and vendors, can work together to build an open and intent-driven, cloud-friendly network to empower the digital life and vertical digitalization. I am happy to see that most network-related open source projects are now merged into the Linux Foundation Networking (LFN) umbrella and that SDOs like MEF/TMF are cooperating with LFN. I would say it is moving in the right direction.
Linux.com: What are your thoughts on the Linux Foundation Networking umbrella overall?
Bill: I look forward to LFN accelerating the building of the open source networking ecosystem and telco operational transformation. I would like to see LFN work out a clear technical vision, a flexible full-stack architecture, cross-domain common models, harmonized SDO cooperation, and faster production and field trials. I recommend LFN set up a strong Technical Advisory Committee (TAC), a unified use case committee, and unified verification programs. I believe our industry has found a better way to work together, and I look forward to another year of rapid change and success for our industry.
This article was sponsored by Huawei and written by Linux.com.