Posts

open source summit

Submit your proposal to speak at Open Source Summit Europe in Edinburgh. Proposals due July 1.

Share your expertise and speak at Open Source Summit Europe in Edinburgh, October 22 – 24, 2018. We are accepting proposals through Sunday, July 1, 2018.

Open Source Summit Europe is the leading technical conference for professional open source. Join developers, sysadmins, DevOps professionals, architects, and community members to collaborate, learn about the latest open source technologies, and gain a competitive advantage by using innovative open solutions.

As open source continues to evolve, so does the content covered at Open Source Summit. We’re excited to announce all-new tracks and content that make our conference more inclusive and feature a broader range of technologies driving open source innovation today.

This year’s tracks/content will cover the following:

  • Cloud Native Apps/Serverless/Microservices
  • Infrastructure & Automation (Cloud/Cloud Native/DevOps)
  • Artificial Intelligence & Data Analytics
  • Linux Systems
  • Open Collaboration and Diversity Empowerment
  • Emerging Open Technologies/Wildcard
  • Innovation at Apache
  • TODO/Open Source Program Management

View Full List of Suggested Topics & Submit Now >>

Get Inspired!

Watch presentations from Open Source Summit Europe 2017.

View All Open Source Summit 2017 Keynotes >>


software defined networking

Wendy Cartee, Nick McKeown, Guru Parulkar, and Chris Wright discuss the first 10 years of software defined networking at Open Networking Summit North America.

In 2008, if you wanted to build a network, you had to build it from the same switch and router equipment that everyone else had, according to Nick McKeown, co-founder of Barefoot Networks, speaking as part of a panel of networking experts at Open Networking Summit North America.

Equipment was closed, proprietary, and vertically integrated with features already baked in, McKeown noted. And, “network management was a dirty word. If you wanted to manage a network of switches, you had to write your own scripts over a lousy, cruddy CLI, and everybody had their own way of doing it in order to try to make their network different from everybody else’s.”

All this changed when Stanford University Ph.D. student Martin Casado had the bold idea to rebuild the Stanford network out of custom-built switches and access points, he said.

Separate Planes

“Martin just simply showed that if you lift the control up and out of the switches, up into servers, you could replace the 2,000 CPUs with one CPU centrally managed, and it would perform exactly how you wanted and could be administered by about 10 people instead of 200. And you could implement the policies of a large institution directly in one place, centrally administered,” said McKeown.

That led to the birth of The Clean Slate program and, shortly afterward, Kate Green from MIT Technology Review coined the term Software Defined Networking (SDN), he said.

“What seemed like a very simple idea, to just separate the control plane from the forwarding plane, define a protocol that is OpenFlow, and enable the research community to build new capabilities and functionality on top of that control plane … caught the attention of the research community and made it very, very easy for them to innovate,” said Guru Parulkar, executive director of the Open Networking Foundation.

On the heels of that came the idea of slicing a production network using OpenFlow and a simple piece of software, he said. In one slice you could run a production network, and in another slice you could run an experimental network and show the new capabilities.
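
To make that control/data-plane split concrete, here is a minimal, purely illustrative Python sketch (not OpenFlow or any real controller API; every name in it is invented): each switch keeps only a flow table that it matches packets against, while a single, centrally managed controller computes the forwarding rules and installs them into every switch.

    # Conceptual sketch only: a central control plane computes forwarding
    # rules and pushes them into "dumb" switches, which merely match
    # packets against their installed flow tables.

    class Switch:
        """Data plane: match incoming packets against installed rules."""
        def __init__(self, name):
            self.name = name
            self.flow_table = {}  # destination address -> output port

        def install_rule(self, dst, out_port):
            self.flow_table[dst] = out_port

        def forward(self, packet):
            port = self.flow_table.get(packet["dst"])
            if port is None:
                return f"{self.name}: drop (no matching rule)"
            return f"{self.name}: forward to port {port}"

    class Controller:
        """Control plane: one central program with a global view."""
        def __init__(self, switches):
            self.switches = switches

        def apply_policy(self, routes):
            # routes: {destination: {switch_name: out_port}}, decided in one place
            for dst, hops in routes.items():
                for sw_name, port in hops.items():
                    self.switches[sw_name].install_rule(dst, port)

    switches = {name: Switch(name) for name in ("s1", "s2")}
    controller = Controller(switches)
    controller.apply_policy({"10.0.0.2": {"s1": 2, "s2": 1}})
    print(switches["s1"].forward({"dst": "10.0.0.2"}))  # s1: forward to port 2
    print(switches["s2"].forward({"dst": "10.0.0.9"}))  # s2: drop (no matching rule)

A real deployment would, of course, push rules to physical switches over a southbound protocol such as OpenFlow; the point of the sketch is only that forwarding decisions live in one centrally administered program rather than in thousands of boxes.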

The notion of separating the control plane and the data plane, along with the intersection of open source and SDN, brought about a whole new way of doing networking as it became open, noted moderator Wendy Cartee, senior director of marketing, Cloud Native Applications, at VMware.

“Building all of this new virtualization technology and bringing it into enterprises and to the world at large, created a need for a type of network programmability” that was happening at the same time as the research, noted Chris Wright, vice president and CTO at Red Hat. That brought about open source tools like Open vSwitch, “so we could build a type of network topology that we needed in virtualization.”

Confluence of Events

In the beginning, there was much hype about SDN and disaggregation and OpenFlow, Wright said. But, he continued, it’s not about a particular tool or a protocol, “it’s about a concept, and the concept is about programmability of the network, and open source is a great way to help develop skills and advance the industry with a lot of collaborative effort.”

There was a confluence of events: taking some core tenets from research, creating open source projects for people to collaborate around and solve real engineering problems for themselves, Wright said. “To me it’s a little bit of the virtualization, a little bit of academic research coming together at just the right time and then accelerated with open source code that we can collaborate on.”

Today, many service providers are deploying CORD (Central Office Re-architected as a Datacenter) because operators want to rebuild the network edge as 5G approaches, Parulkar observed.

“Many operators want to [offer] gigabit-plus broadband access to their residential customers,” he said. “The central offices are very old and so building the new network edge is almost mandatory.” Ideally, they want to do it with new software defined networking, open source, disaggregation, and white boxes, he added.

The Next 10 Years

Looking ahead, the networking community “risks a bit of fragmentation as we will go off in different directions,” said McKeown. So, he said, it’s important to find a balance; the common interest is in creating production-quality software from ODL, ONOS, CORD, and P4.

The overall picture is that “we’re trying to build next-generation networks,” said Wright. “What’s challenging for us as a broad industry is finding the best-of-breed ways to do that … so that we don’t create fragmentation. Part of that fragmentation is a lack of interoperability, but part of that fragmentation is just focus.”

There is still a way to go to realize the full potential of SDN, said Parulkar. But in 10 years’ time, predicted Wright, “SDN20 will be really an open source movement. I think SDN is about unlocking the potential of the network in the context of applications and users, not just the operators trying to connect … two different, separate end points.”

Wright suggested that audience members change their mindset and grow their skills, “because many of the operational practices that we see today in networks don’t translate into a software world where things move rapidly. We [need to] look at being able to make small, consistent, incremental changes rather than big-bang rollout changes. Getting involved and really being open to new techniques, new tools and new technologies … is how, together, we can create the next generation. The new Internet.”

Vint Cerf

Vint Cerf, a “Father of the Internet,” spoke at the recent Open Networking Summit. Watch the complete presentation below.

The secret behind the Internet protocol is that it has no idea what it’s carrying – it’s just a bag of bits going from point A to point B. So said Vint Cerf, vice president and chief internet evangelist at Google, speaking at the recent Open Networking Summit.

Cerf, who is generally acknowledged as a “Father of the Internet,” said that one of the objectives of this project, which was turned on in 1983, was to explore the implications of open networking, including “open source, open standards and the process by which the standards were developed, open protocol architectures, which allowed for new protocols to be invented and inserted into this layered architecture.” This was important, he said, because people who wanted to do new things with the network were not constrained to its original design but could add functionality.

Open Access

When he and Bob Kahn (co-creator of the TCP/IP protocol) were doing the original design, Cerf said, they hoped that this approach would lead to a kind of organic growth of the Internet, which is exactly what has been seen.

They also envisioned another kind of openness, that of open access to the resources of the network, where people were free both to access information or services and to inject their own information into the system. Cerf said they hoped that, by lowering the barriers to access this technology, they would open the floodgates for the sharing of content, and, again, that is exactly what happened.

There is, however, a side effect of reducing these barriers, which, Cerf said, we are living through today: the proliferation of fake news, malware, and other malicious content. It has also created a set of interesting socioeconomic problems, one of which is dealing with content in a way that allows you to decide which content to accept and which to reject, Cerf said. “This practice is called critical thinking, and we don’t do enough of it. It’s hard work, and it’s the price we pay for the open environment that we have collectively created.”

Internet Architecture

Cerf then shifted gears to talk about the properties of Internet design. “One of the most interesting things about the Internet architecture is the layering structure and the tremendous amount of attention being paid to interfaces between the layers,” he noted. There are two kinds: vertical interfaces and the end-to-end interactions that take place. Adoption of standardized protocols essentially creates a kind of interoperability among various components in the system, he said.

“One interesting factor in the early Internet design is that each of the networks that made up the Internet, the mobile packet radio net, the packet satellite net, and the ARPANET, were very different inside,” with different addressing structures, data rates and latencies. Cerf said when he and Bob Kahn were trying to figure out how to make this look uniform, they concluded that “we should not try to change the networks themselves to know anything about the Internet.”

Instead, Cerf said, they decided the hosts would create Internet packets to say where things were supposed to go. They had the hosts take the Internet packets (which Cerf likened to postcards) and put them inside an envelope, which the network would understand how to route. The postcard inside the envelope would be routed through the networks and would eventually reach a gateway or destination host; there, the envelope would be opened and the postcard would be sent up a layer of protocol to the recipient or put into a new envelope and sent on.

“This encapsulation and decapsulation isolated the networks from each other, but the standard, the IP layer in particular, created compatibility, and it made these networks effectively interoperable, even though you couldn’t directly connect them together,” Cerf explained. Every time an interface or a boundary was created, the byproduct was “an opportunity for standardization, for the possibility of creating compatibility and interoperability among the components.”
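
As a rough illustration of Cerf’s postcard-and-envelope analogy, here is a small Python sketch; the data structures and network names are invented for illustration and are not real packet formats. The IP “postcard” is never modified, only wrapped and unwrapped as it crosses each network.

    # Illustrative sketch of the postcard-and-envelope analogy (not real
    # packet formats): the IP "postcard" stays unchanged end to end, while
    # each network wraps it in an "envelope" only that network understands.

    def encapsulate(ip_packet, network):
        # Put the postcard into an envelope addressed for one specific network.
        return {"network": network, "payload": ip_packet}

    def gateway(frame, next_network):
        # At a network boundary: open the envelope, keep the postcard, re-wrap it.
        ip_packet = frame["payload"]                 # decapsulation
        return encapsulate(ip_packet, next_network)  # re-encapsulation

    postcard = {"src": "host-A", "dst": "host-B", "data": "hello"}  # the IP packet
    frame = encapsulate(postcard, "packet radio net")
    frame = gateway(frame, "ARPANET")    # crosses a very different network
    assert frame["payload"] == postcard  # the postcard is untouched end to end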

Now, routers can be disaggregated, such as in the example of creating a data plane and a control plane that are distinct and separate and then creating interfaces to those functions. Once we standardize those things, Cerf said, devices that exhibit the same interfaces can be used in a mix. He said we should “be looking now to other ways in which disaggregation and interface creation creates opportunities for us to build equipment” that can be deployed in a variety of ways.

Cerf said he likes the types of switches being built today – bare hardware with switching capabilities inside – that don’t do anything until they are told what to do. “I have to admit to you that when I heard the term ‘software-defined network,’ my first reaction was ‘It’s a buzzword, it’s marketing,’ it’s always been about software.”

But, he continued, “I think that was an unfair and too shallow assessment.” His main interest in basic switching engines is that “they don’t do anything until we tell them what to do with the packets.”

Adopting Standards

Being able to describe the functionality of the switching system and how it should treat packets, if standardized, creates an opportunity to mix different switching systems in a common network, he said. As a result, “I think as you explore the possibilities of open networking and switching platforms, basic hardware switching platforms, you are creating some new opportunities for standardization.”

Some people feel that standards are stifling and rigid, Cerf noted. He said he could imagine situations where an over-dependence on standards creates an inability to move on, but standards also create commonality. “In some sense, by adopting standards, you avoid the need for hundreds, if not thousands of bilateral agreements of how you will make things work.”

In the early days, as the Internet Engineering Task Force (IETF) was formed, Cerf said one of the philosophies they tried to adopt “was not to do the same thing” two or three different ways.

Deep Knowledge

Openness of design allows for deep knowledge of how things work, Cerf said, which creates a lot of educated engineers and will be very helpful going forward. The ability to describe the functionality of a switching device, for example, “removes ambiguity from the functionality of the system. If you can literally compile the same program to run on multiple platforms, then you will have unambiguously described the functionality of each of those devices.”

This creates a uniformity that is very helpful when you’re trying to build a large and growing and complex system, Cerf said.

“There’s lots of competition in this field right now, and I think that’s healthy, but I hope that those of you who are feeling these competitive juices also keep in mind that by finding standards that create this commonality, that you will actually enrich the environment in which you’re selling into. You’ll be able to make products and services that will scale better than they might otherwise.”

Hear more insights from Vint Cerf in the complete presentation below:

LinuxCon

Check out the new keynote speakers and executive leadership track for LC3.

Attend LC3 in Beijing, June 25 – 27, 2018, and hear from Chinese and international open source experts from Accenture, China Mobile, Constellation Research, Huawei, IBM, Intel, OFO, Xturing Biotechnology and more.

New Keynote Speakers:

  • Peixin Hou, Chief Architect of Open Software and Systems in the Central Software Institute, Huawei
  • Sven Loberg, Managing Director within Accenture’s Emerging Technology practice with responsibility for Open Source and Software Innovation
  • Evan Xiao, Vice President, Strategy & Industry Development, Huawei
  • Cloud Native Computing Panel Discussion featuring panelists from Alibaba, Huawei, IBM, Microsoft and Tencent, and hosted by Dan Kohn, Executive Director, Cloud Native Computing Foundation

View Previously Announced Keynote Speakers>>

New Executive Leadership Track:

In addition to existing tracks across technology areas including AI, Blockchain, Networking, Cloud Native and more, LC3 2018 will feature a new Executive Leadership track on Tuesday, June 26, 2018, targeted at gathering executive business leaders across Chinese technology companies to collaborate, to share learnings, and to gain insights from industry leaders including:

  • R “Ray” Wang, head of Silicon Valley-based Constellation Research and best-selling author of the Harvard Business Review Press book Disrupting Digital Business, will share practical guidance on how to jump-start growth with AI-driven smart services
  • Dr. Feng Junlan, Director of the newly founded China Mobile Artificial Intelligence and Smart Operations R&D Center, will share insights on network intelligence, intelligent operations and China Mobile’s related strategic considerations and practice
  • Chao Wang, CTO of Xturing Biotechnology, will talk about building gene sequencing tools using container technology
  • Chenyu Xue, M2M Director of OFO, will discuss the sharing economy and how OFO incorporates an open source spirit into its company philosophy
  • Deep Learning Panel Discussion featuring panelists from Baidu, Didi, Huawei, IBM, Microsoft and Tencent, and hosted by Jim Zemlin, Executive Director, The Linux Foundation

These sessions will take place following the morning keynote sessions including Sven Loberg, Accenture; Evan Xiao, Huawei; and the Cloud Native Panel Discussion.

VIEW THE FULL SCHEDULE >>

REGISTER NOW >>

Need assistance convincing your manager? Here’s a letter that can help you make the request to attend LC3. Register now to save $40USD/255RMB through June 18.

maintainer

At Embedded Linux Conference, Sony’s Tim Bird discussed some of the challenges faced by maintainers of open source projects.

What are some of the challenges open source project maintainers face? One common issue is “The Maintainer’s Paradox,” which refers to the fact that open source maintainers are presented with more ideas along with more challenges as their communities grow. This occurs even when they take very minor patches from contributors. This topic was recently tackled by Tim Bird, Senior Software Engineer at Sony, in a keynote address at the Embedded Linux Conference.

The Maintainer’s Paradox is referenced in Eric Raymond’s seminal work “The Cathedral and the Bazaar,” and Bird opened his keynote address by citing the reference. “Raymond said that with enough eyeballs, all bugs are shallow,” Bird noted, adding that the reference applies to large open source communities.

Diversity of thought

“When I do training at Sony, I use a light bulb metaphor for this,” he said. “If you have five or 10 light bulbs that are similar to each other and you turn them on, there will be some good ideas represented by those light bulbs. But if you have a thousand light bulbs of different shapes and sizes, it’s more likely that there are going to be thousands of good ideas represented. So there are probabilities involved here. It’s the diversity of thought that is important. Diversity has a lot of upside.”

“Of course diversity has costs,” he added. “It takes time to assimilate different ideas and integrate them into the existing code path.”

Bird is the maintainer of the Fuego test system, which provides a framework for testing embedded Linux. During his keynote, he provided examples of challenges that maintainers face within the context of maintaining Fuego.

Tread carefully

“I learned things becoming a maintainer,” he said. “The Maintainer’s Paradox is that the maintainer is really excited about new contributions, but there is also fear and trepidation. Sometimes when I see a patch set on the mailing list I say, ‘Oh no, another patch set.’ I just might not have time to look at it. You want to review patches carefully and give appropriate feedback, but being a maintainer is sometimes overwhelming.”

Bird displayed a large photo of a puppy as he said: “Every time you get a patch that implies a new feature branch, that is something that has to be cared for indefinitely. As a maintainer, your incentive can be to not take too many of these things.”

Bird also noted some important social dynamics involved in how maintainers interact with community members. For example, differing personalities can create challenges. “People can get frustrated, and there can be miscommunications.” Additionally, although many maintainers want to reward contributions on a meritocratic basis, it can be difficult to achieve that goal.

What are Bird’s recommendations for optimizing tasks and communications? He supplied the following tips:

  • Call out negative communication
  • Route around offenders
  • Listen carefully, actively clarify and act on feedback
  • Assist by helping others
  • Become a maintainer

Finally, for more on active management of open source projects, including free tools, check this post.

Watch the entire presentation below:

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Linux kernel

Get insights from Jon Corbet on the state of Linux kernel development.

At the recent Embedded Linux Conference + OpenIoT Summit, I sat down with Jonathan Corbet, the founder and editor-in-chief of LWN, to discuss a wide range of topics, including the annual Linux kernel report.

The annual Linux Kernel Development Report, released by The Linux Foundation, is the evolution of work Corbet and Greg Kroah-Hartman had been doing independently for years. The goal of the report is to document various facets of kernel development, such as who is doing the work, what the pace of the work is, and which companies are supporting it.

Linux kernel contributors

To learn more about the companies supporting Linux kernel development in particular, Corbet wrote a set of scripts, beginning with the kernel 2.6.20 release, to pull that information out of the kernel repository. The data helped Corbet associate contributions with employers whenever possible.
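
The interview does not go into the scripts themselves, but the general approach can be sketched roughly as follows; this is a hypothetical illustration, not Corbet’s actual tooling, and a real report also needs a hand-maintained mapping for personal email addresses, consultants, and the like.

    # Rough, hypothetical sketch of the idea (not Corbet's actual scripts):
    # count kernel changesets per author email domain as a crude proxy for
    # the employer sponsoring the work. Requires a local kernel git clone.

    import subprocess
    from collections import Counter

    def commits_by_domain(repo_path, rev_range="v2.6.20..v2.6.21"):
        emails = subprocess.run(
            ["git", "-C", repo_path, "log", "--format=%ae", rev_range],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        return Counter(email.rsplit("@", 1)[-1].lower() for email in emails)

    if __name__ == "__main__":
        for domain, count in commits_by_domain("linux").most_common(10):
            print(f"{domain:30} {count}")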

When Corbet published a report based on these findings in LWN, it created a bit of a stir. “It was a surprise to everyone, including me, because there was still this image of free software in general and Linux in particular as being something produced by kids who haven’t moved out of their parents’ basements,” said Corbet.

He found that more than 70 percent of the code going into the kernel was coming from professional developers who were getting paid to do that work. “Since then things have changed and our numbers have gotten better. Today, over 90 percent of the code is coming from professional developers who are employed by some company to work on the kernel,” he said.

Corbet has been involved with the Linux kernel from a very early stage, so connecting the dots was not too difficult, even though not all developers use official company email accounts.

“In most cases, we know who is working for which company. Sometimes people contact us and say that their employer wants to ensure that they do get credit for the work they are doing in the kernel. Sometimes we just ask who they are working for,” said Corbet.

Corbet not only gathers valuable data about the Linux kernel, he also analyzes the data to see some patterns and trends. The biggest trend, over the years, has been a decline in the number of contributions coming from volunteers, which has decreased from 15 percent to 6 percent since the 2.6.20 release.

“There are times when we have worried about it because volunteers are often the people who are in the next round of paid developers. That’s often how you get into the community — by doing a little bit of stuff on your own time,” he said. Corbet did a bit of digging to compare people’s employment status when their very first patch was merged with their latest status. He found that, at this point, most of those people were already working for some company.

While it’s true there are fewer volunteer developers now, it could also be said that people don’t remain volunteers for very long because when their code gets merged into the kernel, companies tend to approach these developers and offer jobs. So, if your code shows up in the kernel, that’s a good resume to have.

What keeps Corbet awake at night

There has been a growing concern of late that the Linux kernel community is getting older. Looking at the top maintainers, for example, you can see a lot of people who have been involved since the 1990s.

“The upper cadre is definitely getting a little bit older, a little bit grayer. There is some truth to that and I think the concerns of that are not entirely overblown,” said Corbet. “A whole bunch of us managed to stumble into something good back in the ’90s, and we have stuck with it ever since because it’s been a great ride.”

That doesn’t mean new people are not coming in. A new kernel is released every 9 to 10 weeks. And, every new release sees contributions from more than 200 developers submitting their very first patch.

“We are bringing a lot of new people into the community,” Corbet said. “Maybe half of those 200 contributors will never contribute anything again. They had one thing they wanted to fix and then they moved on. But there are many others who stick around and become long-term members of the community. Some of these have worked their way into subsystem maintainer positions. They will be replacing the older members as they retire.”

Corbet is not at all worried about the aging community, as it has evolved into an “organic” body with a continuous flow of fresh blood. It’s true that becoming a kernel developer is more demanding; you do have to work your way into it a little bit, but plenty of people are doing it.

“I’m not really worried about the future of our community because we are doing so well at attracting bright new developers,” said Corbet. “We have an influx rate that any other project would just love to have.”

However, he did admit that the community is showing increasing signs of stress at the maintainer level. “The number of maintainers is not scaling with the number of developers,” he said. This problem is not unique to the kernel community, he added; the whole free software community is facing this challenge.

Another concern for Corbet is the emergence of other kernels, such as Google’s Fuchsia. These kernels are being developed specifically to be permissively licensed, which allows them to be controlled by one or a very small number of companies. “Some of those kernels could push Linux aside in various subfields,” said Corbet. “I think some of the corporate community members have lost sight of what made Linux great and so successful. It could be useful for some companies in the short term, but I don’t think it’s going to be a good thing for anyone in the long term.”

Core needs

Corbet also noted another worrisome trend. Although many companies contribute to every kernel release, if you look closely you will see that a lot of these contributions are toward making their own hardware work great with Linux.

“It’s a great thing. We have been asking them to do it for years, but there is a whole lot of the kernel that everyone needs,” he said. There is the memory management subsystem. There’s the virtual filesystem layer. There are components of the kernel that are not tied to any single company’s hardware, and it’s harder to find companies willing to support them.

“Some of the companies that contribute the most code to the kernel do not contribute to the core kernel at all,” said Corbet.

Corbet also worries about the lack of quality documentation and has himself initiated some efforts to improve the situation. “Nobody wants to pay for documentation,” he said. “There is nobody whose job it is to write documentation for the kernel, and it really shows in the quality. So, some of those areas I think are really going to hurt us going forward. We need to get better investment there.”

You can hear more from Jon Corbet, including insights on the recent Spectre and Meltdown issues, in his presentation from Embedded Linux Conference:

You can learn more about the Linux kernel development process in the complete annual report. Download the 2017 Linux Kernel Development Report now.

OS Summit

See schedule highlights for Automotive Linux Summit and Open Source Summit Japan in Tokyo, June 20-22.

Attend Automotive Linux Summit and Open Source Summit Japan in Tokyo, June 20 – 22, for three days of open source education and collaboration.

Automotive Linux Summit connects those driving innovation in automotive Linux from the developer community, with the vendors and users providing and using the code, in order to propel the future of embedded devices in the automotive arena.

Open Source Summit Japan is the leading conference in Japan connecting the open source ecosystem under one roof, providing a forum for technologists and open source industry leaders to collaborate and share information, learn about the latest in open source technologies and find out how to gain a competitive advantage by using innovative, open solutions. The event covers cornerstone open source technology areas such as Linux, cloud infrastructure, and cloud native applications and explores the newest trends including networking, blockchain, serverless, edge computing and AI. It also offers an open source leadership track covering compliance, governance and community.

Session highlights for Automotive Linux Summit:

  • Enabling Hardware Configuration Flexibility Keeping a Unified Software – Dominig ar Foll, Intel
  • Beyond the AGL Virtualization Architecture – AGL Virtualization Expert Group (EG-VIRT) – Michele Paolino, Virtual Open Systems
  • High-level API for Smartphone Connectivity on AGL – Takeshi Kanemoto, RealVNC Ltd.
  • AGL Development Tools – What’s New in FF? – Stephane Desneux, IoT.bzh

Session highlights for Open Source Summit Japan:

  • Building the Next Generation of IoT Applications – Dave Chen, GE Digital
  • Use Cases for Permissioned Blockchain Platforms – Swetha Repakula & Jay Guo, IBM
  • Using Linux for Long Term – Community Status and the Way We Go  – Tsugikazu Shibata, NEC
  • Hitchhiker’s Guide to Machine Learning with Kubernetes – Vishnu Kannan, Google
  • OSS Vulnerability Trends and PoC 2017-2018  – Kazuki Omo, SIOS Technology, Inc.
  • Microservices, Service Mesh, and CI/CD Pipelines – Making It All Work Together  – Brian Redmond, Microsoft

View the Full Schedule >>

Register now and save $175 through April 28!

Register Now>>

Note: One registration gets you access to both Automotive Linux Summit and Open Source Summit Japan.

Linux Foundation members and LF project members receive an additional 20% discount off current registration pricing, and academic, student, non-profit, and community discounts are available as well. Email events@linuxfoundation.org to receive your discount code.

Applications for diversity and needs-based scholarships are also being accepted. Get information on eligibility and how to apply.

Open Data

NOAA is working to make all of its data available to an even wider group of people and make it more easily understood (Image: NOAA).

The goal of the National Oceanic and Atmospheric Administration (NOAA) is to put all of its data — data about weather, climate, oceans, coasts, fisheries, and ecosystems — into the hands of the people who need it most. The trick is translating the hard data and making it useful to people who aren’t necessarily subject matter experts, said Edward Kearns, NOAA’s first-ever data officer, speaking at the recent Open Source Leadership Summit (OSLS).

NOAA’s mission is similar to NASA’s in that it is science-based, but “our mission is operations; to get the quality information to the American people that they need to run their businesses, to protect their lives and property, to manage their water resources, to manage their ocean resources,” said Kearns during his talk, titled “Realizing the Full Potential of NOAA’s Open Data.”

He said that NOAA was doing Big Data long before the term was coined and that the agency has way too much of it – to the tune of 30 petabytes in its archives with another 200 petabytes of data in a working data store. Not surprisingly, NOAA officials have a hard time moving it around and managing it, Kearns said.

Data Sharing

NOAA is a big consumer of open source, and sharing everything openly is part of the organization’s modus operandi. On a global level, “the agency has been a leader for the entire United States in trying to broker data sharing among countries,” Kearns said. One of the most successful examples has been through the United Nations, with an organization called the World Meteorological Organization (WMO).

Agency officials tend to default to making their products accessible in the public domain, something Kearns said he’d like to change. By adopting some modern licensing practices, he believes NOAA could actually share even more information with the public. “The Linux Foundation has made progress on the Community Data License Agreement. This is one of the things I’d like to possibly consider adopting for our organization,” he added.

One of the great success stories the NOAA has in terms of getting critical data to the public was after Hurricane Irma hit Florida in September 2017, he said.

“As you can imagine, there were a lot of American citizens that were hungry for information and were hitting the NOAA websites very hard and data sites very hard,” he said. “Typically, we have a hard time keeping up with that kind of demand.” The National Hurricane Center is part of the NOAA, and the agency took the NHC’s website and put it on Amazon Cloud.

This gave the agency the ability to handle over a billion hits a day during the peak hurricane season. But, he continued, “we are still … just starting to get into how to adopt some of these more modern technologies to do our job better.”

Equal Access

Now the NOAA is looking to find a way to make the data available to an even wider group of people and make it more easily understood. Those are their two biggest challenges: how to disseminate data and how to help people understand it, Kearns said.

“We’re getting hammered every day by a lot of companies that want the data… and we have to make sure everybody’s got an equal chance of getting the data,” he said.

This is becoming a harder job because demand is growing exponentially, he said. “Our costs are going up because we need more servers, we need more networks,” and it’s a problem due to budget constraints.

The agency decided that partnering with industry would help facilitate the delivery of data.

The NOAA is going into the fourth year of a deal it signed with Amazon, Microsoft, IBM, Google, and a nonprofit out of the University of Chicago called the Open Commons Consortium (OCC), Kearns said. The agreement is that NOAA data will remain free and open, and the OCC will host it at no cost to taxpayers and monetize services around the data.

The agency is using an academic partner acting as a data broker to help it “flip this data and figure out how to drop it into all of our collaborators’ cloud platforms, and they turn it around and serve many consumers from that,” Kearns explained. “We went from a one-to-many model to a one-to-a-few-to-many model of distribution.”

People trust NOAA’s data today because they get it from a NOAA data service, he said. Now the agency is asking them to trust the NOAA data that exists outside the federal system on a partner system.

On AWS alone, the NOAA has seen the number of people using the data more than double, he said. The agency, in turn, has seen a 50 percent reduction in hits on the NOAA servers.

Google has loaded a lot of the agency’s climate data to its BigQuery data warehouse, “and they’ve been able to move petabytes of this data just in a few months, just because the data now has been loaded into a tool people are already using.”

This “reduces that obstacle of understanding,” Kearns noted. “You don’t have to understand a scientific data format, you can go right into BigQuery… and do analyses.”
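
To give a sense of what that looks like in practice, here is a minimal sketch using the google-cloud-bigquery client against NOAA’s public surface-observation data in BigQuery; the dataset, table, and column names (bigquery-public-data.noaa_gsod.gsod2017, stn, mo, temp) reflect my understanding of that public dataset and should be treated as assumptions rather than documentation.

    # Minimal sketch: analyze NOAA weather observations directly in BigQuery,
    # with no scientific file formats to decode first. Dataset, table, and
    # column names are assumptions about the public noaa_gsod dataset.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses your default Google Cloud credentials

    query = """
        SELECT stn, AVG(temp) AS avg_temp_f, COUNT(*) AS observations
        FROM `bigquery-public-data.noaa_gsod.gsod2017`
        WHERE mo = '09'                 -- September 2017 (Hurricane Irma)
        GROUP BY stn
        ORDER BY observations DESC
        LIMIT 10
    """

    for row in client.query(query):     # iterating waits for the results
        print(row.stn, round(row.avg_temp_f, 1), row.observations)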

Data Trust

Being able to trust data is also an important component of any shared initiative, and through the NOAA’s Big Data Project, the agency is seeking ways of ensuring that the trust that comes with the NOAA brand is conveyed with the data, he said, so people continue to trust it as they use it.  

“We have a very proud history of this open data leadership, we’re continuing on that path, and we’re trying to see how we can amplify that,” Kearns said.

NOAA officials are now wondering whether making the data available through these modern cloud platforms will make it easier for users to create information products for themselves and their customers.

“Of course, we’re also looking for other ways of just doing our business better,” he added. But the agency wants to figure out whether it makes sense to continue this experiment with its partners. That, he said, they will likely know by early next year.

Watch the complete presentation below:

OS Summit Japan

The first round of keynotes has been announced for Automotive Linux Summit and Open Source Summit, happening in Tokyo June 20-22.

The first round of keynotes for Automotive Linux Summit & Open Source Summit Japan has been announced. Join us June 20 – 22, 2018, in Tokyo to hear from:

  • Brian Behlendorf, Executive Director, Hyperledger
  • Dan Cauchy, Executive Director, Automotive Grade Linux
  • Seiji Goto, Manager of IVI Advanced Development, Mazda Motor Corporation
  • Mitchell Hashimoto, Founder & CTO, HashiCorp
  • Kelsey Hightower, Developer Advocate, Google
  • Greg Kroah-Hartman, Linux Kernel Maintainer
  • Ken-ichi Murata, Group Manager/Project General Manager, Connected Strategy & Planning Group, Connected Company, and Masato Endo, Program Manager, Connected Vehicle Group, Intellectual Property Division, Toyota Motor Corporation
  • Michelle Noorali, Senior Software Engineer, Microsoft
  • Linus Torvalds, Creator of Linux & Git, in conversation with Dirk Hohndel, VP & Chief Open Source Officer, VMware
  • Jim Zemlin, Executive Director, The Linux Foundation

Automotive Linux Summit connects those driving innovation in automotive Linux from the developer community with the vendors and users providing and using the code, in order to propel the future of embedded devices in the automotive arena.

Open Source Summit Japan is the leading conference in Japan connecting the open source ecosystem under one roof, providing a forum for technologists and open source industry leaders to collaborate and share information, learn about the latest in open source technologies and find out how to gain a competitive advantage by using innovative, open solutions. The event covers cornerstone open source technology areas such as Linux, cloud infrastructure, and cloud native applications and explores the newest trends including networking, blockchain, serverless, edge computing and AI. It also offers an open source leadership track covering compliance, governance and community.

The full Automotive Linux Summit and Open Source Summit Japan schedules will be published next week. Register now to save $175 through April 28.

REGISTER NOW >>

Note: One registration gets you access to both Automotive Linux Summit and Open Source Summit Japan.

Calm technology

By 2020, 50 billion devices will be online. That projection was made by researchers at Cisco, and it was a key point in Amber Case’s Embedded Linux Conference keynote address, titled “Calm Technology: Design for the Next 50 Years,” which is now available for replay.

Case, an author and fellow at Harvard University’s Berkman Klein Center, referred to the “Dystopian Kitchen of the Future” as she discussed so-called smart devices that are invading our homes and lives, even though the way they are implemented is not always so smart. “Half of it is hackable,” she said. “I can imagine your teapot getting hacked and someone gets away with your password. All of this just increases the surface area for attack. I don’t know about you, but I don’t want to have to be a system administrator just to live in my own home.”

Support and Recede

Case also discussed the era of “interruptive technology.” “It’s not just that we are getting text messages and robotic notifications all the time, but we are dealing with bad battery life, disconnected networks and servers that go down,” she said. “How do we design technology for sub-optimal situations instead of the perfect situations that we design for in the lab?”

“What we need is calm technology,” she noted, “where the tech recedes into the background and supports us, amplifying our humanness. The only time a technology understands you the first time is in Star Trek or in films, where they can do 40 takes. Films have helped give us unrealistic expectations about how our technology understands us. We don’t even understand ourselves, not to mention the person standing next to us. How can technology understand us better than that?”

Case noted that the age of calm technology was referenced long ago at Xerox PARC, by early ubiquitous computing researchers, who paved the way for the Internet of Things (IoT). “What matters is not technology itself, but its relationship to us,” they wrote.

7 Axioms

She cited this quote from Xerox researcher Mark Weiser: “A good tool is an invisible tool. By invisible, we mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.”

Case supplied some ordered axioms for developing calm technology:

  1. Technology shouldn’t require all of our attention, just some of it, and only when necessary.
  2. Technology should empower the periphery.
  3. Technology should inform and calm.
  4. Technology should amplify the best of technology and the best of humanity.
  5. Technology can communicate, but it doesn’t need to speak.
  6. Technology should consider social norms.
  7. The right amount of technology is the minimum amount to solve the problem.

In summing up, Case said that calm technology allows people to “accomplish the same goal with the least amount of mental cost.” In addition to her presentation at the Embedded Linux Conference, Case also maintains a website on calm technology, which offers related papers, exercises and more.

Watch the complete presentation below: