
This article originally appeared on the Open Mainframe Project’s blog. It was written by Earl Dixon, Principal Client Services Manager at Broadcom

making our strong community stronger: a collaborative initiative

After watching the first Making Our Strong Community Stronger panel on “How Personal Experiences Shape Corporate Inclusion,” I was very interested in the topic and engaged my management team to see what I could do to help in the effort. As a result, I was given the opportunity to participate in the second panel discussion, focused on unmasking in the workplace. I was very eager to participate, as I felt the panel would be a great way for me to share my experiences.

As we started to discuss the structure and questions, I did get a little nervous.  I would be going from “not unmasking at work” to “unmasking” for my peers, management, and others in the industry.  We had a dry run for the panel, and I left it even more nervous.  The other panelists (outside of my peers) were executives and managers who were white and had no issue with unmasking at work.  It was intimidating, but as I talked with Dr. Chance about my feelings, she made me feel more comfortable about moving forward.  As the days wound down closer to the event, I actually grew nervously excited.  Once that day came, I wanted to make sure that my story would be told the way that I needed it to be told. I wanted my story to be real and give an understanding of what it is like being a black man coming up in a white-dominated field.

Much to my surprise, the panel went very well, and immediately after doing it, I felt a great sense of relief. It was as if a weight had been lifted off my shoulders.  The experience was very therapeutic for me.  The next day, the Making Our Strong Community Stronger initiative hosted a Town Hall for attendees who had watched the panel discussion live and wanted to ask questions or provide feedback.  I felt even more confident in answering the questions posed by the audience, and it actually made me feel even better that I had been involved.

For the next few days, I received numerous emails, LinkedIn notes, and friend requests from individuals who applauded the webinar and the conversation we were able to have. I also heard stories from others who had similar experiences. Someone even asked me to discuss how my experiences could help him better understand how to support his diverse workforce.

In fact, I met with some of my own management team to discuss what they could be doing better from a DEI perspective. Having leaders from my company ask me questions and listen to what I had to say gave me a sense of appreciation that what we did in our panel was not only being heard, but real action was also occurring to help others coming into this mainframe space to work.

Overall, I am proud of the fact that I was able to participate in the DEI panel and look forward to doing more to help with DEI in the future.  It was a pleasure to work with Dr. Chance and her team in this effort to bring awareness to the truly important DEI issues that go unnoticed in the industry.

brian behlendorf testifying at a U.S. House hearing

This post originally appeared on OpenSSF’s blog

On Wednesday, May 11, 2022, Brian Behlendorf, OpenSSF General Manager, testified before the United States House of Representatives Committee on Science, Space, and Technology. Brian’s testimony shares the work being done within the Open Source Security Foundation and the broader open source software community to improve the security and trustworthiness of open source software.

A copy of Brian’s written remarks is below and linked here (PDF). Visit the Committee’s website to view a recording of the hearing.

Also testifying at the hearing were:

May 9th, 2022 

The Honorable Eddie Bernice Johnson, Chairwoman
The Honorable Frank Lucas, Ranking Member
Committee on Science, Space, and Technology
2321 Rayburn House Office Building
Washington, DC 20515-6301 

Dear Chairwoman Johnson, Congressman Lucas, and distinguished members of the Committee on Science, Space and Technology, 

Thank you for your invitation to address you today, and the opportunity to share with you the work being done within the Open Source Security Foundation and the broader open source software community to raise the level of security and trustworthiness of open source software. 

  1. What are the consequences of insecure open-source software and what is industry as a whole, and the Open Source Security Foundation in particular, doing to tackle such vulnerabilities? 

Open source software (“OSS”) has become an integral part of the technology landscape, as inseparable from the digital machinery of modern society as bridges and highways are from the physical equivalent. According to one report, typically 70% to 90% of a modern application “stack” consists of pre-existing OSS, from the operating system to the cloud container to the cryptography and networking functions, sometimes up to the very application running your enterprise or website. Thanks to copyright licenses that encourage no-charge re-use, remixing, and redistribution, OSS encourages even the most dogged of competitors to work together to address common challenges, saving money by avoiding duplication of effort, moving faster to innovate upon new ideas and adopt emerging standards. 

However, this ubiquity and flexibility can come at a price. While OSS generally has an excellent reputation for security, the developer communities behind those works can vary significantly in their application of development practices and techniques that can reduce the risk of a defect in the code, or in responding quickly and safely when one is discovered by others. Often, developers trying to decide what OSS to use have difficulty determining which ones are more likely to be secure than others based on objective criteria. Enterprises often don’t have a well-managed inventory of the software assets they use, with enough granular detail, to know when or if they’re vulnerable to known defects, and when or how to upgrade. Even those enterprises who may be willing to invest in increasing the security of the OSS they use often don’t know where to make those investments, nor their urgency relative to other priorities. 

There are commercial solutions to some of these problems. There are vendors like GitLab or Red Hat who sell support services for specific open source software, or even entire aggregate distributions of OSS. There are other vendors, like Snyk and Sonatype, who sell tools to help enterprises track their use of OSS and flash an alert when there is a new critical vulnerability in software running deep inside an enterprise’s IT infrastructure.

However, fighting security issues at their upstream source – trying to catch them earlier in the development process, or even reduce the chances of their occurrence at all – remains a critical need. We are also seeing new kinds of attacks that focus less on vulnerabilities in code, and more on the supply chain itself – from rogue software that uses “typosquatting” on package names to insert itself unexpectedly into a developer’s dependency tree, to attacks on software build and distribution services, to developers turning their one-person projects into “protest-ware” with likely unintended consequences. 

To address the urgent need for better security practices, tools, and techniques in the open source software ecosystem, a collection of organizations with deep investments in the OSS ecosystem came together in 2020 to form the Open Source Security Foundation, and chose to house that effort at the Linux Foundation. This public effort has grown to hundreds of active participants across dozens of different public initiatives housed under seven working groups, with funding and partnership from over 75 different organizations, and reaching millions of OSS developers. 

The OpenSSF’s seven working groups are: 

  1. Best Practices for Open Source Developers: This group works to provide open source developers with best practices recommendations, and easy ways to learn and apply them. Among other things, this group has developed courseware for teaching developers the fundamentals of secure software development, and implemented the OpenSSF Best Practices Badge program. 
  2. Securing Critical Projects: This group exists to identify and help to allocate resources to secure the critical open source projects we all depend on. Among other things, this has led to a collaboration with Harvard Business School to develop a list of the most critical projects. 
  3. Supply Chain Integrity: This group is helping people understand and make decisions on the provenance of the code they maintain, produce and use. Among other things, this group has developed a specification and software called “SLSA”, for describing and tracking levels of confidence in a software supply chain. 
  4. Securing Software Repositories: This group provides a collaborative environment for aligning on the introduction of new tools and technologies to strengthen and secure software repositories, which are key points of leverage for security practices and the promotion to developers of more trustworthy software. 
  5. Identifying Security Threats in Open Source Projects: This group enables informed confidence in the security of OSS by collecting, curating, and communicating relevant metrics and metadata. For example, it is developing a database of all known security reviews of OSS. 
  6. Security Tooling: This group’s mission is to provide the best security tools for open source developers and make them universally accessible. Among other activities, this group has released code to better enable a security testing technique called “fuzzing” among open source projects. 
  7. Vulnerability Disclosures: This group is improving the overall security of the OSS ecosystem by helping advance vulnerability reporting and communication. For example, this group has produced a Guide to Coordinated Vulnerability Disclosure for OSS. 

There are also a series of special projects under the OpenSSF worthy of special mention: 

  • Project sigstore: an easy-to-use toolkit and service for signing software artifacts, ensuring that the software you are holding is the same as what the developer intended, addressing a wide array of supply chain attacks. 
  • The Alpha-Omega Project: an effort to systematically search for new vulnerabilities in open source code, and work with critical open source projects to improve their vulnerability handling and other security practices. 
  • The GNU Toolchain Initiative: this effort supports the build ecosystems for perhaps the most critical set of developer libraries and compilers in the world, the GNU Toolchain, as a means to ensure its safety and integrity. 

All the above efforts are public-facing and developed using the best practices of open source software communities. Funding from our corporate partners goes towards supporting the core staff and functions that enable this community, but all the substance comes from voluntary efforts. In some cases funds flow to assist with specific efforts – for example, recently the Alpha-Omega project decided to allocate funding towards the NodeJS community to augment its security team with a part-time paid employee and to fund fixes for security issues. 

The Linux Foundation has also begun to adapt its “LFX” platform, a set of services designed to support the open source communities hosted by the Foundation, to incorporate security-related data such as vulnerability scans from Snyk and BluBracket, along with information from the OpenSSF Best Practices Badge program and the OpenSSF Security Scorecards initiative, to provide a unified view of the security risks in a particular collection of open source code, and what maintainers and contributors to those projects can do to improve those scores and reduce those risks. We expect to see more kinds of risk-related data coming into a unified view like this, helping developers and enterprises make better decisions about what open source components and frameworks to use, and how to reduce risk for those components they depend upon. 

Guiding all of this is a deep conviction among the OpenSSF community that while there are many different ways in which security issues manifest themselves in the OSS ecosystem, every one of them is addressable, and that there are lots of opportunities for investment and collective action that will pay a return many times over in the form of lower risk of a future major vulnerability in a widely-used package, and lesser disruption if one is discovered. 

Other efforts at the Linux Foundation include “Prossimo”, an effort focused on moving core Internet-related services to “memory-safe” languages like Rust, Go, or Java, which would eliminate an entire category of vulnerabilities that other languages allow too easily. Another is the SPDX standard for Software Bill of Materials (“SBOMs”), addressing the needs identified by White House Executive Order 14028 in a vendor-neutral and open way. 
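To illustrate the class of defect at stake, here is a minimal Rust sketch (not drawn from Prossimo’s code; the buffer and values are hypothetical) showing how the language’s checked array access turns a potential out-of-bounds read into a recoverable `None` or a defined panic, rather than the silent memory corruption an unchecked language permits:

```rust
fn main() {
    let buf: [u8; 4] = [10, 20, 30, 40];

    // Checked access: an in-range index yields Some(&value)...
    assert_eq!(buf.get(2), Some(&30));

    // ...while an out-of-range index yields None instead of
    // reading adjacent memory, as an unchecked C pointer
    // dereference could.
    assert_eq!(buf.get(10), None);

    // Direct indexing (buf[10]) is also bounds-checked at
    // runtime: it would abort with a defined panic rather than
    // silently corrupting memory.
    println!("all checked reads behaved as expected");
}
```

In a memory-unsafe language, the equivalent out-of-bounds read is undefined behavior, which is the root cause of many of the buffer overflow vulnerabilities these initiatives aim to eliminate.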

This is by no means a comprehensive list of all such efforts in the OSS ecosystem to improve security. Every OSS foundation either has a security team in operation today or is scrambling to identify volunteers and funding to establish one. There is a greater emphasis today than I’ve seen in my 30 years of using and contributing to OSS (since before it was called OSS) on the importance of such efforts. Clear metrics for progress are elusive since we lack clear metrics for evaluating software risk; in fact developing ways to measure and represent that risk is a key priority for OpenSSF. We will never see a time when open source software is free from security defects, but we are getting better at determining the tools and techniques required to more comprehensively address the risk of vulnerabilities in open source code. Scaling up those tools and techniques to address the tens of thousands of widely used OSS components and to get them more quickly updated remains a challenge. 

  2. How can the Federal government improve collaboration with industry to help secure open-source software? 

I’ll focus here on principles and methods for collaboration that will lead to more secure OSS, and save specific opportunities for collaboration for question 3. 

First, focus on resourcing long-term personal engagements with open source projects. 

Over the last few years, we have seen a healthy degree of engagement by the Federal government with OSS projects and stakeholders on the topic of improving security. The push established by Executive Order 14028 for the adoption of SBOMs aligned nicely with the standardization and growing adoption of the SPDX standard by a number of OSS projects, but it was aided substantially by the involvement of personnel from NIST, CISA, and other agencies engaging directly with SPDX community members. 
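For readers unfamiliar with the format, an SPDX SBOM can be as simple as a tag-value document listing each component, its version, and its license; the document and package details below are hypothetical:

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom

PackageName: libexample
SPDXID: SPDXRef-Package-libexample
PackageVersion: 1.4.2
PackageDownloadLocation: NOASSERTION
PackageLicenseConcluded: Apache-2.0
```

Tooling can match entries like these against vulnerability databases, which is what turns an SBOM into an answer to the “when or if they’re vulnerable” question raised earlier.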

Often the real secret to a successful OSS effort is in the communities of different stakeholders that come together to create it – the software or specification is often just a useful byproduct. The Federal government, both through its massive use of open source code and the role that it traditionally performs in delivering and protecting critical infrastructure, should consider itself a stakeholder, and like other stakeholders prioritize engagement with upstream open source projects of all sizes. That engagement need not be so formal; most contributors to open source projects have no formal agreement covering that work aside from a grant of intellectual property in those contributions. But as they say, “history is made by those who show up.” If the IT staff of a Federal agency (or of a contractor under a Federal contract) were authorized and directed to contribute to the security team of a critical open source project, or to addressing known or potential security issues in important code, or to participating in an OpenSSF working group or project, that would almost certainly lead to identifying and prioritizing work that would result in enhanced security in the Federal government’s own use of open source code, and likely to upstream improvements that make OSS more secure for everyone else. 

Second, engage in OSS development and security work as a form of global capacity building, and in doing so, of global stability and resilience. OSS development is inherently international and has been since its earliest days. Our adversaries and global competitors use the same OSS that we do, by and large. When our operating systems, cloud containers, networking stacks, and applications are made more secure, there are fewer chances for rogue actors to cause disruption, which can make it easier to de-escalate tensions and protect the safety of innocent parties. Government agencies in France, Taiwan, and elsewhere have begun to establish funded offices focused on the adoption, development, and promotion of OSS, in many ways echoing the Open Source Program Offices being set up by companies like Home Depot and Walmart or intergovernmental agencies like the WHO. The State Department in recent years has funded the development of software like Tor to support the security needs of human rights workers and global activists. The Federal government could use its convening authority and statecraft to bring like-minded activities and investment together in a coordinated way more effectively than any of us in the private sector can. 

Third, many of the ideas for improving the security of OSS involve establishing services – services for issuing keys to developers as Project sigstore does, services for addressing the naming of software packages for SBOMs, services for collecting security reviews, or services providing a comprehensive view of the risk of open source packages. Wherever possible, the Federal government should avoid establishing such services itself when suitable instances are being built by the OSS community. Instead of owning or operating such services directly, the Federal government should provide grants or other resources to their operators, as any major stakeholder would. Along similar lines, should the Federal government fund activities like third-party audits of an open source project, or fund fixes or improvements, it should ensure not only that such efforts don’t duplicate work already being done, but also that the results of that work are shared publicly and upstream, with a minimum of delay, so that everyone can benefit from that investment. 

These three approaches to collaboration would have an outsized impact on any of the specific efforts that the Federal government could undertake. 

  3. Where should Congress or the Administration focus efforts to best support and secure the open-source software ecosystem as a whole? 

The private sector and the Federal government have a common cause in seeing broad improvements in the security of OSS. I’m happy to share where I see the private sector starting to invest in enhanced OSS security, in the hopes that this may inspire similar actions from others. 

  1. Education. Very few software developers ever receive a structured education in security fundamentals, and often must learn the hard way about how their work can be attacked. The OpenSSF’s Secure Software Fundamentals courses are well regarded and themselves licensed as open source software, which means educational institutions of all kinds could deliver the content. Enterprises could also start to require it of their own developers, especially those who touch or contribute to OSS. There must be other techniques for getting this content into more hands and certifications against it into more processes. 
  2. Metrics and benchmarks. There are plenty of efforts to determine suitably objective metrics for characterizing the risks of OSS packages. But running the cloud systems to perform that measurement across the top 100,000 or even 10,000 open source projects may cost more than what can be provided for free by a single company, or may be fragile if only provided by a single vendor. Collective efforts funded by major stakeholders are being planned now, and governments as partners in that would not be turned away. 
  3. Digital signatures. There is a long history of U.S. Government standards for identity proofing, public key management, signature verification, and so on. These standards are very sophisticated, but in open source circles, often simplicity and support are more important. This is pulling the open source ecosystem towards Project sigstore for the signing of software artifacts. We would encourage organizations of all sorts to look at sigstore and consider it for their OSS needs, even if it may not be suitable for all identity use cases. 
  4. Research and development investments into memory-safe languages. As detailed above, there are opportunities to eliminate whole categories of defects for critical infrastructure software by investing in alternatives written in memory-safe languages. This work is being done, but grants and investments can help accelerate that work. 
  5. Fund third-party code reviews for top open source projects. Most OSS projects, even the most critical ones, never receive the benefit of a formal review by a team of security experts trained to review code not only for small bugs that may lead to big compromises, but to look at architectural issues and even issues with the features offered by the software in the search for problems. Such audits vary tremendously in cost based on the complexity of the code, but a typical review of an average-sized code base would be $150K-$250K. Covering the top 100 OSS projects with a review every other year, or even 200 every year, seems like a small price compared to the costs to US businesses of remedying or cleaning up after a breach caused by just one bug. 
  6. Invest into better supply chain security support in key build systems, package managers, and distribution sites. This is partly about seeing technologies like SBOMs, digital signatures, specifications like SLSA and others built into the most widely used dev tools so that they can be adopted and meaningfully used with a minimum of fuss. Any enterprise (including the Federal government) that has software certification processes based on the security attributes of software should consider how those tools could be enhanced with the above technologies, and automate many processes so that updates can be more frequent without sacrificing security. 

These activities, if done at sufficient scale, could dramatically lower the risks of future disruptive events like those we have seen. As a portfolio of different investments and activities they are mutually reinforcing; none of them in isolation is likely to have as much of a positive impact. Further econometrics research could help quantify the specific reduction of risk from each activity. But I believe that each represents a very cost-effective target for enhancing security in OSS no matter who is writing the check. 

Thank you again for the opportunity to share these thoughts with you. I look forward to answering any questions you may have or providing you with further information. 


Brian Behlendorf
General Manager, Open Source Security Foundation
The Linux Foundation

in memory of shubhra kar

This past week, we lost our dear friend, colleague, and a true champion of the open source community. Our CTO, Shubhra Kar, passed away suddenly while he was with his entire LF family at our first in-person, all-hands gathering since before the pandemic. 

Those who had the honor to work with him will know he was a special leader and a wonderful human being.  Above all, Shubhra was the kind of leader who quickly passed the credit for accomplishments to his team over himself. His humble spirit and ever-present smile were admired by all around him. He was so proud of the world-class team he had built here, and did that in part with engineers who followed him from one organization to another throughout his career.

We also knew Shubhra as a selfless leader – one who was more interested in the work than the reward. At the same time, he was incredibly ambitious – wanting to build a platform that would not only transform The Linux Foundation but support open source development communities around the world.  This was the week his team unveiled significant new enhancements across the LFX platform. It was a project he led from vision to reality, after many – even members of his own team – had told him the path to success was impossible. He was a transformational leader who has left his legacy here.

While he was passionate about his work and his team, he loved his family even more. In fact, his children were often spotted behind him during video calls throughout the day. He was a fantastic husband and father, and we are so grateful for his wife, son, and daughter sharing him with us. 

Sharing Memories

Our thoughts and prayers remain with Shubhra’s family in this incredibly difficult time. If you would like to leave a memorial message for Shubhra, please submit a pull request on GitHub here. His family would love to hear from you and especially appreciates stories that are shared of his life and career.

Memorial Fund

The Linux Foundation has made arrangements with the family to establish Shubhra’s memorial fund that will provide support for his family and his children’s education.  Donations can be made to the family here.


call for code 2022

I am always amazed at the impact we all have coming together, using our collective talents for good. Combining our collective brain power, skills, time, and resources produces stellar results – maybe it is better rendering management for films that entertain with mind-bending CGI, or improving automated software testing and deployment so developers can spend more time on innovation. Human ingenuity is amazing! 

Imagine our impact when we come together for good. When we see communities who need a collective leg up in life, or when we see injustice and foresee ways to balance the scale, or when we see the devastation in the wake of natural disasters and know there is a better way. We want to make the lives of everyone better – it might seem daunting, but innovation is bred from not knowing what you can’t do. 

Facilitating this drive to help is what the Call for Code® project is about. It is about “creating and deploying open source technologies to tackle some of the world’s greatest challenges.” It is about thinking beyond yourself – using your talents to help others. 

Call for Code was created by David Clark Cause with Founding Partner IBM and in partnership with United Nations Human Rights and The Linux Foundation. The goal is to inspire “developers to create practical, effective, and high-quality applications that can have an immediate and lasting impact on humanitarian issues as sustainable open source projects.” The Linux Foundation helps take the raw innovation and put in place the right tools to enable an impact across the world: instill best practices, engage external partners, provide feedback, and test them in the real world.

Call for Code 2022

The Call for Code 2022 is now open for registration. The focus this year is on sustainability. Do you have an idea to improve sustainable production, consumption, and management of resources, reduce pollution, and protect biodiversity? Keep reading. Don’t have a world-changing idea? Keep reading – you just might light a spark of ingenuity. 

For this year, specifically, your solution should address: carbon emissions; clean energy; supply chain transparency and traceability; water scarcity and quality; reducing waste footprints; biodiversity; food insecurity; and education access and job opportunities to further environmental justice. And, no, this isn’t just for software developers. Each well-rounded team needs builders, designers, communicators, and humanitarians.  

There is a total of $285,000 in prizes, all winners will receive open source support from The Linux Foundation, and all participants will receive a variety of support, such as IBM Cloud services, accelerators, expert webinars, mentors, and more.

Registration opened April 26, 2022 and final submissions are due October 31, 2022. Visit callforcode.org for detailed information and requirements and to register. 

Call for Code 2021 Winners

Do you still need some inspiration? Take a few minutes to read about the 2021 winners. Half of the projects focus on racial justice – and those are the ones I want to take a moment to highlight. If you see one that inspires you, click through to learn more and for ways you can contribute: 

Fair Change allows people to easily record public safety incidents in a safe and secure way with a goal of more transparency, reeducation, and reform. 

TakeTwo utilizes machine learning to highlight potentially racially insensitive language on websites you are browsing in Chrome. 

Legit-Info provides information on policy proposals at various levels of government. It communicates the potential impact without legalese and facilitates sharing opinions with policy makers. It also gives policy makers visibility into how diverse citizens will be impacted.

Open Sentencing helps public defenders understand and document any racial disparities in the judicial system.

Five Fifths Voter helps remove impediments to voting by providing information on voter registration, voter ID laws, restrictions, purging, gerrymandering, and tools that make it easier to vote, such as childcare at the voting stations.

Incident Accuracy Reporting System enables victims and witnesses to contribute to incident reports to help give law enforcement and the public a 360-degree view of events that took place at any incident. It utilizes Hyperledger blockchain to ensure transparency, trust, and that information can’t be altered. 

Truth Loop is a mobile-friendly tool to see pending legislation, learn about it, record your own story related to the legislation and its impact, and share that with policy makers.

Call for Code also has seven other projects related to natural disasters and stemming the impact of climate change, including monitoring real-time air quality for wildland firefighters, democratizing earthquake monitoring, inspecting buildings, facilitating drone canvassing and delivery of supplies following a natural disaster, and helping farmers optimize water use. Finally – they have a project, Rend-o-Matic, that enables musicians to remotely record their individual tracks in a composition and stitches them all together into the final, virtual performance. 

Join a Call for Code Project


Call for Code is making a difference! Are you experiencing some FOMO? Want to join in? Good news – fear no more. You can! And you don’t even have to be a technical person. Besides the need for a wide range of technical specialists, the projects can also utilize individuals for documentation, testing, design, UI/UX, legal, subject matter experts, advocacy, and community building. Just head over to our Call for Code page and help work on these projects. 

Do you have another idea around sustainability?  Register for the Call for Code 2022 now and pull together your team.  

Let’s show the world the impossible is possible.

Demi Ajayi and Daniel Krook joined forces at a keynote session for the 2021 All Day DevOps conference to talk about the Call for Code, the 14 current projects, and how individuals can become involved with them. 


This post originally appeared on Linux.com. The author, Stephen Jacobs, is the director of Open@RIT and serves on the Steering Committee of the TODO Group and served as a pre-board organizer of the O3DE Foundation. Open@RIT is an associate member of the Linux Foundation. 

What Is An Academic OSPO?

The academic space has begun to see activity around the idea of Open Source Program Offices at colleges and universities.  Like their industry counterparts, these offices lead or advise administrative efforts around policy, licensing compliance, and staff education.  But they can also be charged with efforts around student education, research policies and practices, and the faculty tenure and promotion process tied to research.

Johns Hopkins University (JHU) soft-launched their OSPO in 2019, led by Sayeed Choudhury, Associate Dean for Research Data Management and Hodson Director of the Digital Research and Curation Center at the Sheridan Libraries, in collaboration with Jacob Green of MOSS Labs. Other universities and academic institutions took notice.

Case Study: Open@RIT

I met Green at RIT’s booth at OSCON in the summer of 2019 and learned about JHU’s soft launch of their OSPO.  Our booth showcased RIT’s work with students in Free and Open Source humanitarian work. We began with a 2009 Honors seminar course in creating educational games for the One Laptop per Child program. That seminar was formalized into a regular course, Humanitarian Free and Open Source Software. (The syllabus for the course’s most recent offering can be found at this link)

By the end of 2010, we had a complete “Course-to-Co-Op lifecycle.” Students could get engaged in FOSS through an ecosystem that included FOSS events like hackathons and guest speaker visits, support for student projects, formal classes, or a co-op experience. In 2012, after I met with Chris Fabian, co-founder of UNICEF’s Office of Innovation, RIT sent FOSS students on Co-Op to Kosovo for UNICEF. We later formally branded the Co-Op program as LibreCorps. LibreCorps has worked with several FOSS projects since, including more work with UNICEF. In 2014 RIT announced what Cory Doctorow called a “Wee Degree in Free,” the first academic minor in Free and Open Source Software and Free Culture. 

All of these efforts provided an excellent base for an RIT Open Programs Office (more on that missing “Source” in a moment). With the support of Dr. Ryne Raffaelle, RIT’s VP of Research, I wrote a “white paper” on how such an office might benefit RIT. RIT’s Provost, Dr. Ellen Granberg, suggested a university-wide meeting to gauge interest in the concept, and 50 people from 37 units across campus RSVP’d to the meeting. A subset of that group worked together (online, amid the early days of the pandemic) to develop a “wish list” document of what they’d like to see Open@RIT provide in terms of services and support. That effort informed the creation of the charter for Open@RIT, approved by the Provost in the summer of 2020.

An Open Programs Office

Open@RIT is dedicated to fostering “Open Across The University” as a collaborative engine for faculty, staff, and students. Its goals are to discover and grow the footprint of RIT’s impact on all things Open including, but not limited to, Open Source Software, Open Data, Open Science, Open Hardware, Open Educational Resources, and Creative Commons licensed efforts; what Open@RIT refers to in aggregate as “Open Work.” To highlight the wide constituency being served, the choice was made to call it an Open Programs Office, avoiding any misreading as an effort focused exclusively on software. The IEEE (which Open@RIT partners with) made the same choice in its SA Open effort.

In academia, there’s growing momentum around Open Science efforts. Open Science (a term used interchangeably with “Open Research” and “Open Scholarship”) refers to a process that keeps all aspects of scientific research, from the formation of a research plan onward, in the Open. This Scientific American op-ed (which mentions Open@RIT) points to the need for academia to become more Open. Open Educational Resources (i.e., making course content, texts, etc., Free and Open) is another academic effort that sees broad support and somewhat lesser adoption (for now).

While the academic community favors Open Science and Open Educational Resource practices, it’s been slow to adopt them. This recently released guide from the National Academies of Sciences, Engineering, and Medicine, a bellwether organization, adds pressure on academia to make those changes.

What’s Open@RIT Done Since The Founding?

Drafting Policies and Best Practices Documents

Policy creation in academia is, and should be, slow and thoughtful.  Open@RIT’s draft policy on Open Work touches every part of the research done at the university.  It’s especially involved as it needs to cover three different classes of constituents: students, who own their IP at RIT (a rarity in academia) except when the university pays them for the work that they do (research assistantships, work-study jobs, etc.); staff, whose IP the university owns in most cases; and faculty, a special case in that researchers and scientists are expected to publish their work but may need to work with the university to determine commercialization potential.  It also needs to address software, hardware, data, etc.

Our current draft is making the rounds to the different constituencies and committees, and that process will be completed at some point in academic year 21-22.  In the meantime, parts of it will be published as Open@RIT’s best practices in our playbook, targeted for release before the end of Fall semester. Our recommendations for citing and supporting Open Work in Tenure and Promotion will also be part of the playbook and its creation is supported by the Alfred P. Sloan Foundation grant and by the LFX Mentorship program.

Faculty and Staff Professional Development

In October of 2020, The Alfred P. Sloan Foundation funded a proposal by Open@RIT to support some general efforts of the unit and, in particular, a LibreCorps team to support what we’re now calling the Open@RIT Fellows Program. We’re charged with supporting 30 faculty projects over two years and already have twenty-one registered, with about one-third of those project support requests completed or in progress. In many ways, the Open@RIT Fellows program could be considered an “Inner Source” effort.

This Zotero curated collection of articles, journal papers, book chapters, and videos on various aspects of Open Work and Open scholarship is the first step in our professional development efforts. It includes links to drafts of our recommendations around releasing Open Work and on building your evaluation, tenure and promotion cases with Open Work. We hope to offer professional development-related workshops in late fall or early spring of the coming AY.

Student Education

Open@RIT is wrapping up our “Open Across the Curriculum” efforts.  While we’ve had several courses and a minor in place, they mostly were for juniors and seniors.  Those classes were modified to begin accepting sophomores, and some new pieces are being brought into play.

At RIT, students are required to take an “Immersion,” a collection of three courses, primarily from liberal arts, designed to broaden students’ education and experiences outside of their majors. The Free Culture and Free and Open Source Computing Immersion does just that and opens to students this fall.

Within the month, Open@RIT will distribute a set of lecture materials to all departments for opt-in use in their freshman seminars that discuss what it means for students to own their IP in general and, specifically, what Opening that IP can mean in science, technology, and the arts.

Once the last pieces fall into place, students will be able to learn about Open as Freshmen, take one or both of our foundational FOSS courses Humanitarian Free and Open Source Software and Free and Open Source Culture as Sophomores and then go on to the Immersion (three courses) or the Minor (five courses) should they so choose.

Advisory Board and Industry Service

Open@RIT meets three times a year with our advisory board, which consists of our alums and several open source program office members from industry and related NGOs.

Open@RIT is active in FOSS efforts and organizations that include IEEE SA Open, Sustain Open Source’s Academic and Specialized Projects Working Group and CHAOSS Community’s Value working group.

Next Steps

By the end of 2022, Open@RIT will complete all of the points in its charter, hold a campus conference to highlight Open Work being done across the university, and complete a sustainability plan to ensure its future.

API Project Lura

APIs (Application Programming Interfaces) open up enormous opportunities for what the web and its data and applications can do for us. Because APIs allow data to be shared between applications, doors open to what is possible as the strengths of disparate systems are combined into something new. 

While we live in an API-driven world, it can be difficult and burdensome to connect and maintain systems via an API. Reducing those barriers opens even more doors and lets people like me, who have more ideas than skills, try things out. Enter API gateways to help ease the burden. 

But not all API gateways are created equal. The Lura Project, formerly the KrakenD open source project, is a framework for building API gateways that goes beyond a simple reverse proxy, functioning as a stateless, distributed, high-performance aggregator for many microservices. It is also a declarative tool (you tell it what you need rather than how to do it) for creating endpoints. Albert Lombarte, the executive director of The Lura Project and the CEO of KrakenD, elaborates, “An API gateway framework is a tool that is between the clients, the consumers of an API, and the backend services, which actually have the data that the users want to consume. So an API gateway is a product that makes possible things like security, where rate-limiting, authorization, load balancing, all of that happens without needing to implement that in the backend part.”
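To make the declarative idea concrete, here is an illustrative configuration sketch, loosely modeled on KrakenD’s JSON format. Treat the exact field names, paths, and hosts as assumptions for illustration rather than Lura documentation: one gateway endpoint aggregates two backend microservices, and the gateway itself handles the fan-out and merging of responses.

```json
{
  "version": 3,
  "name": "example-gateway",
  "port": 8080,
  "endpoints": [
    {
      "endpoint": "/v1/user-dashboard/{id}",
      "method": "GET",
      "backend": [
        {
          "host": ["http://users-service:8080"],
          "url_pattern": "/users/{id}"
        },
        {
          "host": ["http://orders-service:8080"],
          "url_pattern": "/orders/{id}"
        }
      ]
    }
  ]
}
```

You declare what the endpoint should return; the gateway decides how to call the backends, aggregate their responses, and shape the result, with no changes to the backend services themselves.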

KrakenD was created six years ago as a library for engineers to create fast and reliable API gateways and has since been in production among some of the world’s largest Internet businesses. In order to keep up with the demand from the community, in 2021 KrakenD decided to host the project at The Linux Foundation. Lombarte said, “By being hosted at the Linux Foundation, the Lura Project will extend the legacy of the KrakenD open source framework and be better poised to support its massive adoption among more than one million servers every month. The Foundation’s open governance model will accelerate development and community support for this amazing success.”

To learn more about the project, watch Albert’s interview with Swapnil Bhartiya of TFiR and go to the project’s website. Then, join the community. You can help create better tools so we can utilize APIs for even more than we can imagine today. 

The Future of Banking is Open - Kris Sharma

This article is written by Kris Sharma, Financial Services Sector Lead, Canonical, and originally appeared on the FINOS blog.

The banking sector is facing rapid and irreversible changes across technology, customer behaviour, and regulation. While customers are demanding ever higher levels of service and value, and regulations are impacting business models and economics, technology can be a potent enabler of both customer experience and effective operations.

The banking industry will look radically different in the near future as new banking models bring a wave of product and service innovation. A new generation of digital-only banks across the globe is challenging traditional banking players. These digital-only banks are tightening the competitive landscape, and the competition creates the impetus for banks to do more with technology and provide better customer service. In this quickly shifting landscape, financial institutions of all shapes and sizes need to find every possible way to respond and compete. This is where technology and innovation matter: an open and flexible technology architecture drives business agility.

Open source technologies and open innovation have the potential to level the playing field and accelerate the pace of digital business transformation, enabling financial institutions to get products and services to market faster and helping solve the challenges facing the financial services industry.

Open Source is Everywhere

A recent report by The Linux Foundation and The Lab for Innovation Science at Harvard highlights that open source constitutes 80% of any given piece of modern software. In the last few years, financial institutions have already been leveraging open source across a broad spectrum, from its use in back-end technologies to regulator-mandated Open Banking in the UK and PSD2 in Europe. The massive compute landscape, the data storage and processing capabilities of financial institutions, and the trading infrastructure are largely run on open source Linux platforms. By plugging in open source technology solutions, financial institutions are able to free up valuable resources to focus their efforts on integration and create business value.

Open Source Drives Innovation and Delivers Business Value

The real draw of open source in financial services is the ability to explore and innovate with new technologies, to easily scale the solutions that deliver real competitive advantage and to reduce the overall cost of managing vast IT infrastructures through the use of common, best-of-breed open source technologies.

Open source platforms can be likened to working or playing with building blocks because developers are uninhibited by design constraints – they are free to innovate and develop new business value and differentiation for enterprise applications. The flexibility and adaptability are unmatched by any proprietary platform.

Open source often provides the foundational technology, including languages, libraries, and database technologies, on which enterprise applications can be quickly developed. Financial institutions can maintain cost-effectiveness while tapping into the expertise of the open source user community. Open source communities fuel developer velocity, and developers have broad access to tools through APIs and services.

Financial institutions are under pressure to increase business flexibility and the velocity of innovation with the same or fewer resources. Open source technologies are paving the way for financial services software development towards a future in which service offerings and applications can be rapidly constructed by assembling and integrating a wide variety of technical building blocks. By adding additional proprietary capabilities and functionality, banks can differentiate their offerings and drive consumer benefits.

FINOS and the Future of Open

The Fintech Open Source Foundation (FINOS), which includes members and contributions from the financial services industry, develops open source software, standards, and special interest groups whilst providing an independent setting to deliver solutions that address common banking challenges and drive innovation within the regulated industry.

Banks, fintechs and technology companies, at the forefront of the financial services industry and engineering in banking, are making long-term commitments to open source by collaborating within the foundation as FINOS members and uniting with a shared goal of “shaping the future of open source in financial services.”

Open source projects that have been contributed to FINOS by foundation member banks include Legend by Goldman Sachs, Morphir by Morgan Stanley, Perspective by JPMorgan Chase, and Waltz by Deutsche Bank. FINOS open source projects can be used directly from the FINOS GitHub Organisation and solve real-world banking problems, ranging from financial object modeling with Legend to mapping out internal banking systems with Waltz.

kris sharma

About the Author

Srikrishna ‘Kris’ Sharma is the Financial Services Sector Lead at Canonical. Over the last two decades, Kris has held various leadership positions at management consulting firms providing advisory services to Fortune 100 and FTSE 100 clients. As a trusted C-level advisor and Business- Technology Leader, Kris partners with organisations across industry sectors on open source and business transformation strategies and builds innovative solutions by leveraging open source. Kris focuses on creating strong ecosystem partnerships and sees himself as a change agent with a passion for transformation, open source product strategy and innovation.

LF Public Health Global COVID Certificate Network

This article originally appeared on the LF Public Health project’s blog. We republished it here to help spread the word about another impactful project made possible through open source. 

Linux Foundation Public Health (LFPH) launched the Global COVID Certificate Network (GCCN) project in June 2021 to facilitate the safe and free movement of individuals globally during the COVID pandemic. After nine months of dedicated work, LFPH completed the proof-of-concept (POC) of the GCCN Trust Registry Network in March 2022, in partnership with the Fraunhofer Institute for Industrial Engineering (Fraunhofer IAO), Symsoft Solutions, and Finema.

With the ambition to provide a complete suite of technology to address the many challenges for COVID certificates, such as interoperability, data security and privacy protection, LFPH began the GCCN project focusing on one of the challenges not being addressed—a global trust architecture that allows seamless integration of the disparate COVID credential types. At the time, many small and large centralized trust ecosystems that implemented different technical standards and policies, such as the EU Digital COVID Certificate, emerged and began to gain traction. However, without a platform that allows these ecosystems to discover and establish trust with each other, there wouldn’t be interoperability at the global level. The GCCN Trust Registry Network was created to solve exactly this problem.

“We started the GCCN work in response to COVID, but everything we do has a vision for solving the challenge of people needing multiple credentials and constant verifications. The GCCN Trust Registry Network makes possible a new, decentralized way of trust management, which helps revolutionize how identities are shared in a privacy-preserving way. At LFPH, we are dedicated to open source innovation for public health and patient identity. We look forward to working with our members, community and stakeholders to advance the GCCN work both in the US and internationally.” – Jim St.Clair, Executive Director of LFPH

Building on the open source TRAIN Trust Management Infrastructure funded by the European Self-Sovereign Identity Framework (ESSIF) Lab, the GCCN Trust Registry Network allows different COVID certificate ecosystems, which can be a political and economic union (e.g. the EU), a nation state (e.g. India), a jurisdiction (e.g. the State of California), an industry organization (e.g. ICAO) or a company (e.g. a COVID test administrator), to join and find each other on a multi-stakeholder network and validate each other’s COVID certificate policies. This interaction is known as a discovery mechanism. Then, based on the discovery, verifiers decide whose certificates they accept, use the Trust Registry Network to build a customized trust list based on their entry rules, and check the source of incoming certificates against that list. If a certificate comes from a trusted source, the verifier can use the issuer’s public key to verify the certificate’s signature and decode its contents. For more information about the technical mechanism behind the GCCN Trust Registry Network and how it works, please see our two recent articles, “How does a border control officer know if a COVID certificate is valid?” and “How does a border control officer know if a traveler meets entry rules?”.
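The verifier-side flow described above can be sketched in a few lines. This is an illustrative sketch only, not the GCCN implementation: all issuer identifiers, registry entries, and field names below are hypothetical, and the actual cryptographic signature check is deliberately elided.

```python
# Illustrative sketch -- NOT the GCCN implementation. All issuer IDs,
# registry entries, and field names are hypothetical.

# A verifier's customized trust list, assembled from Trust Registry
# Network entries that satisfy this verifier's own entry rules.
TRUST_REGISTRY = {
    "did:example:eu-dcc-gateway": {
        "ecosystem": "EU Digital COVID Certificate",
        "public_key_pem": "-----BEGIN PUBLIC KEY-----...",  # elided
        "accepted": True,
    },
    "did:example:unknown-lab": {
        "ecosystem": "Independent test administrator",
        "public_key_pem": "-----BEGIN PUBLIC KEY-----...",  # elided
        "accepted": False,  # discovered, but fails this verifier's rules
    },
}


def lookup_issuer(issuer_id):
    """Return the registry entry for a trusted issuer, else None."""
    entry = TRUST_REGISTRY.get(issuer_id)
    if entry is None or not entry["accepted"]:
        return None
    return entry


def verify_certificate(cert):
    """Accept a certificate only if its issuer is on the trust list.

    A real verifier would then check cert["signature"] over
    cert["payload"] using entry["public_key_pem"]; that cryptographic
    step is elided here.
    """
    entry = lookup_issuer(cert.get("issuer"))
    if entry is None:
        return False
    # ... signature verification with the issuer's public key goes here ...
    return True


print(verify_certificate({"issuer": "did:example:eu-dcc-gateway",
                          "payload": "...", "signature": "..."}))  # True
print(verify_certificate({"issuer": "did:example:unknown-lab",
                          "payload": "...", "signature": "..."}))  # False
```

The key design point is that trust decisions stay with each verifier: the network provides discovery and policy metadata, while each verifier builds and enforces its own trust list.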

global covid certificate network trust registry network diagram

The GCCN Trust Registry Network PoC is composed of two parts, onboarding to the Network and verification of COVID certificates using the Network. The PoC wouldn’t have been a success without the contributions of these partners and the ongoing support of the LFPH community. Fraunhofer IAO, the German research organization that developed the TRAIN Infrastructure, supported the effort throughout. Symsoft Solutions, a US-based enterprise web solutions provider, built the initial demo web application of the Network and web interface for the onboarding process of the POC. Savita Farooqui, the founder of Symsoft Solutions, has been co-leading the design and technical development of GCCN with LFPH staff. Finema, a Thai company specializing in decentralized identity solutions, developed the verifier app for the POC that demonstrates how a verifier can leverage the Network for verifications.

“By working with the LFPH team on the GCCN Trust Registry Network initiative, we had the opportunity to explore and extend the TRAIN Infrastructure for COVID certificate trust management. Prior to this work, TRAIN was already implemented for a variety of use cases such as IoT/Industry 4.0, verification of refugee educational documents. We believe that TRAIN will be able to provide lightweight solutions pertaining to trust management on a global scale for a wide range of public health scenarios. We are looking forward to working on the further developments of the GCCN Trust Registry Network based on the stakeholders’ needs for COVID and beyond.” – Isaac Henderson, Technical Architect, Fraunhofer IAO.

“The GCCN Trust Registry Network provides a model for managing global, distributed trust registries/authorities. The Network enrolls trust registries/authorities as entries and supports the structure and meta-data for a variety of trust registries, along with a mechanism to access and update the entries using machine and human accessible formats. We worked with the LFPH team to define the meta-data and workflows for enrollment, and developed the demo application to validate these requirements and the POC interface to integrate with the TRAIN infrastructure. We look forward to continuing to work with LFPH and other partners to further develop the GCCN Trust Registry Network and create a reusable trust management solution for use cases beyond COVID.” – Savita Farooqui, Founder, Symsoft Solutions

“Finema’s solution plays a big part in the verification of different digital vaccine credentials for the Thailand Pass portal that has been a major factor in reopening Thailand’s borders and encouraging global travel. Through that work, we saw and experienced a clear need for a highly secure global trust network that promotes greater interconnectivity and interoperability between various COVID vaccination credentials from different nations, organizations and individuals throughout the world. Finema was happy to support the POC development of the GCCN Trust Registry Network through our solutions, and we look forward to building further on this work for border reopening and other use cases.” – Pakorn Leesakul, CEO, Finema Co. Ltd.

LFPH will host two webinars about the POC: on May 10, 2022 at 8 am ET / 2 pm CEST, and May 11, 2022 at 7 pm PT / (+1d) 10 am HKT, to have a live demo and Q&A session.

In the meantime, if you have any questions about the GCCN Trust Registry Network and the POC, please email the LFPH team at info@lfph.io.

happy first birthday LF research

When I started at The Linux Foundation (LF) a few weeks ago, our research was one of the first things I dug into as I absorbed and learned what all the LF does to advance open source. Plus, since I started, it seems like the LF Research team has published a new report every few days. What a wealth of information!

So, imagine my surprise when I learned that LF Research has only been around for one year. April 15th marked their first birthday – and they have set the bar high. 

But are they making a difference? I know my inclination, especially having spent time working in government, is that research reports get published and then sit on virtual shelves, never to be seen again. But LF Research uses the open source model of bringing people together to solve problems and to share the solutions widely. They engage LF members and the community, across the ecosystem, to answer the question: what tools can we create, together, for shared value? And, importantly, their reports focus on action items.

Over the past twelve months, LF Research has published 12 reports across a variety of topics and industry verticals. Each of them is presented below. Take time to look at their work, dig deeper into topics that interest you, and then go make a difference. 

And stay tuned for more impactful research in 2022 on topics such as cybersecurity insights in the developer process, mentorship, a guide to enterprise open source, an updated state of the open source program office, a new jobs report, and much, much more.

The Carbon Footprint of NFTs – NFTs are simultaneously overhyped and met with both skepticism and a general lack of understanding of what they are and how they work. Serious concerns have also been raised over energy-intensive proof-of-work (PoW) consensus mechanisms. The report, just released last week, studies the concern that energy-intensive PoW consensus mechanisms for NFTs have a significant impact on the climate. It details the changes taking place in the blockchain industry to address this issue and describes how NFTs can have varying carbon footprints depending on their underlying technology stacks. Read it to learn how we can make a difference now.

Open Source in AI report cover

AI and Data in Open Source – The report reviews critical challenges in the open source AI ecosystem, such as the talent shortage, the trust gap for AI-enabled products, implementing and verifying trusted and responsible AI systems and processes, and more. But with challenges come opportunities – opportunities that could change the world. Imagine how marrying AI with edge computing enhances performance and real-time decision making, or how CDLA licenses enable wider sharing and use of open data and the innovation that sparks in AI and machine learning models. The report also reviews how the LF AI & Data Foundation is empowering innovators and accelerating open source development. Read the full report and get excited!

report: artificial intelligence and data in open source

Paving the Way to Battle Climate Change: How Two Utilities Embraced Open Source to Speed Modernization of the Electric Grid – New technology has to be easy to use and workable to be adopted widely enough to make a difference – and this holds true in electricity production. As the energy sector innovates to do its part to arrest climate change, it must find solutions to ease the adoption of new energy sources. As the electricity infrastructure modernizes, electricity is provided into the grid from a variety of sources – homes, businesses, wind and solar farms, etc. – rather than just from the local power plant. It goes from TSOs (main power lines) to DSOs (the “last mile,” so to speak). Netherlands’ Alliander, a DSO, and France’s RTE, a TSO, contributed to three LF Energy projects (SEAPATH, CoMPAS, and OpenSTEF) so their electrical substations will become more modular, interoperable, and scalable. This report digs into the case studies to show how working together via open source enables them to develop software solutions up to ten times faster than working on their own proprietary solutions.

paying the way to battle climate change with open source

Open Source in Entertainment: How the Academy Software Foundation Creates Shared Value – Truth be told, when I try to explain open source software and what we foster at the LF among my friends and family, I use the Academy Software Foundation as an example. I mean, let’s be honest, movies are way more interesting and relatable than software supply chains or licensing. The ASWF also serves as a stellar example of why companies would want to join forces and collaborate on a common software solution – let’s share resources to make the foundational tools together and then innovate on top of that on our own. We can all grow together by raising the foundation we start at. This report is a story about industry competitors, who, by working together, have shared and developed the technologies used to create mesmerizing visual effects for professional studios and filmmaking enthusiasts alike. It should spark open source innovation in other industries too (see FINOS below). 

open source in entertainment report

Census II of Free and Open Source Software – Application Libraries – There are more software vulnerabilities out there than there are resources available to fix them, so knowing which ones are more widely utilized and which ones are used in more critical instances allows for better resource prioritization. Makes sense, right? This report builds on the Census I report, which focused on the lower-level critical operating system libraries and utilities. It utilizes data from partner Software Composition Analysis (SCA) companies, including Snyk, the Synopsys Cybersecurity Research Center (CyRC), and FOSSA. They looked at over half a million observations of Free and Open Source Software libraries used in production applications at thousands of companies. Read the report and see the data here. 

harvard census ii report image

The Evolution of the Open Source Program Office – The TODO Group is an LF project community that helps organizations run successful and effective open source program offices or similar open source initiatives. This report was produced in partnership with them to provide rich insight, direction, and tools to implement an OSPO or an open source initiative within corporate, academic, or public sector environments. It also has case studies from Bloomberg, Comcast, and Porsche – the last of which was especially cool for the car geek in me. Check it out here. 

the evolution of the open source program office

The State of the Software Bill of Materials (SBOM) and Cybersecurity Readiness – An SBOM is formal, machine-readable metadata that uniquely identifies a software package and its contents. It allows organizations to quickly and accurately determine which software applications and libraries are used and where, so they can effectively address vulnerabilities. The report offers fresh insight into the state of SBOM readiness and helps organizations looking to better understand SBOMs as an important tool in securing software supply chains. They need to be adopted now – so go read the report.
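For a rough sense of what that machine-readable metadata looks like, here is a minimal, hypothetical fragment in the SPDX tag-value format (one widely used SBOM standard). The application name, versions, and package details are invented for illustration:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-1.0-sbom

PackageName: example-app
SPDXID: SPDXRef-Package-example-app
PackageVersion: 1.0.0

PackageName: openssl
SPDXID: SPDXRef-Package-openssl
PackageVersion: 1.1.1k
PackageLicenseConcluded: OpenSSL

Relationship: SPDXRef-Package-example-app DEPENDS_ON SPDXRef-Package-openssl
```

With an inventory like this on file, an organization can search its SBOMs for an affected package and version the moment a new vulnerability is announced, rather than auditing each application by hand.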

Diversity, Equity, and Inclusion in Open Source – Diversity, equity, and inclusion (DEI) in the technology industry, and within open source specifically, is an opportunity we need to continuously leverage for the benefits it brings. In addition to the survey findings on the state of DEI, this research explores a number of DEI initiatives and their efficacy and recommends action items for the entire stakeholder ecosystem to further their efforts and build inclusion by design. Access the report here.

DEI in open source report cover

Data and Storage Trends Report – The SODA Foundation is an open source project under the Linux Foundation that fosters an ecosystem of open source data management and storage software for data autonomy. The report is based on a survey of English, Chinese, and Japanese-speaking markets to identify the current challenges, gaps, and trends for data and storage in the era of cloud native, edge, AI, and 5G. The intention is to use this survey data to guide the SODA Foundation and its surrounding ecosystem on important issues, helping its members make better-equipped decisions and improve their products, and helping the Foundation establish new technical directions.

2021 data and storage trends report

The State of Open Source in Financial Services Report – While the financial services industry has been a long-time consumer of open source software, contributing to software and standards development has not been at the core of their business models and tech strategies. This report creates a baseline of their current activities, highlights obstacles and challenges to improving industry-wide collaboration, and lays out a set of actionable insights for improving the state of open source in financial services. You can read the report here.


9th Annual Open Source Jobs Report – The LF partnered with edX to shed light on the changes and challenges in the global open source jobs market. Employers can use its actionable insights to inform their hiring, training, and diversity awareness efforts. It also gives professionals clear, unbiased insights on which skills are most marketable and how reskilling and certifications benefit job seekers. Dig in here.

2021 open source jobs report

Hyperledger Brand Study – The study explores the state of the enterprise blockchain market and the Hyperledger brand. It looks at whether enterprises have adopted or are considering adopting blockchain, which solutions they are familiar with, which attributes of solutions are desirable, what problems they are addressing with blockchain technology, and much, much more. You can read the results and access the underlying data here.


LF Research NFT report

Non-Fungible Tokens (NFTs) are an invention unique in human history, and their role is fast extending beyond speculative trends around collectibles to use cases with a positive social impact.

Through NFTs, a broad range of physical and virtual assets can be authenticated, providing transparency on ownership and underlying attributes of tokenized assets while preserving the privacy of individual owners. The cryptographic guarantees of NFTs make them well suited for use cases such as anti-counterfeiting, provenance tracking, and title transfer.

However, minting an NFT on a Proof of Work (PoW) blockchain requires a high level of computational power, and the energy needed to supply that power comes primarily from non-renewable fuel sources. As a result, the emissions from minting, transferring, and burning NFTs can be quite high.

It’s estimated that the mining activities associated with cryptocurrencies emit as much as 114.06 megatons of CO2 per year, roughly the amount produced by the entire Czech Republic.

Most of this effect is caused by electricity usage, as blockchain networks are frequently energy-intensive due to their PoW consensus mechanisms. Based on current patterns, blockchain technology will account for 1% of global electricity consumption by 2025. However, not all digital assets qualify as energy-intensive.

In a new study, Linux Foundation Research and Hyperledger Foundation collaborated with Palm NFT Studio to examine the design architecture of NFTs and how their carbon footprints vary depending on their underlying technology stacks. In essence, not all blockchains are equally hazardous to the environment.

The report also provides recommendations for how NFT creators can reduce the environmental impact of their work, such as by using an alternative consensus mechanism that is not carbon-intensive. Those mechanisms need to be robust enough to:

  • Reduce blockchain’s carbon footprint
  • Protect against coordinated blockchain attacks by increasingly consolidated mining computing power
  • Overcome blockchain scaling challenges, which are limited by both slow finality times and low volumes of transactions per second (on Ethereum and many other blockchains)

One such alternative in use today is the Proof of Stake (PoS) consensus mechanism, which is far less computationally intensive than PoW. Rather than racing to solve computational puzzles, those in charge of the blockchain’s upkeep in a PoS system stake (i.e., “pledge”) their currency, putting it in a type of escrow as a guarantee against fraud. If everything goes well, those who stake their tokens may earn a small profit through a share in block rewards.
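The core idea behind PoS, stripped of protocol detail, can be sketched in a few lines. This is a hypothetical illustration, not the report’s methodology or any real chain’s implementation: a validator’s chance of being chosen to propose the next block is proportional to the tokens it has staked, so no energy-hungry computation is needed to decide who goes next.

```python
import random

def select_validator(stakes, rng=random):
    """Pick a validator with probability proportional to its staked tokens.

    `stakes` maps validator name -> amount of tokens placed in escrow.
    This replaces PoW's compute race with a simple weighted draw.
    """
    total = sum(stakes.values())
    pick = rng.uniform(0, total)
    cumulative = 0.0
    for validator, stake in stakes.items():
        cumulative += stake
        if pick <= cumulative:
            return validator
    return validator  # fallback for floating-point edge cases

# Illustrative validator set: alice staked 60 of the 100 total tokens,
# so she should propose roughly 60% of blocks over many rounds.
stakes = {"alice": 60, "bob": 30, "carol": 10}
winner = select_validator(stakes)
assert winner in stakes
```

The selection itself costs a handful of arithmetic operations, which is why PoS chains avoid the electricity demand described above; the staked escrow, rather than burned energy, is what a dishonest validator stands to lose.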

While we believe that a move to more environmentally friendly NFTs through alternative consensus mechanisms is an essential first step, it is not the only one needed to make the industry more sustainable. Sustainable practices for NFTs (and for the blockchain industry as a whole) start with reduction. Using renewable energy sources, such as solar and wind, can further reduce blockchain emissions.

Beyond choosing sustainable blockchain architecture for issuing NFTs, carbon offsets are an important add-on to the sustainability equation. Offset projects can include a wide range of activities, from planting new forests to capturing methane gas from landfills. 

Measured, verified, and certified offsets allow a price to be placed on more carbon-intensive activities, giving companies and businesses a way to incorporate those costs into their budgets. While embracing offset projects can invite greenwashing claims, it’s important to choose certified initiatives and pair them with other sustainability efforts.

NFTs are here to stay, so now is the time for the industry to reduce its carbon footprint and become more sustainable by leveraging existing technologies and carbon offset opportunities. We hope this report serves as a starting point to inform such decisions.