“Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is to share its underlying A.I. engines as a way to spread its influence and ultimately move faster toward the future,” said the article. Meta’s chief AI scientist, NYU professor Yann LeCun, noted in a recent interview that the growing secrecy of Google and OpenAI is a “huge mistake,” arguing that consumers and governments will refuse to embrace AI if it’s under the control of a couple of powerful American companies. LeCun, along with Geoffrey Hinton and Yoshua Bengio, received the 2018 Turing Award, often considered the “Nobel Prize of Computing,” for their pioneering work on deep learning.
Meta’s open source approach to AI isn’t novel. After all, the infrastructures underlying the Internet and World Wide Web were all built on open source software, as is the widely used Linux operating system. And, in September 2022, Meta contributed its PyTorch machine learning framework to the Linux Foundation to further accelerate the development and accessibility of the technology.
“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly,” said the memo. “Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us.” The memo made a number of additional points, including:
The arguments in the leaked Google memo aren’t new. “The history of technology is littered with battles between open source and proprietary, or closed, systems,” noted the NYT article. “Some hoard the most important tools that are used to build tomorrow’s computing platforms, while others give those tools away. … Many companies have openly shared their A.I. technologies in the past, at the insistence of researchers. But their tactics are changing because of the race around A.I.”
The current discussions on the safety of proprietary versus open source AI models bring back memories of similar debates in the late 1990s regarding the use of open source software in supercomputing systems because of their critical importance to US national security and science and engineering research.
At the time, I was the industry co-chair of the President’s Information Technology Advisory Committee (PITAC), where I served alongside our academic co-chair, CMU professor Raj Reddy, and 22 other members split evenly between industry and academia. The use of open source software in supercomputing was still fairly new, so in October of 1999 the PITAC convened a Panel on Open Source Software for High End Computing that included experts from universities, federal agencies, national laboratories, and supercomputing vendors.
The Panel held a number of meetings and public hearings over the next year, and released its final report in October of 2000: “Developing Open Source Software to Advance High-End Computing.” I went back and re-read the report as well as the Transmittal Letter to the President. Two paragraphs seem particularly relevant to our current discussions:
Is open source a viable strategy for producing high quality software? “The PITAC believes the open source development model represents a viable strategy for producing high quality software through a mixture of public, private, and academic partnerships. This open source approach permits new software to be openly shared, possibly under certain conditions determined by a licensing agreement, and allows users to modify, study, or augment the software’s functionality, and then redistribute the modified software under similar licensing restrictions. By its very nature, this approach offers government the additional promise of leveraging its software research investments with expertise in academia and the private sector.”
Should open source software be used in the development of highly sensitive systems? “Open source software may offer potential security advantages over the traditional proprietary development model. Specifically, access by developers to source code allows for a thorough examination that decreases the potential for embedded trap doors and/or Trojan horses. In addition, the open source model increases the number of programmers searching for software bugs and subsequently developing fixes, thus reducing potential areas for exploitation by malicious programmers.”
Around the same time, Linux was picking up steam in the commercial marketplace, and IBM was seriously considering whether to strongly embrace the fast-growing operating system. To help us finalize the decision, we launched two major corporate studies in the second half of 1999, one focused on the use of Linux in scientific and engineering computing, and the second on Linux as a high-volume platform for Internet applications and application development. Toward the end of the year, both studies strongly recommended that IBM should embrace Linux across all its product lines, that we should work closely with the open Linux community as a partner in the development of Linux, and that we should establish an IBM-wide organization to coordinate Linux activities across the company.
The recommendations were accepted, and I was asked to organize and lead the new IBM Linux initiative. On January 10, 2000, we announced IBM’s embrace of Linux across the company. Our announcement got a somewhat mixed reception. Many welcomed our strong support of Linux and open source communities. But in January 2000, Linux was still not all that well known in the commercial marketplace. A number of our customers were perplexed that a company like IBM was so aggressively supporting an initiative that, in their opinion, was so far removed from the IT mainstream.
Over the next year we spent quite a bit of time explaining why IBM was supporting Linux and open source communities. On February 3, 2000, I gave a keynote presentation at the LinuxWorld Conference in New York, where I said that we did not view Linux as just another operating system any more than we viewed the Internet as just another network when we announced the IBM Internet Division four years earlier. We viewed Linux as part of the long term evolution toward open standards that would help integrate systems, applications and information over the Internet.
The number of open source projects and developers has truly exploded over the past two decades as evidenced by the impressive scope of the Linux Foundation (LF), whose predecessor, the Open Source Development Labs (OSDL), IBM helped organize in 2000 along with HP, Intel, and several other companies.
A recent article in The Economist, “What does a leaked Google memo reveal about the future of AI?,” reminds us that while the Internet and several related technologies run on open-source software, the basic Internet infrastructure continues to support lots and lots of applications, platforms, and tools built on proprietary software. “AI may find a similar balance” between open-source AI models and proprietary models built on top of them based on a company’s proprietary data and algorithms. “Yet even if the memo is partly right, the implication is that access to AI technology will be far more democratised than seemed possible even a year ago. Powerful LLMs can be run on a laptop; anyone who wants to can now fine-tune their own AI.”
“This has both positive and negative implications,” explained The Economist. “On the plus side, it makes monopolistic control of AI by a handful of companies far less likely. It will make access to AI much cheaper, accelerate innovation across the field and make it easier for researchers to analyse the behaviour of AI systems (their access to proprietary models was limited), boosting transparency and safety. But easier access to AI also means bad actors will be able to fine-tune systems for nefarious purposes, such as generating disinformation. It means Western attempts to prevent hostile regimes from gaining access to powerful AI technology will fail. And it makes AI harder to regulate, because the genie is out of the bottle.”
Getting back to my original questions, is it safe to leverage open source software and communities in the development of highly sensitive, powerful AI systems? As we enter a new era of computing, it feels like we’re revisiting settled questions about the safety of open source systems, questions that have repeatedly been tested and validated in the intervening decades: the open source development model represents a viable strategy for producing high quality software; and open source software offers potential security advantages over the traditional proprietary development model.