
What is ‘Openwashing’ and why AI companies like OpenAI are accused of this practice

The tech community is heatedly discussing the concept of “open source” for artificial intelligence (AI) models.

Elon Musk, a key figure in the founding of OpenAI in 2015, has even taken legal action against the startup and its CEO, Sam Altman, alleging that they’ve strayed from the original mission of openness.

Meanwhile, the Biden administration is actively exploring the potential risks and rewards associated with open-source models.

Supporters of open-source AI argue that it fosters fairness and safety for society. However, critics warn that these models could be exploited for nefarious purposes. One major stumbling block in this debate is the lack of a clear definition of what “open-source AI” truly entails.

Some accuse AI companies of “openwashing,” essentially misusing the label to portray themselves in a favorable light. This term has been previously aimed at coding projects that have been overly liberal with the open-source label.

In a recent blog post on Open Future, an organization dedicated to promoting open source, Alek Tarkowski emphasized the need for robust safeguards against corporations’ attempts at openwashing.

Similarly, the Linux Foundation, a nonprofit that advocates for open-source software projects, has cautioned against the trend of openwashing, warning that it undermines the fundamental principle of openness—the free exchange of knowledge to facilitate inspection, replication, and collective progress.

Applying the “open source” label to AI models can vary widely among organizations. For instance, OpenAI, the company behind the ChatGPT chatbot, provides limited information about its models despite its name, while Meta labels its Llama 2 and Llama 3 models as open source but imposes certain restrictions on their usage.

The most transparent models, typically developed by nonprofits, divulge the source code and the underlying training data and utilize an open-source license that permits broad reuse. However, even with these models, there are significant barriers to replication.

The primary challenge stems from the fact that building an AI model entails more than just coding—it requires substantial resources in terms of computing power and data curation, which only a handful of companies possess.

As a result, some experts argue that labeling any AI as “open source” is, at best, misleading and, at worst, a mere marketing ploy.

“Even the most transparent AI systems do not grant open access to the resources needed to democratize AI access or facilitate thorough scrutiny,” explains David Gray Widder, a postdoctoral fellow at Cornell Tech who has extensively researched the use of the “open source” label by AI companies.
