Meta’s AI division recently unveiled a demo of Galactica, a large language model designed to “store, combine and reason about scientific knowledge.”
Galactica’s main aim was to accelerate scientific research by helping scientists draft scientific literature. However, users testing the model found that it could generate realistic-sounding text that amounted to nonsense. The model would also produce output that was scientifically inaccurate and, in some instances, downright racist.
While some people found the demo promising and valuable, others soon discovered that anyone could type in racist or potentially offensive prompts and generate authoritative-sounding content on those topics just as easily. For example, someone used it to author a wiki entry about a fictional research paper titled “The benefits of eating crushed glass.”
As a result, Meta pulled the Galactica demo Thursday. Afterward, Meta’s Chief AI Scientist Yann LeCun tweeted, “Galactica demo is offline for now. It’s no longer possible to have some fun by casually misusing it.”
The episode recalls a common ethical dilemma with AI: when it comes to potentially harmful generative models, is it up to the general public to use them responsibly, or up to the publishers of the models to prevent misuse?
Where the industry practice falls between those two extremes will likely vary between cultures as deep learning models mature. Ultimately, government regulation may play a significant role in shaping the answer.