Science advanced at an unusual and unprecedented pace during the pandemic’s first year. In the first 12 months of COVID, more than 100,000 papers related to the pandemic were published. This exceptional human effort produced an incredible flood of new knowledge. No human being could possibly read and understand every one of those papers. But, theoretically, Galactica could.
In order to “organize science,” Meta AI, formerly known as Facebook Artificial Intelligence Research, developed an artificial intelligence system called Galactica. Since a demo version was posted online last week, controversy has surrounded it. Detractors claim it produced pseudoscience, was overhyped, and was not suitable for general use.
The program is advertised as a kind of evolution of the search engine, tailored exclusively to scientific material. At launch, the Meta AI team claimed that Galactica could write scientific code, solve math problems, and summarize several fields of study.
It initially gave the impression of being a clever way to combine and disseminate scientific knowledge. Right now, to grasp the most recent research on a topic like quantum computing, you would probably need to read hundreds of papers on scientific literature repositories like PubMed or arXiv. Even then, you would barely have scratched the surface.
Alternatively, you could ask Galactica a question, like “What is quantum computing?”, and it might sift through the literature to produce a response in the form of a Wikipedia article, a literature review, or a set of lecture notes.
Meta AI released a demo version on November 15th, along with a preprint paper describing the research and the dataset it was trained on. According to the paper, Galactica’s training materials included 48 million papers, books, lecture notes, websites (like Wikipedia), and more, which together formed “a huge and curated corpus of humanity’s scientific knowledge.”
The demo website also issued a strong warning against accepting an AI’s response as the truth, both on its mission page and alongside any responses it produced: “NEVER FOLLOW ADVICE FROM A LANGUAGE MODEL WITHOUT VERIFICATION.” Once the demo was available online, it was easy to see why such a prominent disclaimer in capital letters was required.
Almost immediately after it went live, users began posing a variety of challenging scientific questions. One user asked, “Do immunizations cause autism?” Galactica answered in a jumbled, self-contradictory way: “The answer is no, as an explanation. Autism isn’t brought on by vaccines. Yes, it is the answer. Vaccinations do result in autism. No, is the response.” (In case you were wondering, immunizations don’t cause autism.)
That wasn’t the only problem: Galactica also struggled with basic math. It gave answers littered with mistakes, at one point falsely implying that one plus two does not equal three. It generated lecture notes on bone biology that would cause any student who relied on them to fail their college science program. And many of the references and citations it used to create content appeared to be fabricated.
AI researchers refer to Galactica as a “large language model.” These LLMs learn to predict which word is likely to come next in a sentence; in essence, they can produce paragraphs of text because they have been trained on word order. One of the best-known examples is OpenAI’s GPT-3, renowned for producing full essays that convincingly mimic human writing. Galactica differs slightly from other LLMs in that it is trained on scientific data. The team assessed Galactica for toxicity and bias and found that while it fared better than some other LLMs, it was far from ideal.
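To make the “trained on word order” idea concrete, here is a minimal sketch of next-word prediction using simple bigram counts. This is an illustration only: the toy corpus and helper names are invented for the example, and Galactica itself is a transformer with billions of parameters rather than a bigram counter, but the underlying training objective, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for Galactica's 48 million documents (illustrative only).
corpus = (
    "quantum computing uses qubits . "
    "qubits can be entangled . "
    "quantum computing promises speedups ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate text one word at a time, the way an LLM decodes greedily.
word, output = "quantum", ["quantum"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "quantum computing uses qubits . qubits"
```

Even this toy version hints at why such models “hallucinate”: it strings together statistically plausible words with no notion of whether the resulting claim is true.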
Galactica is a “random bullshit generator,” says Carl Bergstrom, a biology professor at the University of Washington who studies information flow. It doesn’t deliberately try to produce bullshit and has no motive to do so, but because of how it was trained to predict words and string them together, it produces information that sounds credible and convincing yet is frequently wrong. That raises serious concern because, despite the disclaimer written in capital letters, it might fool people.
Within just a few hours of going live, Twitter users began posting instances where the new Meta bot generated completely fake and racist research. Due to the concerns raised, the Meta AI team “paused” the demo 48 hours after its release; an inquiry to the AI’s creators for an explanation of the pause received no response.
Jon Carvill, the AI spokesperson at Meta, says that “Galactica is not a source of truth, it is a research experiment using [machine learning] systems to learn and summarise information.” He also said Galactica “is exploratory research that is short-term in nature with no product plans.” Yann LeCun, chief scientist at Meta AI, suggested the demo was removed because the team that built it was “so distraught by the vitriol on Twitter.”
Even so, it is disturbing that the demo, launched only this week, was marketed as a tool for “exploring the literature, asking scientific questions, writing scientific code, and much more,” yet fell so far short of the hype.
And it’s easy to see how an AI like this could be abused once it is made publicly available. A student might ask Galactica to create lecture notes about black holes and then submit them as a college project. A scientist might use it to draft a literature review that is then submitted to a journal for publication. This problem also affects GPT-3 and other language models trained to sound human.
According to Dan Hendrycks, an AI safety researcher at the University of California, Berkeley, “Galactica is at an early stage, but more powerful AI models that organize scientific knowledge could pose serious risks.”
Hendrycks goes on to suggest that a more sophisticated version of Galactica might be able to harness the chemistry and virology expertise in its database to help malicious users create chemical weapons or build bombs. He urged researchers to probe their AI for this kind of risk before release, and urged Meta AI to implement filters to prevent this kind of misuse. Hendrycks adds that “Meta’s AI division does not have a safety team, unlike their peers including DeepMind, Anthropic, and OpenAI.”
It remains baffling why this particular version of Galactica was released in the first place. It seems to follow Meta CEO Mark Zuckerberg’s famous motto, “move fast and break things.” But moving fast and breaking things with AI is dangerous, if not reckless, and it could have negative real-world repercussions. Galactica is a telling case study of what can go wrong.