Sunday, December 11, 2022

Galactica and the difficulties of language models

In November, Meta presented a language model named Galactica, built to assist scientific researchers, but only three days later it was pulled from public access for querying or testing. Essentially, as has happened in other areas of work with artificial intelligence (AI), the model does not recognize truth or falsehood. In testing, pieces formally presented as scientific but absurd in content, such as the existence of bears in space or the causes of the war in Ukraine, passed as sound, complete with reasoned justifications.

Will Douglas Heaven, in Technology Review:

Galactica is a large language model for science, trained on 48 million examples of scientific articles, websites, textbooks, lecture notes, and encyclopedias. Meta promoted its model as a shortcut for researchers and students. In the company’s words, Galactica “can summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.”

(...) A fundamental problem with Galactica is that it is not able to distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text. People found that it made up fake papers (sometimes attributing them to real authors), and generated wiki articles about the history of bears in space as readily as ones about protein complexes and the speed of light. It’s easy to spot fiction when it involves space bears, but harder with a subject users may not know much about.

(...) Many scientists pushed back hard. Michael Black, director at the Max Planck Institute for Intelligent Systems in Germany, who works on deep learning, tweeted: “In all cases, it was wrong or biased but sounded right and authoritative. I think it’s dangerous.”

(...) The Meta team behind Galactica argues that language models are better than search engines. “We believe this will be the next interface for how humans access scientific knowledge,” the researchers write. This is because language models can “potentially store, combine, and reason about” information. But that “potentially” is crucial. It’s a coded admission that language models cannot yet do all these things. And they may never be able to. “Language models are not really knowledgeable beyond their ability to capture patterns of strings of words and spit them out in a probabilistic manner,” says [Chirag Shah, University of Washington]. “It gives a false sense of intelligence.”
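Shah's description is easy to make concrete. Below is a minimal toy sketch (my own illustration, not Meta's code; the contexts, words, and probabilities are invented for the example) of the core mechanism he points at: a table of word patterns sampled probabilistically, with no slot anywhere for truth or falsehood.

    import random

    # Toy illustration, not Galactica: a language model reduced to its essence
    # is a table of word patterns plus the probability of what follows each one.
    # These contexts and probabilities are invented for the example.
    next_token_probs = {
        ("bears", "in"): {"space": 0.4, "forests": 0.6},
        ("in", "space"): {"orbit": 0.6, "suits": 0.4},
    }

    def sample_next(context, probs):
        # Pick the next word at random, weighted by learned frequencies.
        # Nothing in this step checks whether the continuation is true:
        # "bears in space" is as generable as any factual phrase.
        candidates = probs[context]
        words = list(candidates)
        weights = [candidates[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    print(sample_next(("bears", "in"), next_token_probs))

Real models learn these statistics over billions of contexts rather than a hand-written table, but generation is still the same weighted draw, which is why fluency and authority come apart from accuracy.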

Grady Booch comments: "Galactica is little more than statistical nonsense at scale. Amusing. Dangerous. And IMHO unethical". One ML researcher (Yann LeCun, in the same thread) is scandalized by the "unethical" charge. I think some scientists have yet to measure the scope of what they have in their hands.
