Will Douglas Heaven interviews Geoffrey Hinton in MIT Technology Review about his current distrust of artificial intelligence:
Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”
He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.

“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”
Hinton believes that the next step for smart machines is the ability to create their own subgoals, the interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them; you want them to figure out how to do it.”
There are already a handful of experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up with other programs such as web browsers or word processors so that they can string together simple tasks. Tiny steps, for sure, but they signal the direction that some people want to take this tech. And even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.
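To make “stringing together simple tasks” concrete, here is a minimal, hypothetical sketch of the loop such projects run: a language model proposes the next step toward a goal, a tool executes it, and the result is fed back into the next prompt. The call_llm and run_tool functions below are placeholders, not the actual API of BabyAGI or AutoGPT.

```python
# Hypothetical sketch of an AutoGPT-style task-chaining loop.
# call_llm and run_tool are placeholders standing in for a real
# model API and real tools (web browser, file system, etc.).

def call_llm(prompt: str) -> str:
    """Placeholder: send a prompt to a language model, return its reply."""
    raise NotImplementedError

def run_tool(step: str) -> str:
    """Placeholder: execute one proposed step and return its result."""
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 10) -> list[str]:
    """Repeatedly ask the model for the next subgoal and execute it."""
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Completed so far: {history}\n"
            "Propose the single next step, or reply DONE."
        )
        step = call_llm(prompt)
        if step.strip() == "DONE":
            break
        # Note: the model, not a human, chooses each intermediate step.
        history.append(run_tool(step))
    return history
```

The point of the sketch is the loop itself: each iteration lets the model pick its own intermediate subgoal, which is exactly the capability Hinton is worried about.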
“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”
Maybe not. But Yann LeCun, Meta’s chief AI scientist, agrees with the premise but does not share Hinton’s fears. “There is no question that machines will become smarter than humans, in all domains in which humans are smart, in the future,” says LeCun. “It’s a question of when and how, not a question of if.”
But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”

“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”
Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic. “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about,” he says. But fear is only useful if it kicks us into action, he says: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”
LeCun is very optimistic... if it weren’t for the drones over Kyiv, Navalny’s imprisonment, or China’s social control measures, one could perhaps accept his vision.