Interview with a 2024 Nobel Laureate: Geoffrey Hinton on the Inspirations Behind Artificial Intelligence

Published: 2024-10-10 01:50

So I remember when I first got to Carnegie Mellon from England. In England, at a research unit, it would get to six o'clock and everyone would go for a drink in the pub.

At Carnegie Mellon, I remember, after I'd been there a few weeks, it was a Saturday night.

I didn't have any friends yet and I didn't know what to do.

So I decided I'd go into the lab and do some programming, 'cause I had a Lisp machine and you couldn't program it from home.

So I went into the lab at about nine o'clock on a Saturday night and it was swarming.

All the students were there and they were all there because what they were working on was the future.

They all believed that what they did next was gonna change the course of computer science and it was just so different from England.

And so that was very refreshing.

Take me back to the very beginning, Geoff, at Cambridge, trying to understand the brain. What was that like?

It was very disappointing.

So I did physiology, and in the summer term they were gonna teach us how the brain worked, and all they taught us was how neurons conduct action potentials, which is very interesting, but it doesn't tell you how the brain works.

So that was extremely disappointing.

I switched to philosophy then; I thought maybe they'd tell us how the mind worked, and that was very disappointing.

I eventually ended up going to Edinburgh to do AI and that was more interesting.

At least you could simulate things so you could test out theories.

And do you remember what intrigued you about AI?

Was it a paper?

Was it any particular person that exposed you to those ideas?

I guess it was a book I read by Donald Hebb that influenced me a lot.

He was very interested in how you learn the connection strengths in neural nets.

I also read a book by John von Neumann early on, who was very interested in how the brain computes and how it's different from normal computers.

And did you get the conviction that these ideas would work out at that point, or what was your intuition back in the Edinburgh days?

It seemed to me there has to be a way that the brain learns and it's clearly not by having all sorts of things programmed into it and then using logical rules of inference, that just seemed to me crazy from the outset.

So we had to figure out how the brain learned to modify connections in a neural net so that it could do complicated things.

And von Neumann believed that.

Turing believed that.

So von Neumann and Turing were both pretty good at logic, but they didn't believe in this logical approach.

And what was your split between studying the ideas from neuroscience and just doing what seemed to be good algorithms for AI?

How much inspiration did you take early on?

So I never did that much studying in neuroscience.

I was always inspired by what I learned about how the brain works.

That there's a bunch of neurons, they perform relatively simple operations, they're nonlinear, but they collect inputs, they weight them, and then they give an output that depends on that weighted input.

And the question is how do you change those weights to make the whole thing do something good?

It seems like a fairly simple question.

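The picture Hinton sketches here can be written down in a few lines. Below is a minimal illustrative sketch, not any of his actual models: a single sigmoid neuron, with the delta rule standing in as one simple answer to the "how do you change those weights?" question.

```python
import math

def neuron(inputs, weights, bias):
    """A model neuron: weighted sum of inputs passed through a nonlinearity."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid nonlinearity

def delta_rule_step(inputs, weights, bias, target, lr=0.5):
    """One weight update nudging the output toward a target (delta rule)."""
    y = neuron(inputs, weights, bias)
    grad = (target - y) * y * (1.0 - y)  # error times sigmoid derivative
    new_weights = [w + lr * grad * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * grad
    return new_weights, new_bias

# Train the neuron to output something close to 1 for the input [1.0, 0.0].
w, b = [0.0, 0.0], 0.0
for _ in range(2000):
    w, b = delta_rule_step([1.0, 0.0], w, b, target=1.0)
print(neuron([1.0, 0.0], w, b))
```

With all weights at zero the neuron starts at 0.5; repeated updates push its output toward the target, which is the whole "change the weights to make the thing do something good" idea in miniature.
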
What collaborations do you remember from that time?

The main collaboration I had at Carnegie Mellon was with someone who wasn't at Carnegie Mellon.

I was interacting a lot with Terry Sejnowski who was in Baltimore at Johns Hopkins.

And about once a month, either he would drive to Pittsburgh or I would drive to Baltimore.

It's 250 miles away and we would spend a weekend together working on Boltzmann machines.

That was a wonderful collaboration.

We were both convinced it was how the brain worked.

That was the most exciting research I've ever done.

And a lot of technical results came out that were very interesting, but I think it's not how the brain works.

I also had a very good collaboration with Peter Brown, who was a very good statistician. He worked on speech recognition at IBM and then came as a more mature student to Carnegie Mellon just to get a PhD.

But he already knew a lot.

He taught me a lot about speech and he in fact taught me about hidden Markov models.

I think I learned more from him than he learned from me.

That's the kind of student you want.

And when he taught me about hidden Markov models, I was doing backprop with hidden layers, and they weren't called hidden layers then.

And I decided that the name they use in hidden Markov models is a great name for variables that you don't know what they're up to.

And so that's where the name 'hidden' in neural nets came from: Peter and I decided that was a great name for the hidden layers in neural nets.

But I learned a lot from Peter about speech.