AI godfather Hinton’s family tree revealed: a whole family of scientific masters! He once admitted frankly: my life’s ambition was simply to earn a Ph.D., and work is my only form of relaxation…


Reprinted from: Xinzhiyuan | Editor: Run So sleepy

[Introduction] Hinton, who had long seemed to have faded from the spotlight, has recently returned to the forefront of AI, and his family tree has once again sparked heated discussion among netizens.

Recently, sharp differences in attitudes toward AI regulation have once again brought the “Turing Award Big Three” into the spotlight.

Among them, Geoffrey Hinton’s family tree has drawn particularly heated discussion among netizens:

It is no exaggeration to say that nearly the entire family consists of scientific giants who have advanced human civilization.


A family of top academics

In 1947, Geoffrey Hinton was born in Wimbledon, England.

When Hinton was a child, his mother told him: “Either you become an academic, or you are a failure.”


Like Hinton himself, the family has produced accomplished scientists for generations.

Hinton’s great-great-grandfather was George Boole, the founder of Boolean logic and algebra. Boolean logic later became the mathematical foundation of modern computers.

[Image: George Boole]

His wife, Mary Boole, was, like George, a self-taught mathematician and teacher of algebra and logic.

After marrying George, Mary began advising him on his work as well, something almost unheard of for a woman in the mid-19th century.

She even edited George’s book, “The Laws of Thought,” which presented his theory of Boolean logic.

Mary’s uncle, George Everest, was a geographer and Surveyor General of India; Mount Everest is named after him.

[Image: George Everest]

Geoffrey’s great-uncle Sebastian Hinton was the inventor of the jungle gym.


George Boole’s son-in-law and Geoffrey’s great-grandfather, Charles Howard Hinton, was a mathematician and science-fiction writer who popularized the concept of four-dimensional space; his “tesseract” still appears in comic books and Marvel movies as the Cosmic Cube.

One of Geoffrey’s aunts, Joan Hinton, was a nuclear physicist who, as a student of the physicist Enrico Fermi, joined the Manhattan Project and was one of the few women to work on it.

Moreover, in her youth Joan had qualified for the U.S. Olympic team and would have competed in the 1940 Olympics had the Games not been cancelled.

In 1948, alarmed by the looming Cold War, she gave up physics and left the United States for China.

Settled in Beijing, Joan Hinton and her husband translated many foreign works and designed and built automated milk-pasteurization lines for Chinese farms.


Geoffrey’s great-aunt Ethel Lilian Voynich was a writer and musician best known for her novel “The Gadfly”.

Geoffrey’s father, Howard Hinton, was an entomologist who studied Mexican beetles and was elected a fellow of the Royal Society.

Geoffrey has said that pressure from his family at one point drove him to quit academia. His father often told him: “Work hard and maybe when you are twice my age, you will be half as good as me.”


Growing up

Hinton grew up with three siblings in a large house in Bristol filled with animals. There was a meerkat at home and a venomous snake in a hole in the garage.

Hinton recalled that as a child he often poked at the venomous snakes. Once, a strike missed his hand by barely an inch; it could have killed him.

[Image: Eight-year-old Hinton holding a python at Bristol Zoo]

Hinton recalled that his curiosity about the world was first awakened at the age of four, when he and his mother took a bus into the countryside. The bus seats sloped and were covered in velvet.

He took a coin out of his pocket and placed it on the seat, but instead of sliding down, the coin seemed to creep upward against gravity. The puzzling coin piqued Hinton’s curiosity, and the problem nagged at him for ten years.

When he was a teenager, he discovered that the coin’s unusual movement was due to the vibrations of the bus interacting with the fibers in the velvet seat cover – a discovery that gave him a great sense of accomplishment.

“Some people see things they don’t understand and can take it calmly. But I can’t stand it at all. I have to figure things out,” Hinton said.

Hinton’s mother was loving, but his father was very strict. The demands he placed on Hinton were both physical (his father could do one-armed pull-ups, a feat that left the skinny boy in awe) and intellectual.

“He liked to think clearly, and if you said something that didn’t make sense, he would be very unhappy. He wasn’t an emotional thinker. He never abused me, but he was very strict,” Hinton said of his father.

In the 1970s, Hinton worked odd jobs and carpentry while earning a degree in experimental psychology.

In 1972, he began studying for a PhD in AI but felt frustrated and ambivalent about his studies.

One weekend, he participated in a group discussion with eight people, where everyone was asked to open up and explore their own desires and pursuits.

On the last day, each participant had to declare what they truly wanted in life.

Everyone else talked about all kinds of grand life goals. But when it was Hinton’s turn, he froze and didn’t know what to say.

After holding back for a long while, Hinton finally blurted out an answer that surprised even himself: “What I really want is a Ph.D.!” he shouted.

That outburst reignited his enthusiasm for neural network research.


Asked what it was like to grow up in the shadow of this extraordinary family history, Hinton said: “Pressure, it felt like a lot of pressure.”

He said he had struggled with depression his entire life and that work was his way of relieving it.

When deep learning finally succeeded, the depression eased slightly. “For a long time,” he said, “I felt like I hadn’t measured up. Well, I finally did, and that’s a relief.”

Both of Hinton’s wives died of cancer, and he has spent long stretches of his life in hospitals.

He knows firsthand the frustration patients feel as they wait for results and get vague information.

But unlike most, he also knew there would soon be technology that could cut the week-long wait for test results to just one day.

Hinton, a reserved Brit who usually leaves the publicity for artificial intelligence to others, is genuinely excited about the potential of deep learning to revolutionize healthcare.

“I see a lot of inefficiency in the way medical professionals use data. Much of the information in patient records is never used, and doctors’ interpretations of CT results vary widely. If two radiologists look at the same scan, they will very likely come to completely different conclusions.”

A life dedicated to deep learning

In 2018, Geoffrey Hinton, together with Yoshua Bengio and Yann LeCun, won the Turing Award in recognition of their groundbreaking contributions to the field of deep learning.


In 2012, Hinton and his co-authors published a groundbreaking paper titled “Deep Neural Networks for Acoustic Modeling in Speech Recognition.” The work brought together four research groups, from Microsoft, IBM, Google, and the University of Toronto.

The paper demonstrated for the first time that deep neural networks were the state of the art for recognizing speech patterns, outperforming classic approaches built on Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs).

That year became the breakthrough year for artificial intelligence.


Paper address: https://research.google/pubs/pub38131/

In fact, Hinton’s history with neural networks is much deeper than we know.

When Frank Rosenblatt proposed the world’s first neural network machine, the “Perceptron,” in the 1950s, it could only learn linearly separable functions and was helpless when faced with functions such as XOR or XNOR.
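To make the limitation concrete, here is a minimal sketch (illustrative, not historical code; the train_perceptron helper, learning rate, and epoch count are arbitrary choices for the example): the classic perceptron learning rule masters a linearly separable function like AND, but can never fully fit XOR.

```python
# Minimal sketch (not historical code): the classic perceptron learning rule
# converges on a linearly separable function such as AND, but can never
# reach 100% accuracy on XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
AND = np.array([0, 0, 0, 1])
XOR = np.array([0, 1, 1, 0])

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Single-layer perceptron with a step activation function."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += lr * (yi - pred) * xi   # classic perceptron update rule
            b += lr * (yi - pred)
    return ((X @ w + b > 0).astype(int) == y).mean()

print("AND accuracy:", train_perceptron(X, AND))  # 1.0 -- linearly separable
print("XOR accuracy:", train_perceptron(X, XOR))  # never 1.0 -- not separable
```

A network with a hidden layer trained by backpropagation handles XOR easily (see the sketch further down).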

In addition, the perceptron approach differed sharply from the traditional symbolic methods that Marvin Minsky was using at the time.

40b4a0c3ab8216af3c63b61cbe29570e.png

In 1969, Minsky and Seymour Papert published the book “Perceptrons: An Introduction to Computational Geometry,” laying out the limitations of the perceptron.

The book also contributed to the onset of what became known as the first “artificial intelligence winter.”


In 1972, Hinton was studying for a PhD at the University of Edinburgh, with neural networks being the focus of his research.

In academia, however, neural networks were generally regarded as a fringe subject. Although his advisor told him every week that it was a waste of time, Hinton persisted.

In Hinton’s eyes, the idea behind neural networks was not wrong; the real problem was computing power. Computers at the time simply could not process millions of images and find the connections among them.

In 1986, he co-published a paper titled “Learning representations by back-propagating errors” with David Rumelhart and Ronald Williams.


Paper address: http://www.cs.utoronto.ca/~hinton/absps/naturebp.pdf

The paper showed that neural networks with hidden layers can learn useful internal representations, thereby overcoming the limitations of the single-layer perceptron.

The backpropagation algorithm propagates the error of the network’s loss function backwards through the layers to update the weights of the lower layers. (The separate result that such networks can approximate essentially any function is known as the universal approximation theorem.)
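To illustrate the idea, here is a minimal backpropagation sketch (an illustrative reconstruction, not the 1986 paper’s code; the 2-4-1 layer sizes, sigmoid units, squared-error loss, learning rate, and step count are all assumptions made for the example): a network with one hidden layer learns the XOR function that defeats the single-layer perceptron.

```python
# Minimal backpropagation sketch (illustrative, not the 1986 paper's code):
# a network with one hidden layer of sigmoid units learns XOR, which no
# single-layer perceptron can represent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights for a 2-4-1 network (4 hidden units keep training robust).
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros((1, 1))

lr = 1.0
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: propagate the squared-error gradient from the
    # output layer back down to the first layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates for both layers
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically converges to roughly [0, 1, 1, 0]
```

Modern deep networks scale this same backward pass of error gradients to many more layers and vastly more data.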


In 1987, Hinton accepted an invitation from the Canadian Institute for Advanced Research (CIFAR) and started a program called “Machine and Brain Learning” at the University of Toronto.

CIFAR encouraged research around non-mainstream scientific ideas that might not find supporters elsewhere, and provided Hinton with academic freedom and a generous salary.

Over time, like-minded deep learning researchers began to collaborate with Hinton. Among them was Ilya Sutskever, who later became a co-founder of OpenAI.

Recalling his arrival at Hinton’s lab in the early 2000s, Ilya said it was still the “artificial intelligence winter”: jobs and funding in AI research were scarce, and interest from industry was even scarcer.

[Image: Ilya Sutskever (left), Alex Krizhevsky (middle), Geoffrey Hinton (right)]

Around 2009, when computers finally had the ability to mine vast pools of data, super-powerful neural networks began to outperform logic-based artificial intelligence in speech and image recognition.

It didn’t take long for industry to take notice: large technology companies such as Microsoft, Facebook, and Google began to invest.

In 2012, Google’s top-secret laboratory Google X (later renamed X) announced that it had built a neural network running on 16,000 processors and set it loose on YouTube.


Google Brain, the team responsible for deep learning and artificial intelligence research and led by senior researcher Jeff Dean, fed millions of random, unlabeled YouTube video frames into the new system and programmed it to make sense of what it saw.

Drawing on the vast number of cat videos on YouTube, the neural network eventually learned to recognize cats, among other things. It was an exciting time for the field of artificial intelligence. Dean said excitedly at the time: “We never told it ‘this is a cat’ during training. You could say it invented the concept of a cat on its own.”

This breakthrough pushed Hinton and his followers to the forefront of artificial intelligence.


Also in 2012, Hinton and two of his students won the ImageNet competition. Their neural-network-based computer vision system could classify images into 1,000 object categories.


Paper address: https://dl.acm.org/doi/10.1145/3065386

In 2013, Hinton’s company DNNresearch was acquired by Google, and he himself was recruited by Dean to work part-time at Google.

However, just ten years later, in 2023, Geoffrey Hinton, the doyen of deep learning and father of neural networks, suddenly announced his resignation from Google.


Watching the rapid development of generative AI such as GPT-4, Hinton said worriedly: “Computers have almost learned how to improve themselves. This is very dangerous. We must think seriously about how to control it.”

Hinton’s transformation from artificial intelligence pioneer to doomsday prophet also signals that the technology industry has reached its most important turning point in decades.


In Hinton’s view, we have not yet found a way to prevent bad actors from using AI to do bad things.

After choosing to leave Google, Hinton can finally speak freely about the risks of AI.

I deeply regret my life’s work.

I can only console myself this way: even without me, someone else would have done it.

References:

The AI superstars at Google, Facebook, Apple—they all studied under this guy

https://analyticsindiamag.com/geoffrey-hinton-its-all-in-the-family-tree/
