
Daniel Kahneman: Deep Learning (System 1 and System 2) | AI Podcast Clips

Full episode with Daniel Kahneman (Jan 2020):
Clips channel (Lex Clips):
Main channel (Lex Fridman):
(more links below)

Podcast full episodes playlist:


Podcast clips playlist:


Podcast website:


Podcast on Apple Podcasts (iTunes):


Podcast on Spotify:


Podcast RSS:


Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical. The book delineates the cognitive biases associated with each type of thinking.

Subscribe to this YouTube channel or connect on:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Artificial Intelligence Podcast

Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical. The book delineates the cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast.

This conversation was recorded in the summer of 2019.

This episode is presented by Cash App. Download it & use code LexPodcast:
Cash App (App Store):
Cash App (Google Play):

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


EPISODE LINKS:
Thinking Fast and Slow (book):

OUTLINE:
0:00 - Introduction
2:36 - Lessons about human behavior from WWII
8:19 - System 1 and system 2: thinking fast and slow
15:17 - Deep learning
30:01 - How hard is autonomous driving?
35:59 - Explainability in AI and humans
40:08 - Experiencing self and the remembering self
51:58 - Man's Search for Meaning by Viktor Frankl
54:46 - How much of human behavior can we study in the lab?
57:57 - Collaboration
1:01:09 - Replication crisis in psychology
1:09:28 - Disagreements and controversies in psychology
1:13:01 - Test for AGI
1:16:17 - Meaning of life

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Daniel Kahneman: How Hard is Autonomous Driving? | AI Podcast Clips

Full episode with Daniel Kahneman (Jan 2020):
Clips channel (Lex Clips):
Main channel (Lex Fridman):
(more links below)

Podcast full episodes playlist:


Podcast clips playlist:


Podcast website:


Podcast on Apple Podcasts (iTunes):


Podcast on Spotify:


Podcast RSS:


Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical. The book delineates the cognitive biases associated with each type of thinking.

Subscribe to this YouTube channel or connect on:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Daniel Kahneman: Lessons from World War II | AI Podcast Clips

Full episode with Daniel Kahneman (Jan 2020):
Clips channel (Lex Clips):
Main channel (Lex Fridman):
(more links below)

Podcast full episodes playlist:


Podcast clips playlist:


Podcast website:


Podcast on Apple Podcasts (iTunes):


Podcast on Spotify:


Podcast RSS:


Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical. The book delineates the cognitive biases associated with each type of thinking.

Subscribe to this YouTube channel or connect on:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Yoshua Bengio: From System 1 Deep Learning to System 2 Deep Learning (NeurIPS 2019)

This is a combined slide/speaker video of Yoshua Bengio's talk at NeurIPS 2019. Slide-synced non-YouTube version is here:

This is a clip on the Lex Clips channel, which I mostly use to post video clips from the Artificial Intelligence podcast, though occasionally I post clips from other lectures by me or others. I hope you find these interesting, thought-provoking, and inspiring. If you do, please subscribe, click the bell icon, and share.

Lex Clips channel:


Lex Fridman channel:


Artificial Intelligence podcast website:


Apple Podcasts:


Spotify:


RSS:


Connect with me on social media:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:

Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Artificial Intelligence Podcast

Melanie Mitchell is a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy-making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans. This conversation is part of the Artificial Intelligence podcast.

This episode is presented by Cash App. Download it & use code LexPodcast:
Cash App (App Store):
Cash App (Google Play):

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


EPISODE LINKS:
AI: A Guide for Thinking Humans (book) -
Melanie Twitter:

OUTLINE:
0:00 - Introduction
2:33 - The term artificial intelligence
6:30 - Line between weak and strong AI
12:46 - Why have people dreamed of creating AI?
15:24 - Complex systems and intelligence
18:38 - Why are we bad at predicting the future with regard to AI?
22:05 - Are fundamental breakthroughs in AI needed?
25:13 - Different AI communities
31:28 - Copycat cognitive architecture
36:51 - Concepts and analogies
55:33 - Deep learning and the formation of concepts
1:09:07 - Autonomous vehicles
1:20:21 - Embodied AI and emotion
1:25:01 - Fear of superintelligent AI
1:36:14 - Good test for intelligence
1:38:09 - What is complexity?
1:43:09 - Santa Fe Institute
1:47:34 - Douglas Hofstadter
1:49:42 - Proudest moment

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Daniel Kahneman: Experiencing Self and Remembering Self | AI Podcast Clips

Full episode with Daniel Kahneman (Jan 2020):
Clips channel (Lex Clips):
Main channel (Lex Fridman):
(more links below)

Podcast full episodes playlist:


Podcast clips playlist:


Podcast website:


Podcast on Apple Podcasts (iTunes):


Podcast on Spotify:


Podcast RSS:


Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical. The book delineates the cognitive biases associated with each type of thinking.

Subscribe to this YouTube channel or connect on:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Cristos Goodrow: YouTube Algorithm | Artificial Intelligence (AI) Podcast

Cristos Goodrow is VP of Engineering at Google and head of Search and Discovery at YouTube (aka YouTube Algorithm). This conversation is part of the Artificial Intelligence podcast.

This episode is presented by Cash App. Download it & use code LexPodcast:
Cash App (App Store):
Cash App (Google Play):

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


OUTLINE:
0:00 - Introduction
3:26 - Life-long trajectory through YouTube
7:30 - Discovering new ideas on YouTube
13:33 - Managing healthy conversation
23:02 - YouTube Algorithm
38:00 - Analyzing the content of video itself
44:38 - Clickbait thumbnails and titles
47:50 - Feeling like I'm helping the YouTube algorithm get smarter
50:14 - Personalization
51:44 - What does success look like for the algorithm?
54:32 - Effect of YouTube on society
57:24 - Creators
59:33 - Burnout
1:03:27 - YouTube algorithm: heuristics, machine learning, human behavior
1:08:36 - How to make a viral video?
1:10:27 - Veritasium: Why Are 96,000,000 Black Balls on This Reservoir?
1:13:20 - Making clips from long-form podcasts
1:18:07 - Moment-by-moment signal of viewer interest
1:20:04 - Why is video understanding such a difficult AI problem?
1:21:54 - Self-supervised learning on video
1:25:44 - What does YouTube look like 10, 20, 30 years from now?

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | Artificial Intelligence Podcast

Sebastian Thrun is one of the greatest roboticists, computer scientists, and educators of our time. He led the development of the autonomous vehicles at Stanford that won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge. He then led the Google self-driving car program, which launched the self-driving revolution. In 2011 he taught the popular Stanford course on artificial intelligence, one of the first MOOCs, an experience that led him to co-found Udacity, an online education platform. He is also the CEO of Kitty Hawk, a company building flying cars, or, more technically, eVTOLs (electric vertical take-off and landing aircraft). This conversation is part of the Artificial Intelligence podcast.

This episode is presented by Cash App: download it & use code LexPodcast:
Cash App (App Store):
Cash App (Google Play):

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


EPISODE LINKS:
Sebastian Twitter:
Udacity:
Kitty Hawk:

OUTLINE:
0:00 - Introduction
3:24 - The Matrix
4:39 - Predicting the future 30+ years ago
6:14 - Machine learning and expert systems
9:18 - How to pick what ideas to work on
11:27 - DARPA Grand Challenges
17:33 - What does it take to be a good leader?
23:44 - Autonomous vehicles
38:42 - Waymo and Tesla Autopilot
42:11 - Self-Driving Car Nanodegree
47:29 - Machine learning
51:10 - AI in medical applications
54:06 - AI-related job loss and education
57:51 - Teaching soft skills
1:00:13 - Kitty Hawk and flying cars
1:08:22 - Love and AI
1:13:12 - Life

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

David Chalmers: The Hard Problem of Consciousness | Artificial Intelligence (AI) Podcast

David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness, which can be stated as: why does the feeling that accompanies awareness of sensory information exist at all? This conversation is part of the Artificial Intelligence podcast.

This episode is presented by Cash App. Download it & use code LexPodcast:
Cash App (App Store):
Cash App (Google Play):

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


OUTLINE:
0:00 - Introduction
2:23 - Nature of reality: Are we living in a simulation?
19:19 - Consciousness in virtual reality
27:46 - Music-color synesthesia
31:40 - What is consciousness?
51:25 - Consciousness and the meaning of life
57:33 - Philosophical zombies
1:01:38 - Creating the illusion of consciousness
1:07:03 - Conversation with a clone
1:11:35 - Free will
1:16:35 - Meta-problem of consciousness
1:18:40 - Is reality an illusion?
1:20:53 - Descartes' evil demon
1:23:20 - Does AGI need consciousness?
1:33:47 - Exciting future
1:35:32 - Immortality

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

System 1 v System 2 Cognition in Machine Learning - Yoshua Bengio (3/4)

Part 1 -
Part 2 -

In this section of his keynote address, Yoshua Bengio explains System 1 vs. System 2 cognition as it relates to machine learning, covering the ability to reason, the conscious construction of explanations of machine "thoughts," and an overview of causality in machine learning.

Yann LeCun: Deep Learning, Convolutional Neural Networks, and Self-Supervised Learning | AI Podcast

Yann LeCun is one of the fathers of deep learning, the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He is a professor at New York University, Vice President and Chief AI Scientist at Facebook, and a co-recipient of the Turing Award for his work on deep learning. He is probably best known as the founding father of convolutional neural networks, in particular for their early application to optical character recognition. This conversation is part of the Artificial Intelligence podcast.

INFO:
Podcast website:
Full episodes playlist:
Clips playlist:

EPISODE LINKS:
Yann's Facebook:
Yann's Twitter:
Yann's Website:

OUTLINE:
0:00 - Introduction
1:11 - HAL 9000 and 2001: A Space Odyssey
7:49 - The surprising thing about deep learning
10:40 - What is learning?
18:04 - Knowledge representation
20:55 - Causal inference
24:43 - Neural networks and AI in the 1990s
34:03 - AGI and reducing ideas to practice
44:48 - Unsupervised learning
51:34 - Active learning
56:34 - Learning from very few examples
1:00:26 - Elon Musk: deep learning and autonomous driving
1:03:00 - Next milestone for human-level intelligence
1:08:53 - Her
1:14:26 - Question for an AGI system

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems | AI Podcast

Ayanna Howard is a roboticist and professor at Georgia Tech and director of the Human-Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments. This conversation is part of the Artificial Intelligence podcast.

This episode is presented by Cash App. Download it & use code LexPodcast:
Cash App (App Store):
Cash App (Google Play):

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


EPISODE LINKS:
Ayanna Website:
Ayanna Twitter:

OUTLINE:
0:00 - Introduction
2:09 - Favorite robot
5:05 - Autonomous vehicles
8:43 - Tesla Autopilot
20:03 - Ethical responsibility of safety-critical algorithms
28:11 - Bias in robotics
38:20 - AI in politics and law
40:35 - Solutions to bias in algorithms
47:44 - HAL 9000
49:57 - Memories from working at NASA
51:53 - SpotMini and Bionic Woman
54:27 - Future of robots in space
57:11 - Human-robot interaction
1:02:38 - Trust
1:09:26 - AI in education
1:15:06 - Andrew Yang, automation, and job loss
1:17:17 - Love, AI, and the movie Her
1:25:01 - Why do so many robotics companies fail?
1:32:22 - Fear of robots
1:34:17 - Existential threats of AI
1:35:57 - Matrix
1:37:37 - Hang out for a day with a robot

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Rohit Prasad: Amazon Alexa and Conversational AI | Artificial Intelligence (AI) Podcast

Rohit Prasad is the vice president and head scientist of Amazon Alexa and one of its original creators. This conversation is part of the Artificial Intelligence podcast.

This episode is presented by Cash App: download it & use code LexPodcast
This episode is also supported by ZipRecruiter. Try it:

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


OUTLINE:
0:00 - Introduction
4:34 - Her
6:31 - Human-like aspects of smart assistants
8:39 - Test of intelligence
13:04 - Alexa prize
21:35 - What does it take to win the Alexa prize?
27:24 - Embodiment and the essence of Alexa
34:35 - Personality
36:23 - Personalization
38:49 - Alexa's backstory from her perspective
40:35 - Trust in Human-AI relations
44:00 - Privacy
47:45 - Is Alexa listening?
53:51 - How Alexa started
54:51 - Solving far-field speech recognition and intent understanding
1:11:51 - Alexa main categories of skills
1:13:19 - Conversation intent modeling
1:17:47 - Alexa memory and long-term learning
1:22:50 - Making Alexa sound more natural
1:27:16 - Open problems for Alexa and conversational AI
1:29:26 - Emotion recognition from audio and video
1:30:53 - Deep learning and reasoning
1:36:26 - Future of Alexa
1:41:47 - The big picture of conversational AI

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Stephen Kotkin: Stalin's Rise to Power | AI Podcast Clips

Full episode with Stephen Kotkin (Jan 2020):
Clips channel (Lex Clips):
Main channel (Lex Fridman):
(more links below)

Podcast full episodes playlist:


Podcast clips playlist:


Podcast website:


Podcast on Apple Podcasts (iTunes):


Podcast on Spotify:


Podcast RSS:


Stephen Kotkin is a professor of history at Princeton University and one of the great historians of our time, specializing in Russian and Soviet history. He has written many books on Stalin and the Soviet Union, including the first two volumes of a three-volume work on Stalin; he is currently working on the third volume.

Subscribe to this YouTube channel or connect on:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Jim Gates: What is String Theory, Its Status, Its Open Challenges? | AI Podcast Clips

Full episode with Jim Gates (Dec 2019):
Clips channel (Lex Clips):
Main channel (Lex Fridman):
(more links below)

Podcast full episodes playlist:


Podcast clips playlist:


Podcast website:


Podcast on Apple Podcasts (iTunes):


Podcast on Spotify:


Podcast RSS:


Jim Gates (S. James Gates Jr.) is a theoretical physicist and professor at Brown University working on supersymmetry, supergravity, and superstring theory. He served on former President Obama's Council of Advisors on Science and Technology. He is the co-author of a new book, Proving Einstein Right, about the scientists who set out to prove Einstein's theory of relativity.

Subscribe to this YouTube channel or connect on:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

MIT Deep Learning Basics: Introduction and Overview

An introductory lecture for the MIT course 6.S094 on the basics of deep learning, including a few key ideas, subfields, and the big picture of why neural networks have inspired and energized an entire new generation of researchers. For more lecture videos on deep learning, reinforcement learning (RL), artificial intelligence (AI and AGI), and podcast conversations, visit our website or follow the TensorFlow code tutorials on our GitHub repo.
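The core idea the lecture's "deep learning in one slide" segment describes (adjust a model's parameters to reduce a loss on data by following its gradient) can be sketched in a few lines of plain Python. This is an illustrative example only, not code from the course; the target function y = 2x + 1 and all names here are made up for the sketch, and the course's own examples use TensorFlow:

```python
# Fit y_hat = w*x + b to samples of the (hypothetical) target y = 2x + 1
# by gradient descent on mean squared error -- "learning" in miniature.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0  # initial parameters
lr = 0.01        # learning rate

for _ in range(500):
    # Accumulate gradients of the mean squared error over the dataset.
    dw = db = 0.0
    for x, y in data:
        err = (w * x + b) - y          # prediction error on one sample
        dw += 2 * err * x / len(data)  # d(MSE)/dw contribution
        db += 2 * err / len(data)      # d(MSE)/db contribution
    # Step parameters against the gradient to reduce the loss.
    w -= lr * dw
    b -= lr * db

print(f"learned w={w:.2f}, b={b:.2f}")  # prints: learned w=2.00, b=1.00
```

Deep networks replace the single linear unit with stacked nonlinear layers and compute the gradients by backpropagation, but the training loop has this same shape.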

INFO:
Website:
GitHub:
Slides:
Playlist:
Blog post:

OUTLINE:
0:00 - Introduction
0:53 - Deep learning in one slide
4:55 - History of ideas and tools
9:43 - Simple example in TensorFlow
11:36 - TensorFlow in one slide
13:32 - Deep learning is representation learning
16:02 - Why deep learning (and why not)
22:00 - Challenges for supervised learning
38:27 - Key low-level concepts
46:15 - Higher-level methods
1:06:00 - Toward artificial general intelligence

CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:

Noam Chomsky: Language, Cognition, and Deep Learning | Artificial Intelligence (AI) Podcast

Noam Chomsky is one of the greatest minds of our time and one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast.

As I explain in the introduction, due to an unfortunate mishap, this conversation is audio-only. I hope you still enjoy it and find it interesting.

This episode is presented by Cash App: download it & use code LexPodcast

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


OUTLINE:
0:00 - Introduction
3:59 - Common language with an alien species
5:46 - Structure of language
7:18 - Roots of language in our brain
8:51 - Language and thought
9:44 - The limit of human cognition
16:48 - Neuralink
19:32 - Deepest property of language
22:13 - Limits of deep learning
28:01 - Good and evil
29:52 - Memorable experiences
33:29 - Mortality
34:23 - Meaning of life

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:

Isaac Asimov: Why Do People Connect with Science Fiction?

This is a clip of Isaac Asimov from 1975.
Full video:

This is a clip on the Lex Clips channel, which I mostly use to post video clips from the Artificial Intelligence podcast, though occasionally I post clips from other lectures by me or others. I hope you find these interesting, thought-provoking, and inspiring. If you do, please subscribe, click the bell icon, and share.

Lex Clips channel:


Lex Fridman channel:


Connect with me on social media:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:

Stephen Kotkin: Stalin, Putin, and the Nature of Power | Artificial Intelligence (AI) Podcast

Stephen Kotkin is a professor of history at Princeton University and one of the great historians of our time, specializing in Russian and Soviet history. He has written many books on Stalin and the Soviet Union, including the first two volumes of a three-volume work on Stalin; he is currently working on the third volume. This conversation is part of the Artificial Intelligence podcast.

This episode is presented by Cash App. Download it & use code LexPodcast:
Cash App (App Store):
Cash App (Google Play):

INFO:
Podcast website:

Apple Podcasts:

Spotify:

RSS:

Full episodes playlist:

Clips playlist:


EPISODE LINKS:
Stalin (book, vol 1):
Stalin (book, vol 2):

OUTLINE:
0:00 - Introduction
3:10 - Do all human beings crave power?
11:29 - Russian people and authoritarian power
15:06 - Putin and the Russian people
23:23 - Corruption in Russia
31:30 - Russia's future
41:07 - Individuals and institutions
44:42 - Stalin's rise to power
1:05:20 - What is the ideal political system?
1:21:10 - Questions for Putin
1:29:41 - Questions for Stalin
1:33:25 - Will there always be evil in the world?

CONNECT:
- Subscribe to this YouTube channel
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium:
- Support on Patreon:
