Notes and Comments on Yann LeCun Interview with Lex Fridman

Vilson Vieira
Apr 9, 2021

While watching an interview with Yann LeCun, I took some notes and wrote down some comments of my own, together with useful references to works cited by LeCun. Other people have already shared valuable notes; here are my free interpretations. They can run wild sometimes and are probably wrong or open to debate, so please feel free to let me know if I misunderstood some concepts.

Do not believe in well-defined logic/symbolic rules, but in gradient-based learning.

Logical representations and graphs are too rigid, too brittle/fragile.

The main issue is knowledge acquisition: how do we reduce a bunch of data into a graph of this type? It relies on experts to create those rules.

Representing knowledge with symbols and manipulating symbols with logic is incompatible with learning.

We already faced those problems at Extend AI (and previously at The Grid) while applying Constraint Programming to tasks like 2D layout of websites or floor plan design. It's really hard, time-consuming, and impractical to depend on humans to define formal rules or constraints. However, there seems to be room to train ML models to learn constraints from data (as in autonomous Probabilistic Programming Languages).

Geoffrey Hinton advocates replacing symbols with vectors of probabilities and logic with continuous functions, as a way to make reasoning compatible with learning.

Léon Bottou: a reasoning system should be able to manipulate objects in a space and produce objects in the same space, so it can store them again in the same memory space: https://arxiv.org/pdf/1102.1808.pdf

This brings up the notion of self-programmable models: if we force ourselves to build on Neural Network structures, Connectionism can probably embrace/simulate Symbolism. Maybe this is where the notion of Software 2.0 can really shine and emerge from.

ML is the science of sloppiness :-)

Neural Networks are capable of reasoning; the intuition is based on the fact that we already have one that works: our brain ;-)

We need a memory “device” to store a large amount of factual memories for a certain period of time.

Three types of memory:

1. In the visual cortex: stores state for only about 20 seconds and then it vanishes.

2. In the hippocampus: short-term memory, e.g. a partial map of the current place, or the last sentences somebody said minutes ago.

3. In the synapses: long-term memory.

We’re interested in the hippocampus as a memory-like device.

Deep Nets with some kind of memory mechanism (LSTM, Attention, Transformer) seem to be trying to recreate the hippocampus at some level. In this way a model can get data from memory, crunch on it, store it back, and repeat. Chains of facts can be recursive, so we need to iterate through them, updating knowledge/memory.
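To make that read/crunch/store loop concrete, here is a minimal sketch of an attention-based key-value memory. This is my own toy illustration, not anything described in the interview; the sizes and the overwrite-one-slot write rule are arbitrary choices.

```python
import torch
import torch.nn.functional as F

d = 16                        # feature size, arbitrary for this toy
memory = torch.randn(8, d)    # 8 memory slots ("facts")

def read(query, memory):
    # soft attention: compare the query against every slot and blend them
    weights = F.softmax(memory @ query / d ** 0.5, dim=0)
    return weights @ memory

def write(value, memory, slot):
    # naive write: overwrite one slot (real models blend old and new content)
    memory = memory.clone()
    memory[slot] = value
    return memory

state = torch.randn(d)                # current controller state
for step in range(3):                 # read, crunch, store, repeat
    fact = read(state, memory)
    state = torch.tanh(state + fact)  # "crunch" the retrieved fact
    memory = write(state, memory, slot=step)
```

LSTMs, Attention layers, and Transformers each implement the read and write steps differently, but the loop has the same shape.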

Let’s take a simulation, a game… if we can simulate almost everything in this closed system, we have solved the data generation problem, but not the “blurry” predictions problem.

We can generate data, OK; however, some unpredictable situation/event can happen, and the model’s resulting prediction will be blurry (i.e. a blend of all the possible states it could predict, hence the blur). Something is missing: how do we deal with those blurry predictions?
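Here is a toy illustration of where the blur comes from (my own example, not from the interview): if the future is genuinely multimodal and we train with mean squared error, the best single prediction is the average of the possible outcomes, which is exactly the blurry in-between result.

```python
import numpy as np

rng = np.random.default_rng(0)
# two equally likely futures: the object ends up near -1 or near +1
futures = np.where(rng.random(10_000) < 0.5, -1.0, 1.0)
futures += 0.05 * rng.standard_normal(10_000)

# an MSE-trained predictor converges to the conditional mean of the targets
best_mse_prediction = futures.mean()
print(best_mse_prediction)   # ~0.0: neither real outcome, just the blurry average
```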

Hypothesis: model-based reinforcement learning. We can have some predictive models trained on more general (or specialized?) contexts, like simple physics, so we can “ask” those models for help to un-blur the prediction.

Sounds similar to what Prof. Josh Tenenbaum says about having a simple physics simulation engine mimicking reality, so we can understand in advance how some events should play out, or what happens if we perform some actions.

However, it's hard to know which, and how many, those “primitive” models are.

RL should not just be stupid, like spending 200 human-years of gameplay to train an agent to beat StarCraft. Imagine how we learn to drive a car. We already learned (as a predictive model, when we were 8–9 months old) that things can fall off tables, so when we were learning to drive we didn’t throw ourselves off cliffs: we already knew that the car would fall off the cliff the same way blocks fell from tables when we were babies.

Emmanuel Dupoux: observations of babies’ concept acquisition over their first months of life. Source: https://www.slideshare.net/rouyunpan/deep-learning-hardware-past-present-future

We need RL systems with predictive models like these, so we can avoid doing stupid things.

For driving a car, we still need imitation learning because we’re learning from an instructor, but most of the time it’s all about learning a world model (mechanical physics, for instance). And such predictive models can be transferred from scene to scene because stupid things happen everywhere.

So the main question is: how do we learn models of the world?! That’s what self-supervised learning is trying to help with.

A basic Self-supervised Learning setup in that context could take some amount of labeled and unlabeled data, train a supervised model on the labeled data, and run it on both labeled and unlabeled samples. If the average loss drops, the new samples can become part of the labeled data.
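Here is a rough sketch of that loop as I read it (which is closer to classic self-training/pseudo-labeling than to self-supervised learning in the strict sense). The dataset shapes, the confidence threshold standing in for “the average loss drops,” and the choice of classifier are all assumptions of mine.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    """Grow the labeled set with samples the current model is confident about."""
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        probs = model.predict_proba(X_unlab)
        confident = probs.max(axis=1) >= threshold   # stand-in for "loss drops"
        if not confident.any():
            break
        # promote confident unlabeled samples (with their predicted labels)
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, model.classes_[probs[confident].argmax(axis=1)]])
        X_unlab = X_unlab[~confident]
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    return model
```

The false-positive worry in the next note is exactly the failure mode of this loop: a confidently wrong pseudo-label gets promoted and then reinforced.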

That raises the question of how to deal with false positives, since they could be taken as valid training data; maybe that’s why self-supervised models are commonly applied to sequential data in “fill in the blank” problems.
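For completeness, a minimal “fill in the blank” setup (again my own toy, with an arbitrary vocabulary size and a deliberately crude context summary): mask one token of a sequence and train the model to recover it, so the supervision comes from the data itself rather than from human labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d = 100, 32
MASK = 0                                  # reserve token id 0 as the blank
embed = nn.Embedding(vocab, d)
head = nn.Linear(d, vocab)
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()))

tokens = torch.randint(1, vocab, (16,))   # a toy "sentence" of 16 tokens
pos = torch.randint(0, 16, (1,)).item()   # pick one position to blank out
target = tokens[pos].unsqueeze(0)         # ground truth comes from the data
corrupted = tokens.clone()
corrupted[pos] = MASK

hidden = embed(corrupted).mean(dim=0)     # crude summary of the surrounding context
loss = F.cross_entropy(head(hidden).unsqueeze(0), target)
loss.backward()
opt.step()
```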

An Autonomous Intelligent System (with possibly human-level intelligence) should have three pieces:

1. World Model: updatable, learnable in a self-supervised way.

2. Objective Function: based on the basal ganglia (level of contentment).

3. Predictive System / Policy Maker: observe the world at time t; ask what happens to the world if I take this action; collect many answers represented with some level of uncertainty (distributions?); run the model for several situations/hypotheses; minimize/maximize the objective function (by energy minimization?). The goal is to answer: what sequence of actions do I need to take to increase the contentment level? (See the sketch after this list.)
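As a minimal sketch of how those three pieces could interact (purely illustrative: world_model and objective below are placeholders standing in for the learned components), imagine candidate action sequences with the World Model, score them with the Objective Function, and act on the best one:

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(state, action):
    # placeholder dynamics: in the real system this would be learned, self-supervised
    return state + action

def objective(state):
    # placeholder "contentment": higher when the state is close to the origin
    return -np.sum(state ** 2)

def plan(state, horizon=5, candidates=64):
    """Random-shooting search: imagine rollouts, keep the best-scoring one."""
    best_score, best_actions = -np.inf, None
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, size=(horizon, state.shape[0]))
        s, score = state, 0.0
        for a in actions:              # "what happens if I take this action?"
            s = world_model(s, a)
            score += objective(s)
        if score > best_score:
            best_score, best_actions = score, actions
    return best_actions[0]             # take the first action, then re-plan

first_action = plan(np.array([3.0, -2.0]))
```

This is just random-shooting model-predictive control; representing uncertainty over the answers and minimizing an energy instead of a scalar score would be the harder, open parts.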

To be really autonomous, do such world models need to be able to program themselves? It’s interesting that emotions like fear and happiness become protagonists in guiding actions.

Question to a human-level intelligent system: what is the wind? It needs to understand observations of the world, adapt its model to a corrected one, and show some level of causal reasoning.

No need for embodiment, but grounding is needed.

It seems one road to human-level intelligence would be a model-based Reinforcement Learning system running on a world simulation, with its models trained via Self-supervised Learning.

We're only scratching the surface of such models, but the journey ahead is fascinating! Please follow me on Twitter where I share more about ML and AI in general!

