00:00:00 - Introduction
00:00:25 - Neuroscience inspired early works on Machine Learning & AI
00:01:47 - Supervised Learning
00:03:18 - Deep Learning = The Entire Machine is Trainable
00:04:06 - Convolutional Network Architecture [LeCun et al. NIPS 1989]
00:05:56 - Convolutional Network [vintage 1990]
00:07:51 - Deep Convolutional Nets for Object Recognition
00:09:39 - Deep Learning = Learning Hierarchical Representations
00:10:31 - Computing Gradients by Back-Propagation
00:10:57 - Large-Scale Deep Neural Networks: the reality
00:11:14 - Supervised Convolutional Nets
00:11:44 - Very Deep ConvNet Architectures
00:12:14 - Hierarchical Structure in the Visual Cortex
00:12:54 - Image captioning, Semantic Segmentation with ConvNets
00:13:38 - Driving Cars with Convolutional Nets
00:18:02 - DeepMask: ConvNet Locates and Recognizes Objects
00:18:36 - DeepMask++ Proposals
00:19:28 - Memory-Augmented Networks
00:20:22 - Augmenting Neural Nets with a Memory Module
00:20:53 - Memory Network [Weston, Chopra, Bordes 2014]
00:22:04 - End-to-End Memory Network on bAbI tasks [Weston 2015]
00:22:51 - Obstacles to AI
00:23:36 - Obstacles to Progress in AI
00:24:43 - What is Common Sense?
00:27:04 - How Much Information Does the Machine Need to Predict?
00:29:00 - The Architecture Of an Intelligent System
00:29:02 - AI System: Learning Agent + Immutable Objective
00:29:52 - AI System: Predicting + Planning = Reasoning
00:30:35 - The Hard Part: Prediction Under Uncertainty
00:31:56 - Energy-Based Unsupervised Learning
00:32:58 - DCGAN: “reverse” ConvNet maps random vectors to images
00:33:33 - Navigating the Manifold
00:33:50 - Face Algebra [in DCGAN space]
00:34:36 - Energy-Based GAN [Zhao, Mathieu, LeCun, arXiv:1609.03126]
00:35:31 - Multi-Scale ConvNet for Video Prediction
00:37:29 - Let's be inspired by nature, but not too much
© Académie des sciences - All rights reserved