Revision as of 12:37, 18 October 2018
Differentiable Plasticity
Presented by
1. Ganapathi Subramanian, Sriram
Motivation
1. Neural networks, which form the basis of modern artificial intelligence techniques, are static in architecture. Once a neural network is trained, its architectural components (e.g., network connections) cannot change, so learning effectively stops at the training step. If a different task needs to be considered, the agent must be trained again from scratch.
2. Plasticity is a characteristic of biological systems, such as humans, in which network connections change over time. It enables lifelong learning and lets organisms adapt to dynamic changes in the environment with great sample efficiency. This is called synaptic plasticity and is based on Hebb's rule: if a neuron repeatedly takes part in making another neuron fire, the connection between them is strengthened.
3. Differentiable plasticity is a step in this direction. The behavior of the plastic connections is itself trained with gradient descent, so that previously trained networks can adapt to changing conditions.
Example: With current state-of-the-art supervised learning, we can train a neural network to recognize the specific letters it has seen during training. With lifelong learning, the agent can learn any alphabet, including ones it was never exposed to during training.
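Hebb's rule above can be illustrated with a toy computation. This is a minimal sketch, not code from the paper; the learning rate and activity values are made up for illustration.

```python
# Hebb's rule sketch: a connection strengthens when the pre- and
# post-synaptic neurons are active together ("neurons that fire
# together, wire together"). All values here are illustrative.
eta = 0.5                      # learning rate (arbitrary choice)
w = 0.0                        # initial connection strength

for _ in range(5):
    pre, post = 1.0, 1.0       # both neurons fire on this step
    w += eta * pre * post      # co-activation strengthens the connection

print(w)  # 2.5 after five co-activations
```

Repeated co-activation steadily increases the connection weight, which is the qualitative behavior the biological analogy refers to.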
Objectives
The paper has the following objectives:
1. To tackle the problem of meta-learning (learning to learn).
2. To design neural networks with plastic connections, with special emphasis on making the plasticity itself trainable by gradient descent.
3. To use backpropagation to optimize both the base weights and the amount of plasticity in each connection.
4. To demonstrate the performance of such networks on three complex and different domains: complex pattern memorization, one-shot classification, and reinforcement learning.
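Objectives 2 and 3 can be made concrete with a small NumPy sketch of a plastic connection. Here each connection has a fixed weight and a plasticity coefficient (both of which would be trained by backpropagation), plus a Hebbian trace that evolves during the network's lifetime. The function and variable names, the tanh nonlinearity, and the fixed trace-decay rate `eta` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def plastic_layer_forward(x, w, alpha, hebb, eta=0.1):
    """One forward step through a plastic layer (illustrative sketch).

    Effective weight = fixed part w + plastic part alpha * hebb.
    w and alpha would be optimized by gradient descent; hebb changes
    during the network's lifetime via a Hebbian running average.
    """
    y = np.tanh(x @ (w + alpha * hebb))            # plastic connection
    # Hebbian trace: decaying average of input/output co-activation
    hebb = (1 - eta) * hebb + eta * np.outer(x, y)
    return y, hebb

# Usage: one lifetime step of a 4-input, 3-output plastic layer.
rng = np.random.default_rng(0)
n_in, n_out = 4, 3
w = rng.normal(scale=0.1, size=(n_in, n_out))      # fixed weights (learned)
alpha = rng.normal(scale=0.1, size=(n_in, n_out))  # plasticity coefficients (learned)
hebb = np.zeros((n_in, n_out))                     # Hebbian trace (lifetime state)
x = rng.normal(size=n_in)
y, hebb = plastic_layer_forward(x, w, alpha, hebb)
```

Because the forward pass is a differentiable function of `w` and `alpha`, gradients can flow through it, which is what allows backpropagation to tune how plastic each connection is.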